http://arxiv.org/abs/2407.03172v1
20240703144718
IMC 2024 Methods & Solutions Review
[ "Shyam Gupta", "Dhanisha Sharma", "Songling Huang" ]
cs.CV
[ "cs.CV", "cs.AI", "stat.AP" ]
IMC 2024 Methods & Solutions Review 1st Shyam Gupta Technische Universität Dortmund Master Student, Department of Statistics Dortmund, Germany shyam.gupta@tu-dortmund.de 2nd Dhanisha Sharma B.Sc Physics (Honors) DAVV (2024) Indore, India dhanisha522292@gmail.com 3rd Songling Huang College of Big Data Yunnan Agricultural University Yunnan, China hslingskr@163.com July 8, 2024 § ABSTRACT For the past three years, Kaggle has been hosting the Image Matching Challenge, which focuses on solving a 3D image reconstruction problem using a collection of 2D images. Each year, this competition fosters the development of innovative and effective methodologies by its participants. In this paper, we introduce an advanced ensemble technique that we developed, achieving a score of 0.153449 on the private leaderboard and securing the 160th position out of over 1,000 participants. Additionally, we conduct a comprehensive review of existing methods and techniques employed by top-performing teams in the competition. Our solution, alongside the insights gathered from other leading approaches, contributes to the ongoing advancement in the field of 3D image reconstruction. This research provides valuable knowledge for future participants and researchers aiming to excel in similar image matching and reconstruction challenges. 3D scene reconstruction, ALIKED, descriptors, SIFT, LightGlue, keypoints, COLMAP, image pairs, SfM, attention. § INTRODUCTION The process of reconstructing 3D models from diverse image collections, known as Structure from Motion (SfM)<cit.>, is critical in Computer Vision but remains challenging, especially with images captured under varied conditions such as different viewpoints, lighting, and occlusions. This competition <cit.> addresses these complexities across six distinct categories: * Phototourism and historical preservation: Includes diverse viewpoints, sensor variations, and challenges posed by ancient historical sites. * Night vs. day and temporal changes: Combines images from different times, lighting conditions, and weather, testing algorithms against temporal variations. * Aerial and mixed aerial-ground: Involves images from drones with arbitrary orientations, alongside ground-level shots. * Repeated structures: Focuses on disambiguating perspectives of symmetrical objects. * Natural environments: Challenges include irregular structures like trees and foliage. * Transparencies and reflections: Deals with objects like glassware that lack texture and create reflections, presenting unique computational hurdles. The competition aims to advance understanding in Computer Vision by bridging traditional image-matching techniques with modern machine learning approaches. By tackling these varied categories, participants contributed to evolving solutions for robust 3D reconstruction from real-world image datasets. § EXISTING METHODS & TECHNIQUES Every solution approaches the problem in its own way, but most solutions follow a common flow of data. Following this pipeline alone will not, by itself, result in a top rank. Hence, below we describe the methods top solutions used to score higher.
In further sections, we discuss how kagglers made a mix'n'match of these techniques to get the best performance. § MATCHFORMER <CIT.> MatchFormer(2022) was a novel approach to matching multiple views of a scene, crucial for tasks like Structure-from-Motion (SfM), Simultaneous Localization and Mapping (SLAM), relative pose estimation, and visual localization. Traditional methods using detectors and hand-crafted local features are computationally heavy. Recent deep learning methods use Convolutional Neural Networks (CNNs) for feature extraction but are often inefficient due to overburdened decoders. MatchFormer proposed a new pipeline called extract-and-match, which uses a pure transformer model to perform feature extraction and matching simultaneously. This approach is more intuitive and efficient compared to previous methods. MatchFormer introduces a hierarchical transformer with a matching-aware encoder that uses interleaved self- and cross-attention mechanisms. This design improves computational efficiency and robustness, especially in low-texture scenes. § DINOV2<CIT.> DINOv2 (Distillation of Self-supervised Vision Transformers) enhances segmentation, keypoint detection, and extraction, making it valuable for image matching and 3D reconstruction. DINOv2 leverages self-supervised learning to train vision transformers without labeled data, enabling the model to learn robust and detailed image representations. For segmentation, DINOv2 uses its learned feature maps to identify and delineate different regions within an image accurately. This segmentation capability is crucial in breaking down complex scenes into manageable parts, aiding in precise object recognition and separation, which is foundational for subsequent processing steps. In keypoint detection and extraction, DINOv2's ability to generate high-quality feature descriptors ensures that keypoints are distinctive and repeatable. These descriptors are pivotal in matching corresponding points across different images, a core requirement for image matching. The robustness of these keypoints helps in achieving higher accuracy in image alignment and feature matching, which directly impacts the quality of 3D reconstruction. By integrating DINOv2, image matching algorithms benefit from enhanced feature extraction, leading to more reliable keypoint matches. This improved matching process is essential for constructing accurate 3D models, as it ensures that the spatial relationships between points are preserved across multiple views, resulting in more detailed and accurate 3D reconstructions. § ALIKED<CIT.> Efficiently and robustly extracting image keypoints and descriptors is essential for various visual measurement applications, including simultaneous localization and mapping (SLAM)<cit.>, computational photography, and visual place recognition. Traditional methods relied on hand-crafted algorithms, which were not very efficient or robust. Modern approaches use deep neural networks (DNNs) for better performance. Keypoints and Descriptors * Keypoints: Distinctive points in an image that are used for tasks like image matching and 3D reconstruction. * Descriptors: Descriptions of the keypoints that allow different keypoints to be compared and matched across images. Early DNN methods extracted descriptors at predefined keypoints, but newer methods use a single network to extract both keypoints and descriptors simultaneously. These newer methods generate a score map and a descriptor map from which keypoints and descriptors are extracted. 
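To make this step concrete, the sketch below shows one generic way to turn a score map and a dense descriptor map into sparse keypoints and descriptors: non-maximum suppression via max pooling, top-k selection, and descriptor sampling with L2 normalization. This is a minimal PyTorch illustration of the general idea, not the exact procedure of ALIKED or any particular detector; the tensor shapes, thresholds, and function names are our own assumptions.

```python
import torch
import torch.nn.functional as F

def extract_keypoints(score_map: torch.Tensor,
                      descriptor_map: torch.Tensor,
                      top_k: int = 2048,
                      nms_radius: int = 2):
    """Turn a (H, W) score map and a (C, H, W) descriptor map into sparse
    keypoints and L2-normalized descriptors (illustrative only)."""
    h, w = score_map.shape
    # Non-maximum suppression: keep only scores that are local maxima.
    pooled = F.max_pool2d(score_map[None, None],
                          kernel_size=2 * nms_radius + 1,
                          stride=1, padding=nms_radius)[0, 0]
    scores = torch.where(score_map == pooled, score_map,
                         torch.zeros_like(score_map))
    # Take the top-k highest-scoring locations.
    flat = scores.flatten()
    k = min(top_k, flat.numel())
    values, indices = flat.topk(k)
    ys, xs = indices // w, indices % w
    keypoints = torch.stack([xs, ys], dim=-1).float()   # (k, 2) in (x, y)
    # Sample the descriptor at each keypoint and normalize it.
    descriptors = descriptor_map[:, ys, xs].t()         # (k, C)
    descriptors = F.normalize(descriptors, dim=-1)
    return keypoints, descriptors, values

# Toy usage with random maps standing in for a detector's outputs.
score_map = torch.rand(480, 640)
descriptor_map = torch.rand(128, 480, 640)
kpts, desc, conf = extract_keypoints(score_map, descriptor_map, top_k=1024)
print(kpts.shape, desc.shape)  # torch.Size([1024, 2]) torch.Size([1024, 128])
```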
Challenges with Existing Methods: Existing methods use fixed-size convolutions that lack geometric invariance, which is crucial for accurate image matching. This problem is partially solved by estimating the scale and orientation of descriptors. However, these methods can only handle affine transformations, not more complex geometric transformations. Deformable Convolution Networks (DCNs) can model any geometric transformation by adjusting each pixel's position in the convolution, enhancing descriptor representation. However, DCNs are computationally expensive. To improve efficiency, the ALIKED paper introduces the Sparse Deformable Descriptor Head (SDDH): * SDDH: Extracts deformable descriptors only at detected keypoints instead of the entire image, significantly reducing computational costs. It uses adjustable positions (offsets) for better flexibility and efficiency in modeling descriptors. * ALIKED: A network designed for visual measurement tasks that includes a solution to adapt the neural reprojection error (NRE) loss for sparse descriptors. This adaptation minimizes computational overhead and saves memory during training. § DENSE MATCHERS AND SPARSE KEYPOINT MATCHERS Dense matchers aim to find correspondences for every pixel or a dense grid of pixels in an image. This comprehensive approach is used in tasks where fine-grained details are important, such as optical flow estimation, depth estimation, and image stitching. * Full Coverage: Consider the entire image, ensuring correspondences for almost every pixel. * High Computational Cost: Require significant computational resources and memory. * Applications: Motion tracking, 3D reconstruction, dense image alignment. Sparse keypoint matchers, in contrast, detect and match distinct and repeatable keypoints or features in images. These keypoints are typically corners, edges, or blobs identifiable across different views. * Selective Coverage: Only a subset of points (keypoints) in the image is considered. * Lower Computational Cost: Faster and less computationally demanding than dense matchers. * Applications: Object recognition, image retrieval, feature-based 3D reconstruction. Comparison: * Coverage: Dense matchers cover the entire image, while sparse keypoint matchers focus on specific, informative points. We leverage sparse matchers for this competition, since SfM concentrates on a specific object and rejects noise. * Computational Efficiency: Sparse keypoint matchers are computationally more efficient, which is a clear advantage given the 9-hour runtime limit on Kaggle. * Applications: Dense matchers are suitable for detailed correspondences, while sparse matchers are ideal for tasks relying on robust and distinctive features. Below we discuss some keypoint matching algorithms, especially LightGlue<cit.> and OmniGlue<cit.>, since they proved to be the most robust and efficient. § LIGHTGLUE<CIT.> LightGlue is a deep network designed to efficiently and accurately match sparse points between two images. It improves upon SuperGlue by addressing computational limitations while retaining high performance. LightGlue is adaptive, making it faster for easy-to-match image pairs and robust for challenging ones. It is more efficient, easier to train, and suitable for low-latency applications like SLAM<cit.> and large-scale mapping. § COMPLEX DENSE KEYPOINT METHODS §.§ LoFTR (Local Feature TRansformer) LoFTR provides dense correspondences between images without needing descriptors.
It uses a transformer-based architecture to establish correspondences directly from image patches. * Transformer Layers: Utilizes multi-head self-attention to relate features across the entire image. * High Computational Cost: Despite its accuracy, the dense matching process is computationally intensive. §.§ SuperGlue<cit.> SuperGlue matches keypoints by considering the entire context of both images simultaneously using a graph neural network with attention mechanisms. * Graph Neural Network: Models relationships between keypoints across images. * Transformer-Based: Uses self and cross-attention to enhance matching robustness. * Training and Computation: Requires significant computational resources for training and inference. §.§ Why LightGlue is Preferred §.§ Efficiency * Adaptive Matching: LightGlue adjusts its computational effort based on the difficulty of the image pair, making it faster for easy matches. * Early Discarding: Discards non-matchable points early, reducing unnecessary computations. While LOFTR (Learning to Optimize Frameworks)<cit.> and Superglue have made significant strides in multimodal research, they do possess certain drawbacks when compared to LightGlue. In summary, while LOFTR and Superglue have advanced multimodal research, their drawbacks in terms of complexity, computational requirements, and generalization challenges highlight the potential advantages of LightGlue's approach in certain applications. § COLMAP <CIT.><CIT.> COLMAP (Construction and Localization MAPping) is a versatile and widely-used photogrammetry software designed for 3D reconstruction and structure-from-motion (SfM) tasks. It provides a suite of tools for processing images to create 3D models by detecting, describing, and matching keypoints across images, and then using these matches to estimate the 3D structure and camera positions. Key features of COLMAP include: * Feature Extraction and Matching: Uses algorithms like SIFT for detecting and matching keypoints across multiple images. * Structure-from-Motion (SfM): Estimates camera poses and reconstructs sparse 3D points. * Multi-View Stereo (MVS): Generates dense 3D models by computing depth maps and fusing them into a consistent 3D reconstruction. * Scalability: Efficiently handles large datasets with thousands of images. * User Interface: Provides a graphical interface for easy interaction and visualization, along with command-line tools for automation. COLMAP is favored for its robustness, accuracy, and ease of use, making it suitable for applications in archaeology, architecture, cultural heritage preservation, and more. § OMNIGLUE<CIT.> OmniGlue addresses the generalization limitations of current learnable image matchers, which typically excel in specific domains with abundant training data but falter in diverse, unseen domains. Traditional methods like SIFT, despite being hand-crafted, often outperform these advanced models in unfamiliar contexts due to their domain-agnostic nature. OmniGlue introduces two key innovations to enhance generalizability: * Foundation Model Guidance: Utilizes the broad visual knowledge of large pre-trained models like DINOv2<cit.> to guide the matching process, enhancing performance in domains not covered during training. * Keypoint-Position Guided Attention: Disentangles positional encoding from matching descriptors, avoiding over-reliance on geometric priors from training data, thereby improving cross-domain performance. 
Experimental results demonstrate OmniGlue's superior generalization across various domains, including synthetic and real images, scene-level to object-centric, and aerial datasets. Key contributions include: * Enhanced pose estimation accuracy by leveraging foundation model guidance. * Improved cross-domain transferability through innovative positional encoding strategies. * Significant performance gains across diverse datasets, showcasing OmniGlue's robust generalization capabilities. * Ease of adaptation to new domains with minimal fine-tuning data. §.§ Comparison with SuperGlue, LightGlue, and LOFTR SuperGlue<cit.> is a prominent learnable image matcher that uses attention mechanisms to perform intra- and inter-image keypoint feature propagation, typically leveraging SuperPoint for keypoint detection. While it demonstrates high performance in specific domains, its generalization to unseen domains is limited due to entanglement of local descriptors with positional information, leading to over-specialization. LightGlue<cit.> emphasizes lightweight and efficient multimodal fusion, making it suitable for resource-constrained environments or real-time applications. By focusing on simplicity and efficiency, it addresses computational and data requirement issues but may not achieve the same level of performance on diverse datasets as more complex models like OmniGlue. LOFTR<cit.> (Learning-based Optical Flow with Transformers) employs a coarse-to-fine correspondence prediction paradigm, excelling in dense image matching. However, like other dense matchers, LOFTR struggles with generalization across diverse domains due to its heavy reliance on domain-specific data and computational intensity. OmniGlue<cit.>, compared to SuperGlue, LightGlue, and LOFTR, stands out in its generalization capability. By leveraging foundation model guidance and novel keypoint-position attention mechanisms, OmniGlue significantly improves performance in unseen domains while maintaining high accuracy in the training domain. This makes it a more versatile and robust solution for a wide range of image matching tasks, addressing the limitations observed in its predecessors. § ABBREVIATIONS AND ACRONYMS * Correspondences: Points in one image that match points in another image, allowing the images to be aligned. * Sparse Interest Points: Keypoints in an image that are distinctive and used for matching across images. * High-Dimensional Representations: Numerical descriptions of keypoints that capture their local visual appearance. * Robustness: The ability to handle variations in viewpoint, lighting, and other changes. * Uniqueness: The ability to discriminate between different points to avoid false matches. * Transformer Model: A type of neural network architecture that uses self-attention mechanisms to process input data. * Pareto-Optimal: A state where no criterion (like efficiency or accuracy) can be improved without worsening another. * Simultaneous Localization and Mapping (SLAM): A technique used in robotics and computer vision to create a map of an environment while simultaneously keeping track of the device’s location within that environment. * Self-attention: A mechanism in neural networks where each element of a sequence pays attention to other elements to understand its context better. * Cross-attention: A mechanism where elements of one sequence pay attention to elements of another sequence, useful in tasks like machine translation and feature matching. 
* Positional Patch Embedding (PosPE): A method to incorporate positional information into patches of an image to improve feature detection. * Geometric Invariance: The ability of a method to handle various transformations (like rotation, scaling) in the input data. * Deformable Convolution Network (DCN): A type of neural network that can adjust the position of each pixel in the convolution, allowing it to model more complex transformations. * Neural Reprojection Error (NRE) Loss: A loss function used to measure the difference between predicted and actual keypoint locations in image matching tasks. * Affine Transformations: Transformations that include scaling, rotation, and translation. * Specularities: Bright spots of light that appear on shiny surfaces when they reflect light sources. These can create difficulties in image matching and 3D reconstruction. § OUR SOLUTION Our solution implements a pipeline of image feature extraction, matching, and 3D reconstruction, integrating a variety of advanced image processing and 3D reconstruction tools. It progressed in three steps as follows: * The get_keypoints method uses a deep learning model (such as LoFTR) to extract keypoints from the images. Then, the matches_merger and keypoints_merger methods merge the keypoints from different images into a unified dataset to ensure the uniqueness and consistency of the keypoints. * The wrapper_keypoints and reconstruct_from_db methods use COLMAP to perform 3D reconstruction from the keypoints and matching data in the database to obtain the camera poses. * Finally, the create_submission method generates a submission file and formats the output results for the competition. The entire pipeline achieves efficient and accurate processing through precise feature point extraction, reliable match filtering, and efficient 3D reconstruction, making it suitable for application scenarios such as autonomous driving, robot navigation, and virtual reality that require sophisticated image processing and 3D reconstruction. §.§ What made the difference? Here are a few points we missed. In the winners' solutions, one can observe that they detected these edge cases and solved them efficiently with novel methods, which earned them better scores. * Not correcting image orientation played a significant role, since the algorithms we used are not designed for affine transformations and are neither scale nor rotation invariant. * We did not handle transparent and low-light images. * We should have used an ensemble of ALIKED+LightGlue<cit.><cit.> for keypoint detection and feature extraction. Had we made these changes, we could have scored higher. § TOP SOLUTIONS We have summarized most of the terms and recent research one should know for a successful submission in the competition. Top medalists on Kaggle mostly used permutations and combinations of these techniques to get the best scores. § 1ST PLACE SOLUTION The final solution combined 3D image reconstruction (I3DR) with COLMAP<cit.> for non-transparent scenes and direct image pose estimation (DIP) for transparent scenes. They used an ensemble of ALIKED extractors and LightGlue matchers, cross-validation, multi-GPU acceleration, and a new cropping method. Integration of OmniGlue enhanced match accuracy, and multiple reconstructions were merged for robust results.
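For reference, the sketch below shows how such an ALIKED extractor and LightGlue matcher are typically wired together for a single image pair, using the authors' open-source lightglue package (https://github.com/cvg/LightGlue). It is a minimal, hedged example rather than the winning team's actual code: the image paths are placeholders, the keypoint budget is arbitrary, and in a full pipeline the matched coordinates would still need to be written into a COLMAP database for reconstruction.

```python
# Minimal ALIKED + LightGlue matching sketch using the open-source
# `lightglue` package (https://github.com/cvg/LightGlue); paths are placeholders.
import torch
from lightglue import ALIKED, LightGlue
from lightglue.utils import load_image, rbd

device = "cuda" if torch.cuda.is_available() else "cpu"
extractor = ALIKED(max_num_keypoints=4096).eval().to(device)
matcher = LightGlue(features="aliked").eval().to(device)

image0 = load_image("scene/img_0001.jpg").to(device)  # placeholder path
image1 = load_image("scene/img_0002.jpg").to(device)  # placeholder path

with torch.no_grad():
    feats0 = extractor.extract(image0)          # keypoints + descriptors
    feats1 = extractor.extract(image1)
    matches01 = matcher({"image0": feats0, "image1": feats1})

# Remove the batch dimension and gather the matched keypoint coordinates.
feats0, feats1, matches01 = [rbd(x) for x in (feats0, feats1, matches01)]
matches = matches01["matches"]                  # (K, 2) indices into each keypoint set
points0 = feats0["keypoints"][matches[:, 0]]    # (K, 2) pixel coordinates in image 0
points1 = feats1["keypoints"][matches[:, 1]]    # (K, 2) pixel coordinates in image 1
print(f"{len(matches)} matches between the pair")
```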
§.§ Solving Transparent Surface Keypoint Matching The solution started with performing orientation correction<cit.>. For detecting keypoints they used ALIKED + LightGlue, However faced with a problem of keypoint detection on transparent surfaces.This was a problem which was faced by most of the kagglers. These top solutions discuss and address such problems in detail. The initial SfM pipeline with COLMAP did not work with transparent scenes. To address this, they experimented with different strategies, hypothesizing that direct pose estimation might help compute the rotation matrix. They assumed cameras were positioned close to the object, capturing it from all sides. §.§ Approach #1 They placed cameras in a circle around the object and sorted images using several methods: * Optical Flow: Calculated magnitude for each image pair and assigned a weight equal to the standard deviation of the magnitude. * Pixel-level Difference: Simple grayscale difference with weights based on the difference value. * SSIM Score: Calculated SSIM index for each pair, assigning a weight of 1 - SSIM. * ALIKED+LG Matching: Number of matches for each pair with weights as 1 / num_matches. They built a distance matrix from these weights and solved the ordering problem using the Travelling Salesman Problem (TSP). §.§ Approach #2 Estimated image order by matching images at high resolution (4096px). The number of matches was higher for consecutive images, using a kNN-like method for estimation. After this performing reconstruction using COLMAP to get rotation and translation matrix made them rank on TOP of the table, helping them score 0.28, resulting in a gold medal. § 2ND PLACE SOLUTION The second-place solution for IMC 2024 devised separate strategies for conventional and transparent scenes due to their distinct characteristics, which were identified through extensive trials. § PREPROCESSING 1. Rotation Detection: Utilized a rotation detection model to predict and correct image rotations. Retained original rotations if less than 10% of images were predicted as rotated, acknowledging the model's potential inaccuracies.<cit.> 2. Shared Camera Intrinsics: If image dimensions were identical, set all cameras to share the same internal parameters, occasionally improving results by 0.01. 3. Transparency Detection: Calculated the average difference between images to classify scenes as transparent or not, enabling separate handling for each type. § MODELING TECHNIQUES 1. Global Features: Developed a robust global feature descriptor combining point and patch features. Extracted point features (ALIKED) and patch features (DINO), establishing one-to-one correspondence based on spatial relationships. Used clustering and the VLAD algorithm to generate global descriptors. This method outperformed existing techniques (NetVLAD, AnyLoc, DINO, SALAD) on VPR-related datasets. 2. Local Features: Utilized three types of local features: Dedode v2 + Dual Softmax, DISK + LightGlue, and SIFT + Nearest Neighbor. The Dedode v2 detector produced rich and evenly distributed feature points. The G-upright descriptor and dual softmax matcher were selected for this purpose. 3. MST-Aided Coarse-to-Fine SfM Solution: * Constructed a similarity graph with images as vertices and similarities as edges. Computed the Minimum Spanning Tree (MST) to obtain an optimal data association, used for the initial SfM. This stage focused on removing incorrect associations and improving coarse-grained accuracy. 
* Utilized full data associations and the coarse model from Stage 1 to provide initial camera pose priors for geometric verification. This filtered out incorrect feature matches, maintaining coarse-grained advantages while improving fine-grained accuracy. 4. Post-Processing: Employed pixsfm to optimize the SfM model. Deployed an HLoc-based relocalization module to process unregistered images, typically resulting in a 0-0.01 score improvement. Handling Transparent Scenes: Explored various local features (ALIKED, DISK, LoFTR, DKMv3), but none were satisfactory. While many Kagglers struggled with this problem, by separately handling transparent and conventional scenes with tailored preprocessing and modeling techniques, this solution achieved significant accuracy improvements in both types of scenes, resulting in a 2nd-place gold medal. § 3RD PLACE SOLUTION The third-place solution for IMC 2024, VGGSfM, is a structure-from-motion method based on Visual Geometry Grounded Deep Structure From Motion, which was enhanced for this competition. The approach involved several key strategies: VGGSfM Across All Frames: Applied VGGSfM to all input frames, improving mAA by 4% compared to the baseline. However, due to GPU memory limitations on Kaggle servers, this method had to be integrated into the existing pycolmap pipeline. * Additional Tracks: Utilized VGGSfM's track predictor to estimate 2D matches and fed them into pycolmap. Nearest frames were identified using NetVLAD or DINOv2. This approach improved mAA by 3% on the evaluation set and the public leaderboard score from roughly 0.17 to 0.18. * SfM Track Refinement: Enhanced tracks predicted by pycolmap with VGGSfM's fine track predictor. After running the baseline with ALIKED+LightGlue, 3D points were refined and updated. A global bundle adjustment further optimized camera and point positions, improving mean reprojection error from 0.64 to 0.55 and the leaderboard score from roughly 0.18 to 0.20. * Relocating Missing Images: Used VGGSfM to identify and relocate missing images in the scene. This process aligned camera poses and improved the leaderboard score from roughly 0.20 to 0.21. Final Solution (which led to 3rd place): * Handled image rotation to maximize matches using a pre-trained model. * Used matches from both ALIKED+LightGlue and SP+LightGlue. * For transparent images, extracted an area of interest using DBSCAN on keypoints and ran keypoint detection on these areas again (a sketch of this cropping step appears at the end of the 4th place section below). § 4TH PLACE SOLUTION § HANDLING TRANSPARENT IMAGES §.§ Foreground Segmentation * Observation: Many keypoints in transparent scenes appeared in the background, disrupting camera pose estimation. * Solution: Employed the DINOv2 segmenter, identifying foreground objects as class 5 (“bottle” in VOC2012). This allowed high-precision segmentation by focusing on transparent objects. * Keypoint Detection: Detected keypoints at the original image scale (1024x1024 grid units) without resizing, which was efficient given the uniform image size. Keypoints were detected only in the foreground area using the segmented results from DINOv2. For the 4th place solution's accurate foreground separation, see the referenced figure <ref> for how the method is employed. §.§ Feature Matching * Strategy: Limited the search for matches to corresponding grids during keypoint detection, significantly reducing the search range and focusing on relevant areas, improving matching efficiency and accuracy.
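A recurring trick above is restricting keypoints to the foreground object before re-detecting: the 3rd place team clustered keypoints with DBSCAN to crop an area of interest, while the 4th place team relied on DINOv2 segmentation. The sketch below illustrates only the keypoint-clustering variant with scikit-learn and NumPy; the eps, min_samples, and padding values are illustrative assumptions, not the teams' actual settings.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def area_of_interest(keypoints: np.ndarray, image_shape, eps=50.0,
                     min_samples=10, pad=32):
    """Crop box around the largest keypoint cluster (illustrative values only).

    keypoints: (N, 2) array of (x, y) pixel coordinates.
    Returns (x_min, y_min, x_max, y_max) of the region to re-detect on.
    """
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(keypoints)
    valid = labels[labels >= 0]
    if valid.size == 0:                      # no dense cluster: keep full image
        return 0, 0, image_shape[1], image_shape[0]
    largest = np.bincount(valid).argmax()    # id of the largest cluster
    cluster = keypoints[labels == largest]
    x_min, y_min = cluster.min(axis=0) - pad
    x_max, y_max = cluster.max(axis=0) + pad
    h, w = image_shape[:2]
    return (int(max(x_min, 0)), int(max(y_min, 0)),
            int(min(x_max, w)), int(min(y_max, h)))

# Toy usage: most keypoints concentrated on an object near (800, 600).
rng = np.random.default_rng(0)
kpts = np.vstack([rng.normal((800, 600), 40, size=(300, 2)),
                  rng.uniform(0, (1600, 1200), size=(50, 2))])
print(area_of_interest(kpts, image_shape=(1200, 1600)))
```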
§ LEVERAGING ALIKED AND LIGHTGLUE §.§ Non-Transparent Scenes * Keypoint Detection: Generated keypoints for images rotated in 90-degree increments using ALIKED-n16, retaining keypoints for each rotation. * Matching Stage: Utilized LightGlue to evaluate matches. For each fixed set of keypoints, evaluated matches with rotated counterparts, adopting the combination with the highest number of matches. This ensured robust matching regardless of image rotation. §.§ Additional Techniques §.§ Exhaustive Matching * Instead of searching for pairs using embedding-based similarity measures like DINOv2 or EfficientNet, exhaustive matching for all image pairs was performed, mitigating the risk of missing matches due to low similarity scores. §.§ Using All Images * By incorporating images beyond those listed in the submission file (up to 100 images for validation), the solution increased the number of triangulated points, enhancing 3D reconstruction accuracy. § RESULTS * Baseline: Private LB=0.149, Public LB=0.136 * Add Transparent Trick: Private LB=0.184, Public LB=0.171 * Add Exhaustive Matching: Private LB=0.186, Public LB=0.176 * Add All Images: Private LB=0.197, Public LB=0.194 These combined approaches led to a robust solution, achieving 4th place in the competition. § 5TH PLACE SOLUTION § BUILD LOCAL EVALUATION DATASETS To manage the large dataset realistically, three subsets were created using random sampling strategies. Results across these subsets showed high correlation, validating the chosen subset for local cross-validation (CV) reporting. § BUILD A GENERAL SFM PIPELINE The pipeline was divided into three modules: §.§ Proposing Pair Candidates by Global Descriptors Utilized pretrained models like EVA-CLIP Base, ConvNeXt Base, and DINOv2 ViT Base to extract global features. Customized similarity thresholds based on scene diversity (e.g., Lizard versus Cylinder) using cosine similarity. §.§ Matching Pairs in the Candidate List Focused on lightweight detector-based methods for image matching due to their efficiency. Experimented with SIFT + NN and LightGlue combinations, noting significant performance boosts in CV but slight drawbacks in LB performance. §.§ Reconstruction with COLMAP Explored various approaches including single-camera usage and manual initial pair settings, aiming for improved reconstruction accuracy. Incremental mapping enhanced consistency in CV results but showed minimal impact on LB. § CUSTOMIZE THE PIPELINE FOR EACH SPECIFIC CATEGORY §.§ Transparent Objects Implemented segmentation models like MobileSAM for mask detection and keypoint extraction (e.g., ALIKED). Opted for the smallest mask encompassing most keypoints to focus solely on the object, significantly boosting CV performance with LightGlue. §.§ Finding the Best Pairs Implemented methods to find consecutive pairs efficiently. Used exhaustive matching for all pairs and built matrices based on match counts, achieving nearly 100% accuracy in pair selection. For dark or dim images, the team assumed that these scenes may produce many false-positive matches because of their natural properties, so they tried tuning parameters with much stricter values (e.g., increasing the matching threshold to 0.5). Nonetheless, this did not show any improvement, so they tried Doppelgangers <cit.>: they first ran SfM once to get all the matching pairs, then used the Doppelgangers model to filter out pairs with a high probability of being false positives (doppelgangers).
It did not show any improvement on the church scene in the local CV. § RESULTS Leveraging these strategies, the solution consistently improved performance across categories, notably advancing in the LB rankings. Techniques tailored for specific challenges like transparent objects and day-night scenes contributed to the overall success. The final combination (ALIKED+LG+rotation correction+custom transparent handling+tuning) scored 0.195, which made them rank 5th. § EVALUATION METRIC - MEAN AVERAGE ACCURACY (MAA) Submissions are evaluated based on the mean Average Accuracy (mAA) of the registered camera centers C = -R^T T. Given the set of cameras of a scene, parameterized by their rotation matrices R and translation vectors T, and the hidden ground truth, the evaluation computes the best similarity transformation 𝒯 (scale, rotation, and translation altogether) that registers the highest number of cameras onto the ground truth, starting from triplets of corresponding camera centers. A camera is registered if ||C_g - 𝒯(C)|| < t, where C_g is the ground-truth camera center corresponding to C and t is a given threshold. Using a RANSAC-like approach, all possible (N choose 3) feasible similarity transformations 𝒯' derived by Horn's method on triplets of corresponding camera centers (C, C_g) are exhaustively verified. Here, N is the number of cameras in the scene. Each transformation 𝒯' is refined into 𝒯'' by registering the camera centers again using Horn's method, incorporating previously registered cameras with the initial triplets. The best model 𝒯, among all 𝒯'' with the highest number of registered cameras, is returned. § CONCLUSION Joining this competition for the first time came with a lot of learning and experience. We learned a lot about SfM problems: how edge cases can lead to poor reconstruction results, and how using SOTA models alone will not guarantee a top score, as we saw in the case of the 2nd place solution. Special thanks to my co-authors for writing this competition review with me. § REFERENCES matchformer2022 Qing Wang, Jiaming Zhang, Kailun Yang, Kunyu Peng, and Rainer Stiefelhagen. 2022. MatchFormer: Interleaving Attention in Transformers for Feature Matching. https://doi.org/10.48550/arXiv.2203.09645 aliked Xiaoming Zhao, Xingming Wu, Weihai Chen, Peter C. Y. Chen, Qingsong Xu, and Zhengguo Li. 2023. ALIKED: A Lighter Keypoint and Descriptor Extraction Network via Deformable Transformation. https://doi.org/10.48550/arXiv.2304.03608 doppleganger Ruojin Cai, Joseph Tung, Qianqian Wang, Hadar Averbuch-Elor, Bharath Hariharan, and Noah Snavely. 2023. Doppelgangers: Learning to Disambiguate Images of Similar Structures. https://doi.org/10.48550/arXiv.2309.02420 lightglue Philipp Lindenberger, Paul-Edouard Sarlin, and Marc Pollefeys. 2023. LightGlue: Local Feature Matching at Light Speed. https://doi.org/10.48550/arXiv.2306.13643 orientation Ternaus et al. Check Orientation. https://github.com/ternaus/check_orientation slam Y. Ding, Z. Xiong, J. Xiong, Y. Cui, and Z. Cao. "OGI-SLAM2: A Hybrid Map SLAM Framework Grounded in Inertial-Based SLAM," IEEE Transactions on Instrumentation and Measurement. https://ieeexplore.ieee.org/document/9903463 dinov2 Maxime Oquab, Timothée Darcet, et al. 2023. DINOv2: Learning Robust Visual Features without Supervision. https://doi.org/10.48550/arXiv.2304.07193 imc2024 Fabio Bellavia, Jiri Matas, Dmytro Mishkin, Luca Morelli, Fabio Remondino, Weiwei Sun, Amy Tabb, Eduard Trulls, Kwang Moo Yi, Sohier Dane, and Ashley Chow. 2024.
Image Matching Challenge 2024 - Hexathlon. https://www.kaggle.com/competitions/image-matching-challenge-2024/overview sol1 Igor Lashkov. 1st Place Solution – High Image Resolution ALIKED/LightGlue + Transparent Trick. https://www.kaggle.com/competitions/image-matching-challenge-2024/discussion/510084 sol2 neo, sunnyykk, gdchenhao, mayunchaoamap, and wangshengyi96. 2nd Place Solution: MST-Aided SfM & Transparent Scene Solution. https://www.kaggle.com/competitions/image-matching-challenge-2024/discussion/510499 sol3 Jianyuan Wang, Nikita Karaev, Christian Rupprecht, and David Novotny. 3rd Place Solution: VGGSfM. https://doi.org/10.48550/arXiv.2312.04563 sol4 Tomoya Okazaki. 4th Place Solution: ALIKED+LightGlue is all you need. https://www.kaggle.com/competitions/image-matching-challenge-2024/discussion/510611 sol5 Khoa Ngo. 5th Place Solution: Customized Scene Matching. https://www.kaggle.com/competitions/image-matching-challenge-2024/discussion/510603#2900458 loftr Jiaming Sun, Zehong Shen, Yuang Wang, Hujun Bao, and Xiaowei Zhou. CVPR 2021. LoFTR: Detector-Free Local Feature Matching with Transformers. https://doi.org/10.48550/arXiv.2104.00680 superglue Paul-Edouard Sarlin, Daniel DeTone, Tomasz Malisiewicz, and Andrew Rabinovich. CVPR 2020. SuperGlue: Learning Feature Matching with Graph Neural Networks. https://doi.org/10.48550/arXiv.1911.11763 omniglue Hanwen Jiang, Arjun Karpur, Bingyi Cao, Qixing Huang, and Andre Araujo. 2024. OmniGlue: Generalizable Feature Matching with Foundation Model Guidance. https://doi.org/10.48550/arXiv.2405.12979 sfm Johannes L. Schonberger and Jan-Michael Frahm. 2016. Structure-from-Motion Revisited. https://openaccess.thecvf.com/content_cvpr_2016/papers/Schonberger_Structure-From-Motion_Revisited_CVPR_2016_paper.pdf reconstruction Dmytro Mishkin. COLMAP 3D visualization with Rerun (Kaggle notebook). https://www.kaggle.com/code/oldufo/colmap-3d-visualization-with-rerun-io colmap COLMAP source code. https://github.com/colmap/colmap
http://arxiv.org/abs/2407.02623v1
20240702192700
Uplifting Lower-Income Data: Strategies for Socioeconomic Perspective Shifts in Vision-Language Models
[ "Joan Nwatu", "Oana Ignat", "Rada Mihalcea" ]
cs.CY
[ "cs.CY", "cs.AI", "cs.CL", "cs.CV", "K.4; I.2.7; I.2.8" ]
Uplifting Lower-Income Data: Strategies for Socioeconomic Perspective Shifts in Vision-Language Models Joan Nwatu, Oana Ignat, Rada Mihalcea § ABSTRACT Vision-language (VL) models perform unevenly on data from different countries and income groups. To address this issue, we formulate translated non-English, geographic, and socioeconomic integrated prompts and evaluate their impact on VL model performance for data from different countries and income groups. Our findings show that geographic and socioeconomic integrated prompts improve VL performance on lower-income data and favor the retrieval of topic appearances commonly found in data from low-income households. From our analyses, we identify and highlight contexts where these strategies yield the most improvements. Our model analysis code is publicly available at https://github.com/Anniejoan/Uplifting-Lower-income-data. § INTRODUCTION The overrepresentation of Western data and lack of diversity are common issues in many popular datasets <cit.>. Even though the size and quality of pre-training datasets greatly impact the performance of today's AI models, there is insufficient research attention given to this area. Furthermore, there is even less attention given to data curation methods that can improve representation in AI datasets <cit.>. Today, vision-language (VL) models are leveraged to filter uncurated data repositories into training datasets by assessing the association strength between images and text <cit.>. For example, OpenAI's CLIP ViT-B/32 <cit.> was used to filter web-scraped images to create the LAION-5B dataset <cit.>. However, many VL models like CLIP have been shown to perform unequally for data from different cultures and socioeconomic classes <cit.>. Since datasets filtered using this technique reflect the VL model used for filtering <cit.>, this practice can exacerbate the lack of diversity in AI models and datasets. This is evident in <cit.>, which shows that data from the LAION-5B dataset is most similar to data from Western countries, like the United States and Canada, while it is dissimilar to data from many non-Western countries. To mitigate the lack of representation in AI, we address the issue of performance inequality in VL models <cit.> through prompting that leverages the cultural knowledge embedded in language <cit.>. Our goal is to improve the performance of VL models on the diverse representations of topic labels especially found in data from households with non-Western and lower socioeconomic status. Specifically, we pose several research questions to evaluate the effects of non-English languages and geographic and socioeconomic attribute-integrated prompts on retrieving diverse images. We focus on identifying which strategies improve performance on lower-income data, while also noting their effects on higher-income data. Our contributions are summarized as follows. First, we show that prompts translated to the native language (referred to as the non-English major language) of a country currently do not yield improved Recall performance on images from that country compared to English prompts. Second, by conducting an in-depth analysis of vision-language models' understanding of these attributes and their effects on Recall across data from different countries, we establish that geographic attribute and socioeconomic attribute integrated prompts improve VL performance on lower-income data, and identify contexts where these prompts work best.
Third, we share insights from our analysis demonstrating how these attributes drive a perspective shift that benefits the retrieval of lower-income data. § RELATED WORK Addressing AI Performance Inequality. Class imbalances in training data contribute greatly to bias propagation in AI models <cit.>. This bias manifests in the disparate impact of these models on users in applications such as facial recognition <cit.>, healthcare <cit.>, and hiring <cit.>. Since creating balanced diverse datasets is difficult and expensive <cit.>, the research community has explored alternative methods of dealing with bias. Some of these methods involve pre-processing and in-processing techniques such as data augmentation, feature importance tuning, regularization, and adversarial training <cit.>. However, our work is most similar to post-processing techniques such as <cit.> that adjust model outcomes using a set of criteria to fit diversity standards across race, gender, and culture for the benefit of disadvantaged groups. <cit.> show that vision-language models consistently perform badly on data from lower socioeconomic status. Our analysis seeks to identify and analyze the effects of non-invasive post-processing techniques that mitigate this issue. Multilingual AI Models. Language is often referred to as a vehicle for propagating cultural knowledge and norms <cit.>. This is exemplified in AI research, where models often pick up on biases contained in the language of their training data <cit.>, and model outputs can be controlled by specifying a cultural shift in perspective <cit.> to improve diversity. Furthermore, <cit.> have shown that while LLMs and vision-language models contain a decent amount of cultural information regarding cultures present in English data (predominantly Western), for cultures contained in non-English language data, not as much cultural information can be extracted. Major reasons for this disparity include differences in the quantity and quality of training data available for these languages compared to English, information loss due to language translation, and model design decisions <cit.>. Similar to <cit.> and <cit.> that demonstrate how language can improve data diversity by retrieving images using translated captions, our work seeks to identify how current multilingual VL models and non-English languages can improve representation in vision-language models and datasets across regions and income groups. Prompting AI Models. Prompting techniques for large language models have been extensively studied in recent years. Both hard prompting <cit.> and soft prompting <cit.> are useful in adapting models for downstream tasks, instruction tuning, and value alignment. Prompts have also been used in vision-language models for similar purposes <cit.>. Most similar to our work, <cit.> incorporates geographic and physical attributes of objects into prompts to improve retrieval of diverse images. However, we extend the investigation to non-English language prompts and socioeconomic attributes and then perform further analysis to highlight insights into how VL models encode representations of different topics across not only regions but also socioeconomic status. § METHODOLOGY We apply three types of text prompting techniques to a geographically and socio-economically diverse dataset and analyze how these changes affect the performance of a multilingual VL model on data across different socio-economic groups, especially focusing on lower-income data. 
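As a preview of the prompt families described in the following subsections, the sketch below builds default English, country-suffix, and income-suffix prompts (translated prompts would additionally pass the default prompt through a translation model) and computes per-topic top-N Recall from image-text cosine similarities. It assumes embeddings have already been computed and L2-normalized with a CLIP-style model such as NLLB-CLIP-SigLIP; the function and variable names are ours, not the paper's released code.

```python
import numpy as np

# --- Prompt construction (templates follow the paper's examples) -----------
def default_prompt(topic):
    return f"This is a photo of {topic}"

def country_suffix_prompt(topic, country):
    return f"This is a photo of {topic} from {country}"

def income_suffix_prompt(topic, suffix):          # e.g. "a poor country"
    return f"This is a photo of {topic} from {suffix}"

print(country_suffix_prompt("cutlery", "Cameroon"))
print(income_suffix_prompt("cutlery", "a rich country"))

# --- Top-N Recall per topic from precomputed, L2-normalized embeddings -----
def topic_recall(image_emb, text_emb, topic_labels, topic_id):
    """image_emb: (num_images, d); text_emb: (d,) prompt embedding for topic_id;
    topic_labels: (num_images,) ground-truth topic id of each image.
    Retrieves the top-N images by cosine similarity, where N is the number of
    ground-truth images for the topic, and returns the fraction recovered."""
    scores = image_emb @ text_emb                 # cosine similarity (unit vectors)
    n = int((topic_labels == topic_id).sum())
    top_n = np.argsort(-scores)[:n]
    return (topic_labels[top_n] == topic_id).mean()

# Toy usage with random unit vectors standing in for model embeddings.
rng = np.random.default_rng(0)
image_emb = rng.normal(size=(1000, 512))
image_emb /= np.linalg.norm(image_emb, axis=1, keepdims=True)
text_emb = image_emb[0] + 0.1 * rng.normal(size=512)
text_emb /= np.linalg.norm(text_emb)
topic_labels = rng.integers(0, 50, size=1000)
print(topic_recall(image_emb, text_emb, topic_labels, topic_id=int(topic_labels[0])))
```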
§.§ Dollar Street Dataset We use the Dollar Street <cit.>, which contains 38,479 images of household items (e.g., “stoves”, “cutlery”, “toothbrush”) spanning a large number of countries and several income levels. The dataset images were sourced from households in 63 countries on four continents (Africa, America, Asia, and Europe). The number of images ranges from 45 in Canada to 4,704 in India, with a median of 407 images per country. Size and image resolutions vary slightly across data from different regions; however, the mean and median image properties per region are relatively similar. Income Classes. We categorize the images and countries on Dollar Street based on their income information as described below. Image Income Classes. Each image is accompanied by the monthly household income value in U.S. dollars, calculated to reflect monthly consumption and adjusted for purchasing power parity to match the variance in cost of living across the different regions. The monthly income values range from 26.9$ to 19,671.0$. For fair comparison across bins, we group the images using the quartile binning method which splits the data into an approximately equal number of images per bin as shown in <cit.>. We group the images into four income classes (“poor”, “low-mid”, “up-mid”, and “rich” ) using quartiles as shown in <ref>. We further categorize the lowest two image income classes as lower-income images and the highest two income groups as higher-income images. Country Income Classes. We group all the 63 countries from Dollar Street into country income classes based on their World Bank income classification.[<https://datahelpdesk.worldbank.org/>] All the countries and their income classes are shown in <Ref>. We further categorize the lowest two country income classes as lower-income countries and the highest two income groups as higher-income countries. Topic Representations. There are 291 unique topics that reflect common household objects and human actions (e.g., “toilet paper”, “get water”), some of which are subjective (e.g., “next big thing I plan to buy”, “favorite sports clubs”, “most loved item”). Following <cit.> and <cit.>, we remove nineteen subjective topics from the dataset. §.§ Prompt Design Default English Topic Prompt. Using the topics, we formulate an English prompt without any modifications (e.g., “This is a photo of cutlery”), as described in <cit.>, to which we refer to as the default English prompt. The performance obtained using these prompts is set as our baseline. Translated Topic Prompt. For our multilingual experiments, we investigate the impact of non-English language prompts on the Dollar Street dataset. We use the term non-English major language to refer to the non-English language that is most widely spoken or most commonly used in a particular country or region. Specifically, we pair each country with their non-English major language (e.g., Spanish for Brazil, French for Cameroon) following the country and language information provided by official sources.[<https://www.cia.gov/the-world-factbook/field/languages/>, <https://www.ncsc.org/__data/assets/pdf_file/0024/17862/languagesbycountries.pdf>, <https://www.dss.gov.au/sites/default/files/files/foi_disclosure_log/12-12-13/language-list.pdf>] We identify 59/63 countries in Dollar Street where one or more non-English major languages are spoken. We also select languages covered by state-of-the-art machine translation and multilingual vision-language models. 
There are 40 such non-English major languages, and they are listed in <Ref>. Finally, we translate the default English prompts to these 40 languages using the NLLB-200-distilled-600M <cit.>, a state-of-the-art neural machine translation model. If an image prompt is translated into the non-English major language of the image's country of origin, it is referred to as a native translated prompt. Country Suffix Topic Prompt. For our second prompting technique, we include country names as suffixes to the default English prompt (e.g., “This is a photo of cutlery from Cameroon”). We create 63 new prompt templates by adding the country names of each of the 63 countries in Dollar Street. We refer to these prompts as country-suffix prompts. Income Suffix Topic Prompt. We also create prompts by integrating socio-economic attributes (e.g., “poor country”, “rich region”) as suffixes to the default English prompt. For instance, a sample prompt is “This is a photo of cutlery from a rich country”. For more robust results, we use multiple synonyms each for the poor and rich attributes (e.g., “an impoverished country”, “a wealthy region”). We also create prompts using neutral suffixes (e.g., “a country”, “a home”). We refer to these prompts as income-suffix prompts. §.§ State-of-the-art Vision-Language Model For our evaluation, we chose NLLB-CLIP-SigLIP <cit.>, a state-of-the-art multilingual vision-language model, due to its wide reach across many low-resource languages and superior performance among other models.[<https://huggingface.co/visheratin/nllb-clip-large-siglip>] The model consists of an image encoder from the SigLIP model <cit.> and a text encoder from the NLLB model <cit.>. The model supports the 201 languages of the Flores-200 <cit.> and has recorded groundbreaking results on the Crossmodal-3600 dataset <cit.>, especially on low-resource languages. § RESEARCH QUESTIONS We perform several analyses to answer three research questions that uncover and mitigate limitations in the performance of VL models across different countries and socioeconomic groups. RQ1. Do translated prompts improve performance for lower-income images? We calculate the cosine similarities between the NLLB-CLIP-SigLIP image embeddings and the text embeddings of the translated prompts for each image and topic pair and the 40 non-English languages. This process yields 40 image-topic alignment scores for each image across all non-English languages. As a baseline, we compute the alignment scores between the images and the default English prompts. We use the alignment scores to compute the Recall scores for each topic. Specifically, we select the top N images with the highest alignment scores for the given topic, where N represents the number of ground truth images corresponding to that topic. We group and analyze the Recall scores across different countries and image income classes and present our findings below. Native translated prompts perform consistently worse than English prompts on lower-income images from their respective countries. We focus our analysis on images from the two lowest image income groups, i.e., poor and low-middle. We filter out countries with no data for these income groups (e.g., Russia, Turkey) and obtain 39/59 countries and 28/40 non-English languages. We pair up each of the remaining countries with the results from their respective non-English native languages and compare the Recall scores obtained using the native translated prompts with those obtained using the default English prompts. 
We find that native translated prompts consistently perform worse than default English prompts. This is shown in <Ref>, where we compute the average Recall across all countries for English and the native translated prompts. We also show examples from four countries, one from each of the four continents: Africa, America, Asia, and Europe. For 36/39 countries, the native translated prompts yield worse performance compared to default English prompts. The exceptions are Indonesia, where Recall for the Indonesian native translated prompt is 1.0 higher than Recall for the English prompt, and Pakistan, where Recall for the Urdu native translated prompt is 0.7 higher than Recall for the English prompt. The best-performing non-English language is often not the country's language. We analyze the Recall scores for lower-income images from each of the 28 language prompts used in different countries. We observe that the best-performing language prompts for these countries are often not their own non-English major language. Specifically, for 24/39 countries, the best-performing non-English languages have better results than the default English prompts. However, surprisingly, in most cases, these languages are not spoken in these countries. The results are shown in <Ref>, where for 37/39 countries the language with the highest Recall (highlighted in yellow) is different from the non-English major language of the country (highlighted in cyan). The only exceptions are Indonesia and Pakistan, where the highest-performing languages and the non-English major languages are the same (highlighted in bold red). Translated prompts decrease performance for all image income classes across all countries. We analyze the effect of the 40 non-English languages on all images from Dollar Street, regardless of which country the data was collected from. We include all 59 countries and results from all 40 translated prompts for this analysis. We aggregate Recall across all topics and countries for each set of non-English language prompts and group the scores according to image income classes. We measure the difference between Recall scores with default English prompts and the scores with translated prompts to determine the effect of each language on the four different image income classes. We show in <ref> the average Recall and drops in performance for all the 40 translated prompts for each image income class. The most significant impact is on higher-income classes. Specifically, the rich and up-mid income classes experienced the highest drops in performance compared to other income classes. One potential reason is that images depicting rich and up-mid income classes are overrepresented in AI models and datasets, leading them to be seen as the “standard” representation for these topics. In a similar way, English is the most commonly used language for training AI models and is considered the “standard” language for textual data. Therefore, non-English, translated prompts may signal to the model that a representation different from the “standard” is being requested, leading to unpredictable, inferior results. We show the performance drops for each language in the Appendix <Ref>. RQ2. Does adding country information improve performance for lower-income data? We calculate cosine similarity scores between the NLLB-CLIP-SigLIP image embeddings and the text embeddings of the 63 country suffix prompts. This results in 63 image-topic alignment scores for each image across all country suffixes. 
Similar to RQ1, as a baseline, we use the alignment scores between the images and the default English prompts. Following the same procedure described in Section rq1, we calculate the Recall scores for each topic using each of the 63 country suffix prompts. We analyze the effects of adding country suffixes to text prompts and present our findings below. Country-suffix prompts perform consistently better than default English prompts on lower-income images. Given our focus for this analysis on low-income data, we filter out the 21 countries with no images from poor or low-mid income class households and focus on the remaining 42 countries. In <Ref>, we show the average Recall scores across all countries using the default English and country-suffix prompts. We also show Recall scores from four sample countries from different continents. From our results, we find that for most countries (38/42), adding the country suffix to the text prompts improves Recall performance for lower-income images compared to default English prompts. Countries where this does not hold are Bolivia, Brazil, Jordan, and the United States. The country-suffix prompt performance across different image income classes is influenced by the country's economic status. While country suffix prompts improve the performance of vision-language models on lower-income household data (poor), they simultaneously reduce performance on higher-income data. We measure the Recall of all data for each country suffix, group them into image income classes (based on household income), and place them into categories based on the World Bank income classification of that country suffix (e.g., Recall of data from poor households using country suffixes of poor countries is 31.2) For each image income class category, we calculate and show in <ref> the average Recall of the country suffixes across the different country income levels and the average change in Recall compared to results obtained using the default English prompts. We show an extended table with results for all 63 country suffixes in the Appendix <ref> and <ref>. The differences in Recall between the country suffixes and default English prompts indicate which income group's data benefits from an increase and vice versa. We find that country suffixes from countries that belong to the poor, low-mid, and up-mid World Bank Income categories lead to an increase in Recall for images with poor household income, as shown in green in <ref>. On the other hand, adding country suffixes to prompts leads to a decrease in Recall for images from the other household income groups (low-mid, up-mid, and rich). Interestingly, we observe that country suffixes tend to favor image retrieval from income groups that are the same or close to their World Bank income classification. The default English prompts favor the retrieval of up-mid income level data as the Recall for up-mid images is higher than the Recall for the other 3 income groups. We highlight in bold the average Recall that is highest among other income groups for each country suffix classification in <ref>, and show that poor country suffixes, low-mid country suffixes, and up-mid country suffixes produce the highest Recall scores for data from their corresponding income class. Rich country suffixes are the exception as they produce the highest Recall for data from the up-mid income level as opposed to the rich. 
We analyze the individual country suffix results and group lower-income images as images belonging to either the poor or lower-middle income group and higher-income images as images belonging to the upper-middle or rich income groups. We then find that 48/63 countries either produce the highest Recall for lower-income data and are lower-income countries, or produce the highest Recall for higher-income data and are higher-income countries. The best-performing country suffixes for lower-income data from a continent are from the same continent. We analyze Recall results for lower-income data (from poor and low-mid households) from 42 countries when prompted using each of the 42 country suffixes and group these countries according to their respective continents. We categorize the data further into data from different World Bank income classes within a continent. Then we display the average Recall and Recall difference (with respect to Recall using default English prompts) of each data group when prompted using country suffixes from different continents. For example, we show that lower-income data from poor African countries have a Recall of 36.6 and a performance increase of 15.7 when prompted using African country suffixes. We find that the best-performing country suffixes for images from a continent are from the same continent, as shown by the diagonal of bold values in <Ref>. We also see that lower-income data from African countries benefit most from the addition of country suffixes to prompts, while data from America and Asia do not experience Recall improvements. Meanwhile, for higher-income data, there are no Recall improvements even when images and country suffixes belong to the same continent (see Appendix <ref>). RQ3. Does adding income information improve performance for lower-income groups? We create three categories of income suffixes, poor, rich, and neutral, as described in <Ref>. We repeat the image retrieval experiments from the previous research questions to determine the Recall for images from each topic. We group and analyze these results across countries and income groups. Poor income suffixes yield the best performance on most lower-income images. We find that the poor income suffix prompt obtains the best performance in 26/42 countries with lower-income images. For 12/42 countries, the default English prompts perform better than all the income suffixes. However, most countries experience a Recall improvement from one of the income suffixes. We show the results in <Ref>. The average Recall score is aggregated across all 42 countries for the default English and income suffix prompts. The poor income suffix achieves the best Recall. Recall scores for four countries are shown in Appendix <Ref>. Images from the poor income group benefit the most from income suffixes. We group the data into four income groups and further categorize them according to the World Bank income classification of the countries they were obtained from. In <Ref>, we show the Recall scores and the performance increase with respect to the default English prompts for each data group. We find that income suffixes benefit most of the data from poor households and some of the data from low-mid income households, while data from other income groups do not see an increase in Recall. In addition, there is no clear trend between the World Bank income classification of a country and the effect of income suffixes on Recall for its images. Further details about the World Bank income classification for each country can be seen in <Ref>.
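For illustration, the three income-suffix prompt sets can be constructed as follows; the exact suffix wording used in the experiments is defined in the prompt-design section, so the strings below are assumptions rather than the actual prompts:

```python
# Illustrative construction of the three income-suffix prompt sets.
INCOME_SUFFIXES = {
    "poor":    "in a poor household",      # assumed wording
    "rich":    "in a wealthy household",   # assumed wording
    "neutral": "in a household",           # assumed wording
}

def build_income_prompts(topics, template="a photo of {topic} {suffix}"):
    """Return {suffix_name: [one prompt per topic]} for a list of topics."""
    return {
        name: [template.format(topic=t, suffix=suffix) for t in topics]
        for name, suffix in INCOME_SUFFIXES.items()
    }

prompts = build_income_prompts(["toilet", "stove", "front door"])
for name, plist in prompts.items():
    print(name, "->", plist[0])
# The resulting prompts are embedded and scored against images exactly as in
# the earlier sketch, yielding one Recall table per suffix category.
```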
Finally, an interesting finding is that for higher-income images, i.e., up-mid and rich, all income suffixes, including the rich and neutral suffixes, lead to drops in Recall. This means that the default English prompts obtain the best results for higher-income images. This could be due to the high representation of higher-income images in AI models and datasets as the “standard representation”. Therefore, including additional information, such as socioeconomic status, causes the model to favor the retrieval of lower-income images over higher-income images. This can be seen in the results, which achieve Recall improvements on lower-income images while reducing Recall for higher-income images. Such effects could result from these prompts shifting the VL model's perspective away from its understanding of the default representation of a topic. § LESSONS LEARNED Current multilingual VL models do not contribute significant improvements to diversity and representation. Our results from RQ1 demonstrate that English prompts obtain better performance on lower (and higher) income data compared to prompts translated into a non-English language that is widely spoken in the region where the data was collected. Since the quality of translations, the quantity of training data available for these languages, and, consequently, the performance of AI models in these languages are lower than for English, these findings are not very surprising. We can look forward to better non-English language performance as multilingual VL models improve. The addition of extra information to prompts shifts the focus of model inferences away from higher-income images. We find that adding geographical and socioeconomic attributes (including neutral attributes) to prompts leads to an increased model preference for lower-income images over higher-income images, as demonstrated in RQ2. Images with less standard topic appearances are retrieved using income suffix and country suffix prompts. Inspection of the retrieved images reveals that images with topic appearances commonly found in lower-income households, previously not retrieved by the default English prompts, are retrieved with these prompts, as shown in <Ref>. For example, pit latrines and forest-style toilets previously left out by the default English prompts are retrieved using country suffixes (Burundi and Cameroon) and the poor income suffix. Another example is “leaves” as “toilet paper”, retrieved by the Liberia and Cameroon country suffixes but excluded by the default English prompt. § CONCLUSION In this paper, we addressed the uneven performance of VL models across different countries and different income levels. We explored three attribute-integrated prompting strategies: (1) translation of text prompts to native non-English languages, (2) addition of geographic information, and (3) addition of socioeconomic attributes. We found that integrating geographical and socioeconomic information into prompts enhances performance on data from lower-income households and retrieves more diverse label representations. Furthermore, we identified and highlighted the contexts where the proposed prompting techniques work best and shared our insights to improve representation in vision-language models and datasets. Our code can be used to evaluate the performance of other VL models and datasets and is publicly available at https://github.com/Anniejoan/Uplifting-Lower-income-data (Analysis for Uplifting lower-income data).
§ LIMITATIONS Translation Quality We note that, while NLLB-200-distilled-600M is regarded as a state-of-the-art machine translation model, it is not perfectly accurate across all the languages it supports. We acknowledge that the quality of the translations obtained from NLLB-200-distilled-600M greatly impacts our results. Data Coverage Our study is constrained by the reach of the Dollar Street dataset and the number of contributions obtained from each region; therefore, we do not account for data from other regions not included in the dataset. Choice of Attributes We acknowledge that other attributes (e.g., physical attributes like color and material) of the objects in the images could be integrated into prompts to improve performance. However, we choose to focus on geographic and socioeconomic attributes since they are broad enough to include all possible topic appearances related to that attribute, and their impact on data belonging to different countries and income groups can be measured directly. Diverse Data Availability While our methods facilitate the improvement of diversity during dataset annotation, these strategies cannot overcome the representation issues within the actual pool of images available for annotation. § ETHICS STATEMENT Through this work, we aim to contribute toward improving diversity in AI models and evening out the disparate impact of these models on the public, especially on underrepresented groups. The strategies discussed in our work can be used to prioritize the retrieval of lower-income images for balancing skewed data representation or for domain-specific applications in AI. However, we do not encourage the use of these strategies to promote over-representation or inclusion of one group over another in contexts that affect all members of the general public. Our decision to use the NLLB-SigLIP model exemplifies our commitment to using inclusive models that benefit as many people as possible, especially underrepresented groups. While researching technologically advanced communities is easier and less resource-intensive, we stress the importance of making AI design decisions that do not exclude communities with limited access to technology. § ACKNOWLEDGEMENTS We are grateful to the Language and Information Technologies (LIT) lab members at the University of Michigan for the insightful discussions and feedback during the early stages of the project. This project was partially funded by a grant from the Department of State (#STC10023GR0014). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the Department of State. § APPENDIX §.§ Non-English Languages We use the following non-English languages in our experiments: German, Spanish, Portuguese, French, Chinese, Czech, Danish, Arabic, Hindi, Indonesian, Farsi-Persian, Italian, Russian, Mongolian, Burmese, Dutch, Urdu, Romanian, Serbian, Korean, Swedish, Thai, Turkish, Ukrainian, Vietnamese, Bengali, Khmer, Oromo, Ewe, Creole, Swahili, Nepali, Hausa, Kyrgyz, Tagalog, Kinyarwanda, Somali, Zulu, Sinhala, and Shona. §.§ Lists of countries and classifications §.§ Recall from all Prompting techniques across all countries §.§ Non-English Languages and Effect on Recall §.§ How do different country suffix prompts affect the Recall of images from different income groups (expanded)?
§.§ Does adding income suffixes improve performance for lower-income images? §.§ The best-performing country suffixes for higher income data from a continent are from the same continent
http://arxiv.org/abs/2407.02005v1
20240702072257
An End-to-End Speech Summarization Using Large Language Model
[ "Hengchao Shang", "Zongyao Li", "Jiaxin Guo", "Shaojun Li", "Zhiqiang Rao", "Yuanchang Luo", "Daimeng Wei", "Hao Yang" ]
cs.CL
[ "cs.CL", "cs.SD", "eess.AS" ]
An End-to-End Speech Summarization Using Large Language Model Hengchao Shang, Zongyao Li, Jiaxin Guo, Shaojun Li, Zhiqiang Rao, Yuanchang Luo, Daimeng Wei, Hao Yang § ABSTRACT Abstractive Speech Summarization (SSum) aims to generate human-like text summaries from spoken content. It encounters difficulties in handling long speech input and capturing the intricate cross-modal mapping between long speech inputs and short text summaries. Research on large language models (LLMs) and multimodal information fusion has provided new insights for addressing these challenges. In this paper, we propose an end-to-end SSum model that utilizes Q-Former as a connector for the audio-text modality and employs LLMs to generate text summaries directly from speech features. We adopt a multi-stage training approach that includes LLM-based ASR and Text Summarization (TSum) tasks as auxiliary tasks. ASR tasks are used to align feature spaces and enhance the LLM's ability to handle longer speech. Then, we utilize a curriculum learning strategy to facilitate the model's transition from TSum to SSum. Finally, our model achieves competitive performance on the How-2 dataset. § INTRODUCTION Abstractive Speech Summarization (SSum) <cit.> aims to directly generate human-friendly textual summaries from relatively long speech inputs. Compared to the Text Summarization task <cit.>, its core challenges are: (a) the long speech sequences pose a computational complexity bottleneck; (b) the non-monotonic and complex mapping between long speech inputs and short text summaries; (c) the modality gap between audio inputs and text outputs. Previous methods can be categorized into two types: cascaded models <cit.> of Automatic Speech Recognition (ASR) and Text Summarization (TSum), or end-to-end models <cit.>. Recent research has shown that end-to-end models can outperform cascaded systems <cit.>, as they can extract para-linguistic information from speech and address the error propagation issue in cascaded systems. However, in order to encode long audio directly, end-to-end models often need to truncate audio, or utilize restricted attention <cit.> or alternatives like F-Net <cit.>, which limit further model improvements. Recently, the rapid progress of large language models <cit.> has drawn interest from multiple research areas due to their capacity for handling extremely long inputs and excellent performance in NLP tasks like question answering, reasoning, and summarization. Speech processing is adopting the latest advancements from LLMs, including tasks such as ASR <cit.>, GPT-style speech language models <cit.>, and a range of other applications <cit.>, all leveraging the benefits of using LLMs in this field. To integrate speech features into LLMs, a connector is typically required, where the Querying Transformer (Q-Former) <cit.> has been proven to be a relatively efficient cross-modal information extraction method <cit.>. It can convert variable-length input sequences into fixed-length output query representations. We believe that by integrating Q-Former for cross-modal encoding between speech and text, and leveraging LLMs to manage tasks like processing long input speech and creating concise summaries, we can further improve the model's performance in end-to-end speech summarization.
As traditional transformer-based speech encoders find it challenging to handle longer speech, it is intuitive to segment the speech for encoding and then connect the feature segments to build the final representation. In this paper, we attempt to integrate long speech inputs into LLMs using segment level Q-Former and train a LLM based end-to-end speech summarization model through efficient parameter fine-tuning method. In detail, we utilize a speech encoder and Q-Former to extract speech features for individual segments of long speech. Then, we combine the speech features from all segments and feed them into LLM. Finally, LLM employs these speech features as prompts to generate the ultimate text summaries in an autoregressive manner. The proposed model's overview can be found in Figure <ref>. However, our model still faces the following challenges: * The output of Q-Former needs to be aligned with the input of LLM so that LLM can recognize the speech features. * The speech segmentation strategy may hamper the model's capability to handle the context of long speech, as there is no interaction between segments during encoding. * Compared to text summarization tasks, speech summarization still faces the modality gap between speech and text. To tackle these challenges, our model initial aligns the Q-Former output with the LLM input effectively via a sentence-level ASR task. Then, we improve the model's ability to handle longer speech by incorporating a Document-level ASR task. Finally, to further bridge the gap between modalities, we conduct joint training on two tasks, TSum and SSum using a curriculum learning approach <cit.>. We validated our proposed model on the widely used How2 <cit.> dataset. Our experiments demonstrate that our multi-stage training strategy effectively prepares LLMs for end-to-end speech summarization tasks by leveraging ASR and TSum tasks. The final model's performance exceeds that of the cascaded models and is comparable to the strong baselines of traditional end-to-end models based on the BERTScore metric. § RELATED WORK Speech Summarization <cit.> can be tackled using either cascaded or end-to-end methods, with each approach having its own strengths and weaknesses based on the particular application scenario. Initially, cascaded systems <cit.>, leveraging pre-trained ASR and TSum models, can be individually enhanced with domain-specific <cit.> data before cascading them together to generate the final text summary. Studies have proven that cascaded systems can achieve competitive performance but also face challenges such as error propagation, longer inference delays, and the inability to fully utilize audio information. On the other hand, end-to-end systems <cit.> can abstract text summaries from speech features and have been shown to outperform cascaded models on some datasets. However, when dealing with long speech recordings, input truncation or non-standard self-attention modules are essential <cit.>. Additionally, <cit.> explored using large models to construct more enriched summary labels to enhance model performance. As far as we know, there is currently no direct effort to convert LLMs into end-to-end Speech Summarization models. § METHODOLOGIES In Figure <ref>, we present an overview of the proposed model, which comprises three main components: a speech encoder, a Q-Former module, and a LLM. The model training is divided into three stages to allow the model to bridge the modality gap and achieve better performance. 
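The segment-level pipeline described above can be summarized with a schematic sketch (PyTorch; the encoder, Q-Former, and projection modules below are simplified stand-ins, and all shapes and hyperparameters are assumptions for illustration, not the authors' implementation):

```python
import torch
import torch.nn as nn

class SegmentSSumSketch(nn.Module):
    """Toy stand-in: per-segment speech encoder + Q-Former; segment features are
    concatenated and used as prefix ("prompt") embeddings for a decoder-only LM."""
    def __init__(self, feat_dim=80, enc_dim=768, n_query=150, llm_dim=4096):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, enc_dim, batch_first=True)   # proxy for the conformer encoder
        self.query = nn.Parameter(torch.randn(n_query, enc_dim))     # learnable Q-Former queries
        self.qformer = nn.MultiheadAttention(enc_dim, 8, batch_first=True)  # crude one-layer proxy
        self.seg_pos = nn.Embedding(64, enc_dim)                     # segment-level position embedding
        self.to_llm = nn.Linear(enc_dim, llm_dim)                    # project queries to the LLM width

    def forward(self, segments):
        """segments: list of (1, T_i, feat_dim) tensors from one long recording."""
        feats = []
        for i, seg in enumerate(segments):
            h, _ = self.encoder(seg)                                 # (1, T_i, enc_dim)
            h = h + self.seg_pos(torch.tensor([i]))                  # mark which segment this is
            q, _ = self.qformer(self.query.unsqueeze(0), h, h)       # (1, n_query, enc_dim)
            feats.append(q)
        speech_prompt = torch.cat(feats, dim=1)                      # concatenate over segments
        return self.to_llm(speech_prompt)   # prepended to the LLM token embeddings downstream

model = SegmentSSumSketch()
segs = [torch.randn(1, 300, 80), torch.randn(1, 280, 80)]
print(model(segs).shape)   # (1, 2 * 150, 4096)
```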
§.§ Speech feature extraction Initially, we need a speech encoder denote as S-Encoder, which can be pretrained or trained from scratch, for extracting speech features from the raw waveform. For clarity, let's define some key notations: X ∈ R^n_x × d_x represents the speech features extracted from the S-Encoder, where n and d are the numbers of vectors and hidden dimensions respectively. Q-Former is responsible for further compressing X into a fixed-length representation Q ∈ R^n_q × d_q, serving as the final input feature for the LLM. Notably, we included a weighted sum module in the model to help Q-Former extract a wide range of speech features, enabling the model to leverage useful signals in the speech aside from text. For longer audio inputs, we segment the speech into segments and introduce segment-level position embedding E_pos to X so that Q-Former can learn the positional information of different segments. So the the final speech feature F_speech can be calculated as follows: F_speech = [Q-Former(S-Encoder(x_i) ⊕ E_pos)]_i=1^N §.§ LLM for end-to-end Speech Summarization We choose LLaMA2-7B <cit.> as the base LLM and employ the parameter-efficient Low-rank Adaptation (LoRA) <cit.> to fine-tune the model, while keeping other LLM parameters frozen. The speech features F_speech are used as prompt tokens to guide the model to generate text summaries T_sum directly in an autoregressive manner. The transcript text (T_trans) corresponding to the speech is utilized as auxiliary information during the training process. Notably, we introduce embeddings (E_audio and E_text) to differentiate the modality information of different input features, thereby helping LLM bridge the modality gap. Therefore, the final loss we optimize is as follows: ℒ_LM = -∑_i=1^T_sumlog P(y_i|y_<i, F_speech⊕ E_audio;θ_LoRA) §.§ Training Strategy The training is divided into three stages, and the schematic diagrams of the model inputs and outputs for each stage are shown in Figure <ref>: * First, we train a sentence-level ASR model where Q-Former extracts speech feature tokens and LLM uses them as prompts to generate corresponding transcriptions. Each segment of the speech is optimized separately without interaction between segments and without the need for segment positional embeddings. * Next, we then flatten the speech features (F_speech) and transcription features (T_trans) from various segments of a long speech recording to train a document-level ASR task. This approach promotes contextual connections among segments, improving the model's capability to understand extended speech contexts. Additionally, randomly masking speech or transcription features within a segment helps align speech and text representations. * Finally, we optimize the ultimate end-to-end Speech Summarization task. Training directly on SSum still faces the modal gap issue compared to the TSum task. Therefore, we employ the concept of curriculum learning (CL) to gradually transition the model from the TSum task to the SSum task. At the beginning, the model utilizes all speech and text features to complete the summarization task, and the model's input aligns with Stage 2. Subsequently, we progressively remove the transcribe text features until only speech features remain. § EXPERIMENTAL SETUP In this section, we will discuss the details of our experiments, including the dataset, model configurations, evaluation metrics, and so on. 
§.§ Dataset The How-2 Dataset, as outlined in <cit.>, contains 2,000 hours of instructional videos accompanied by text transcripts, video content, speech, translations, and summaries. Abstractive summaries are generated based on speech for an end-to-end approach. Table <ref> presents the statistics for the training and testing partitions of the How2 dataset. The model features and reference summaries can be found here [https://github.com/srvk/how2-dataset]. At the same time, we merged the original speech segments in the dataset and kept the length of each individual speech segment to around 30 seconds to enhance encoding efficiency. §.§ Model and Training configurations The core components of our model are as follows: Speech Encoder: We begin by training a standard ASR model using an attention-based sequence model, comprising a 12-layer conformer encoder and a 6-layer transformer decoder. The training loss is a hybrid CTC/Attention, with a CTC weight of 0.3. The model utilizes hidden and feedforward dimensions of 768 and 3072, respectively. We use the encoder of the ASR model as the speech encoder and keep it frozen during subsequent training. Q-Former: Our Q-Former module inherits the settings from <cit.>, starting with a pretrained BERT_base <cit.> and keep updating during training. There are 150 trainable queries for each speech segment. Then, we concatenate the outputs of Q-Former to align with the input feature dimensions of LLM. Finally, for approximately 30 seconds of speech, the number of speech feature tokens is also 30. LoRA adapter for LLM: We use the LoRA approach to adapt the key, query, value and output layers of the self-attention mechanism leaving other part of LLaMA2-7B model unchanged . Unless specified otherwise, default LoRA hyperparameters are set to a rank of R = 8 and a = 16. Baseline systems: We compare two baseline systems: one uses ground truth (GT) transcripts, while the other incorporates ASR transcripts along with LLM for summarization generation. During training, the Huggingface transformers library [https://huggingface.co/docs/transformers/en/index] and 8 GPUs are used in all of our experiments. When training the Speech encoder, adam optimizer is used with a peak learning rate of 0.002 in 100k training steps and the batch size is 128. For the training of end-to-end models, we still use Adam optimizer with a learning rate of 2e-4, warmup steps of 8k, and a total training step of around 100K. Additionally, an early stop strategy is employed to prevent overfitting. For different stages of training, we adjust the parameters for gradient accumulation to maintain a batch size of 128. In the second training stage, we set the random masking probability to 0.2. When training in stage 3 with curriculum learning, we dedicate 20% of the training steps to jointly optimizing TSum and SSum, 50% to curriculum learning, and the final 30% to training the SSum task exclusively. §.§ Metrics We evaluate our models with ROUGE <cit.>, METEOR <cit.>, and BERTScore <cit.>, which are the most common automatic metrics for evaluating summarization models. § RESULT AND ANALYSIS §.§ Main Result Table 2 summarizes some of our experimental results, including baseline models for cascade method and typical end-to-end models from previous works. Our end-to-end model exceeds the baseline cascade system using ASR transcripts and LLaMA2-7B in various evaluation metrics, and even on par with systems using ground truth transcripts and LLaMA2-7B in the BERTScore metric. 
This demonstrates that our model has successfully mitigated the error propagation effects caused by ASR systems, and has successfully bridged the modality gap. When compared to some highly optimized strong end-to-end models in the past (utilizing TTS data augmentation, text summarization data), although our model shows a certain gap in ROUGE and METEOR metrics, it can essentially match them in terms of BERTScore. This also demonstrates the advantage of LLM in high-level semantic summarization capability. Relevant data augmentation and further optimization work are left for future exploration. The ablation experiments also demonstrate that the training in Stage 2 and the curriculum learning with the TSum task used in Stage 3 contribute to the final results of the model, with the latter being the more crucial factor. In the end, we attempted to use a larger LLaMA2-13B model to improve the summarization performance. However, we only observed an improvement in the ROUGE-1,2 metric, while the other metrics remained consistent with the LLaMA2-7B model. This may indicate that the 7B model is already sufficient to address the current task, or that larger models may have other training-related problems, which will also be explored in the future. §.§ Analysis To further analyze our models, we conducted the following additional experiments: Alignment for speech features: In training stage 1, we attempted to align the output of Q-Former with the input of LLM through an ASR task, allowing us to obtain an LLM based ASR model (LLM-ASR). The performance of this model can to some extent measure the effectiveness of feature alignment. We summarized some WER results of different models in Table <ref>. The Base-ASR comes from our own trained basic ASR model, while Baseline <cit.> is from previous work. The results indicate that we have obtained a competitive ASR model, with a decrease of 0.3 compared to the base ASR, showing that the features extracted by Q-Former can be recognized by the LLM. Long speech context learning: In training stage 2, we attempted to leverage document-level ASR tasks to enhance the modeling capability of long speech recordings, enabling us to obtain an LLM based document level ASR model (LLM-DOC-ASR). If we successfully achieve our goal, the recognition capability of long speech recordings by the model will also be improved. To validate our hypothesis, we compared the WER of the two ASR models obtained from training stage 1 and training stage 2, as well as the Perplexity (PPL) of the transcribed text at the document level. The results in Table <ref> show that both WER and PPL have been optimized, proving that LLM can more effectively handle long speech inputs after training in Stage 2. Weight distribution for tasks: In order to explore how our model utilizes different levels of speech features in various training tasks, we analyzed the weight distribution in the weighted sum module for the speech encoder. The weight distribution is plotted in Figure <ref>. Overall, the model tends to use high-level features more, whether it is for the early ASR task or the later Summarization task. This indicates that modality conversion tasks cannot benefit from low-level features. Nevertheless, we can still observe that, compared to the ASR task, the SSum task prefers higher-level abstract semantic information, with relatively higher weights for the last three layers. 
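The weighted-sum module whose weights are analysed here can be implemented as a set of learnable per-layer scalars normalised with a softmax; a minimal sketch based on the description above (not the released code):

```python
import torch
import torch.nn as nn

class LayerWeightedSum(nn.Module):
    """Combine hidden states from all encoder layers with learnable weights,
    letting the model trade off low-level and high-level speech features."""
    def __init__(self, num_layers: int):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_layers))   # one scalar per layer

    def forward(self, layer_states):
        # layer_states: (num_layers, batch, time, dim)
        w = torch.softmax(self.logits, dim=0)                 # normalised layer weights
        return torch.einsum("l,lbtd->btd", w, layer_states)

    def weight_distribution(self):
        # the distribution inspected when analysing which layers each task prefers
        return torch.softmax(self.logits, dim=0).detach()

ws = LayerWeightedSum(num_layers=12)
states = torch.randn(12, 2, 50, 768)
print(ws(states).shape, ws.weight_distribution())
```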
§ CONCLUSION In this work, we combine the cross-modal feature extractor Q-Former with an LLM to solve the end-to-end speech summarization task. To achieve this, we segment long speech recordings and extract speech features with Q-Former, then guide the LLM to generate summaries directly. We use ASR and the TSum task as auxiliary tasks and divide training into multiple stages to overcome the challenges faced by the model, such as feature space alignment, understanding long speech, and cross-modal mapping. Finally, we validate our model on the How2 dataset.
http://arxiv.org/abs/2407.01979v1
20240702063113
Unveiling Global Interactive Patterns across Graphs: Towards Interpretable Graph Neural Networks
[ "Yuwen Wang", "Shunyu Liu", "Tongya Zheng", "Kaixuan Chen", "Mingli Song" ]
cs.LG
[ "cs.LG", "cs.AI" ]
Yuwen Wang (School of Software Technology, Zhejiang University; State Key Laboratory of Blockchain and Security, Zhejiang University; Hangzhou High-Tech Zone (Binjiang) Institute of Blockchain and Data Security, Hangzhou, China) yuwenwang@zju.edu.cn Shunyu Liu (corresponding author; State Key Laboratory of Blockchain and Security, Zhejiang University; Hangzhou High-Tech Zone (Binjiang) Institute of Blockchain and Data Security, Hangzhou, China) liushunyu@zju.edu.cn Tongya Zheng (Big Graph Center, School of Computer and Computing Science, Hangzhou City University; College of Computer Science and Technology, Zhejiang University, Hangzhou, China) tyzheng@zju.edu.cn Kaixuan Chen, Mingli Song (State Key Laboratory of Blockchain and Security, Zhejiang University; Hangzhou High-Tech Zone (Binjiang) Institute of Blockchain and Data Security, Hangzhou, China) chenkx, brooksong@zju.edu.cn § ABSTRACT Graph Neural Networks (GNNs) have emerged as a prominent framework for graph mining, leading to significant advances across various domains. Stemming from the node-wise representations of GNNs, existing explanation studies have embraced the subgraph-specific viewpoint that attributes the decision results to the salient features and local structures of nodes. However, graph-level tasks necessitate long-range dependencies and global interactions for advanced GNNs, deviating significantly from subgraph-specific explanations. To bridge this gap, this paper proposes a novel intrinsically interpretable scheme for graph classification, termed Global Interactive Pattern (GIP) learning, which introduces learnable global interactive patterns to explicitly interpret decisions. GIP first tackles the complexity of interpretation by clustering numerous nodes using a constrained graph clustering module. Then, it matches the coarsened global interactive instance with a batch of self-interpretable graph prototypes, thereby facilitating a transparent graph-level reasoning process. Extensive experiments conducted on both synthetic and real-world benchmarks demonstrate that the proposed GIP yields significantly superior interpretability and competitive performance compared to state-of-the-art counterparts. Our code will be made publicly available[ The code is available at <https://github.com/Wangyuwen0627/GIP-Framework.git>.]. CCS Concepts: Information systems → Data mining; Computing methodologies → Learning latent representations. Unveiling Global Interactive Patterns across Graphs: Towards Interpretable Graph Neural Networks June 2024 § INTRODUCTION Graphs, serving as data structures capable of naturally modeling intricate relationships between entities, have pervasive applications in real-world scenarios, such as transportation networks <cit.>, social networks <cit.>, power systems <cit.>, and biological molecules <cit.>. In recent years, to effectively uncover potential information in graphs for applications, graph neural networks (GNNs) <cit.> have emerged as a prominent paradigm and made remarkable achievements.
Following a message-passing mechanism, GNNs aggregate the information from the local neighbors of each node to obtain node-wise representations, bolstering the development in various downstream tasks including node classification <cit.> and graph classification <cit.>. Despite the remarkable effectiveness of GNNs, their lack of explainability hinders human trust and thus limits their application in safety-critical domains. To mitigate this issue, recent efforts have explored identifying informative subgraphs that serve as either post-hoc or intrinsic explanations for the decisions made by GNNs. Specifically, a line of post-hoc studies <cit.> work on a pre-trained model and propose different combinatorial search methods for identifying the most influential subgraphs based on model predictions. However, since these methods train another explanatory model to provide explanations, they may be disloyal to the original model, resulting in distorted attribution analysis. In contrast to the post-hoc methods, the intrinsically interpretable ones endeavour to identify subgraphs during training and make reliable predictions guided by these subgraphs <cit.>. The pioneering works, e.g. GIB <cit.> and GSAT <cit.>, adopt the information bottleneck principle <cit.> to constraint the information flow from input graph to prediction, ensuring the label-relevant graph components will be kept while the label-irrelevant ones are reduced. Additionally, ProtGNN <cit.> learns representative subgraphs (i.e., prototypes) from inputs by prototype learning <cit.> and makes predictions based on the similarity between new instances and prototypes. Unfortunately, the explanation graph is generated by an extra projection process based on the prototype embedding, which can introduce explanatory biases. Graph-level tasks often necessitate global-level explanations to depict long-range dependencies and global interactions considering the whole graph <cit.>. For example, in the case of protein molecules, enzymes are distinguished from other non-enzyme proteins by having fewer helices, more and longer loops, and tighter packing between secondary structures <cit.>. Identifying such global structural patterns often requires the collective participation of dozens or even hundreds of amino acids. It is time-consuming to entail expert examination over the subgraph explanations of each node provided by previous subgraph-specific methods. Beyond the node-wise representations of early GNNs, recent state-of-the-art GNNs <cit.> have shifted the focus towards considering global interactions for graph-level tasks, enhancing the expressive power of GNNs by a large margin. Hence, there exists a significant gap between local subgraph-specific explanations and global-level explanations, which are required by both graph-level tasks and advanced GNNs. In this paper, we propose the Global Interactive Pattern (GIP) learning, a new interpretable graph classification task that approaches the problem from a global perspective. This task poses two key challenges for existing techniques, namely, high computational complexity and diverse global patterns. Firstly, the presence of a large number of nodes, along with their intricate connectivity, presents a significant challenge in modeling long-range dependencies and accurately extracting global interactions. Simply extending subgraph-specific methods to identify global interactive patterns would result in exponentially increasing computational complexity. 
This is particularly true in real-world graphs, where these patterns typically involve dozens or even hundreds of nodes. Secondly, there exist multiple interactive patterns for graphs belonging to the same class. Existing techniques either provide instance-level explanations or entail high costs for extracting graph patterns. Hence, it becomes crucial to identify representative and diverse patterns within an acceptable computational overhead for more comprehensive and accurate explanations. To tackle these challenges, we explore an innovative framework for solving GIP, by first compressing the graph and then identifying inter-cluster interactions in the coarsened graph instances, which we call interactive patterns, to determine the intrinsic explanations. Specifically, the framework consists of two key modules: a clustering assignment module and an interactive pattern matching module. First, in the clustering assignment module, we iteratively aggregate components with similar features or tight connections to form a cluster-level representation, and then extract global structural information based on the interactions between local structures, thus modeling the global interactions while aggregating the information of local substructures. Then, in the interactive pattern matching module, unlike prior research <cit.> in graph pattern recognition that targets learning representative embeddings in a hidden space, we define learnable interactive patterns in the form of graph structures to directly reveal the vital patterns at the graph level. Additionally, we introduce graph kernels as a measure of similarity between the coarsened graph and the interactive patterns, thereby driving the learning and matching of interactive patterns based on this similarity. Finally, with the similarity scores, a fully connected layer with softmax is applied to compute the output probabilities for each class. In summary, the main contributions of our work are as follows: * We explore a novel interpretable graph classification task termed Global Interactive Pattern (GIP) learning, taking a step further from local subgraph explanations to global interactive patterns. * We propose a holistic framework for solving GIP, which achieves a double win of high computational efficiency and accurate pattern discovery. By integrating learnable cluster constraints and graph prototypes, we can adaptively provide the decisions with reliable graph-level explanations. * Extensive experiments on both real-world and synthetic datasets demonstrate the effectiveness of our framework in achieving accurate prediction and valid explanation. In addition, visualization of the explanations further demonstrates the superior capability of our framework in identifying global interactive patterns. § RELATED WORK §.§ Graph Neural Networks Driven by the momentous success of deep learning, considerable effort has recently been devoted to developing deep neural networks for graph-structured data <cit.>. As one of the pioneering lines of work, graph neural networks (GNNs) <cit.> have demonstrated effectiveness in various real-world scenarios <cit.> such as traffic analysis <cit.>, drug generation <cit.>, and recommendation systems <cit.>. Generally, classic GNN variants adopt the message-passing mechanism <cit.> to update the embedding of each node based on the messages computed between the node and each of its neighbors.
Then, these node-wise representations are manipulated through concatenation or pooling operations to form graph-level representations for graph-level tasks. Although this unique message-passing mechanism enables GNNs to fully leverage the relationships between nodes in graph structure, such GNNs may suffer from over-smoothing due to repeated local aggregation and over-squashing due to the exponential growth of computational cost with increasing model depth. Recent years have witnessed many successful architectures that shift the focus towards considering global interactions for graph-level tasks. These approaches <cit.> model long-range dependencies and global structures to facilitate a more comprehensive acquisition of the global information in graphs, thus enhancing the expressive power of the model. Owing to the powerful representation capability, these GNNs have achieved state-of-the-art performance. §.§ Explainability of Graph Neural Networks Despite the great success of GNNs, their black-box nature undermines human trust, thereby hindering their application in high-stake domains. To bolster understanding of GNNs and provide more credible evidence for decision-making, plenty of researches focus on the explainability of GNNs is emerging. Such studies concentrate on identifying vital subgraphs, offering intrinsic or post-hoc explanations for GNNs. The post-hoc explainable methods focus on designing different combinatorial search method to explore important subgraphs based on model outputs <cit.>. As an initial endeavour, GNNExplainer <cit.> learns soft masks from edge and node features to identify pivotal subgraphs for explaining the prediction result. Furthermore, PGExplainer <cit.> employs a reparameterization trick to obtain approximated discrete masks instead of soft masks. In addition, XGNN <cit.> generates representative subgraphs for different classes as model-level explanations. Since these methods focus on providing post-hoc explanations for a trained GNN, they might fail to fit the original model precisely and generate biased explanation. Though it would be preferable to design interpretable GNNs <cit.>, there are still limited efforts in this regard <cit.>. The goal of these methods is to identify subgraphs during training and make reliable predictions guided by subgraphs <cit.>. GIB <cit.> and GSAT <cit.> adopt the information bottleneck principle <cit.> to constraint the information flow from the input graph to the prediction, ensuring the label-relevant components will be kept while the label-irrelevant ones are reduced. In addition, some existing works attempt to apply prototype learning for exploring important subgraphs from instances and make predictions based on the similarity between new instances and prototypes <cit.>. For example, ProtGNN <cit.> applies the Monte Carlo tree search <cit.> to identify subgraphs in the original graphs as prototypes, while PxGNN <cit.> obtains prototypes from learnable prototype embeddings by a pre-trained prototype generator. However, the aforementioned methods only provide one-side attribution analysis from a localized viewpoint, which may lead to under-representative explanations when higher-order node interactions or global graph structure play a pivotal role. To address this issue, in this paper, we propose an interpretable scheme for graph classification called GIP, that explicitly extracts global interactive patterns to deliver graph-level explanations. 
§ METHOD In this section, we elaborate the details of the proposed framework for GIP. First, in the clustering assignment module, we extract inter-cluster interactions from coarsened graph as global structural information. Then, in the interactive pattern matching module, we match the coarsened graph with a batch of learnable interactive patterns based on the similarity calculated by the graph kernel. Finally, with the similarity scores, the fully connected layer with softmax computes the probability distributions for each class. The architecture of the proposed framework is shown in Figure <ref>. §.§ Preliminaries §.§.§ Notations We denote an attributed graph with N nodes by G=(𝐕, 𝐗, 𝐀), where 𝐕={v_1,...,v_N} is the set of nodes in graph, 𝐗∈ℝ^N × d is the matrix consisting of the d-dimensional feature vector of each node, 𝐀∈{0, 1}^N × N is the adjacency matrix. 𝐀_ij = 1 if nodes v_i and v_j are connected; otherwise 𝐀_ij = 0. In this paper, we take graph classification as the target task. Given a set of M graphs 𝒢 = {G_1, G_2,..., G_M}, and each graph G_m is associated with a ground-truth class label y_m ∈𝒞, where 𝒞={1,2,...,C} is the set of candidate labels. The graph classification task aims to learn a graph classifier that predicts the estimated label ŷ_m for an input graph G_m. §.§.§ Graph Normalized Cut Graph normalized cut is an effective approach for realizing graph clustering. The goal is to construct a partition of the graph into K sets, such that the sets are sparsely connected to each other while the internal structure of the sets exhibits high cohesion <cit.>. We formalize the objective of the K-way normalized cut as follows: min_𝐕_1,...,𝐕_K1/K∑_k=1^Kcut(𝐕_k, 𝐕_k)/vol(𝐕_k), where 𝐕_k represents the nodes belonging to cluster k, vol(𝐕_k) = ∑_i,j ∈𝐕_k𝐀_ij counts the number of edges within cluster k, and cut(𝐕_k, 𝐕_k) = ∑_i ∈𝐕_k, j ∈𝐕\𝐕_k𝐀_ij counts the edges between the nodes in cluster k and the rest of the graph <cit.>. Let 𝐏∈{0,1}^N × K be the cluster assignment matrix, where K denotes the number of target clusters and 𝐏_ij=1 when node i belongs to cluster j. The objective function of the normalized cut can be further defined according to the derivation in <cit.>: min_𝐏∈{0,1}^N × K1/K∑_k=1^K𝐏_k^T𝐋𝐏_k/𝐏_k^T𝐃𝐏_k = min_𝐏∈{0,1}^N × K1/K·Tr(𝐏^T𝐋𝐏/𝐏^T𝐃𝐏), where 𝐏_k represents the k-th column in 𝐏, 𝐃 is the corresponding degree matrix, and 𝐋=𝐃-𝐀 is the graph Laplacian matrix. The optimization problem is NP-hard because the clustering assignment matrix 𝐏 takes discrete values <cit.>. Therefore, following the traditional approach of solving the probabilistic approximation of the K-way normalized cut <cit.>, we perform a continuous relaxation for 𝐏 such that it satisfies 𝐏_ij∈ [0,1] and ∀ i, ∑_j 𝐏_ij = 1. §.§.§ Random Walk Graph Kernel Random walk graph kernel is a kind of kernel function for graph similarity evaluation, whose core idea is to compute the similarity of two input graphs by counting the number of common paths in the two graphs. R-step random walk means that the length of paths formed by the random walk does not exceed R. To efficiently compute the random walk kernel, we follow the generalized framework of computing walk-based kernel <cit.>, and use the direct product graph for equivalence calculation. Given two graphs G = (𝐕, 𝐗, 𝐀) with N nodes and G^' = (𝐕^', 𝐗^', 𝐀^') with N^' nodes, the direct product graph G_× =(𝐕_×, 𝐗_×, 𝐀_×) is a graph with NN^' nodes, each representing a pair of nodes from G and G^'. 
The adjacency matrix 𝐀_× is equal to the Kronecker product of the adjacency matrices of G and G^', that is 𝐀_×=𝐀⊗𝐀^' <cit.>. The attribute of node (v,v^') in G_× is calculated based on the attribute of node v in G and node v^' in G^', i.e. 𝐗_×_(v,v^')=𝐗_v𝐗^' T_v^'. Performing a random walk on the direct product graph G_× is equivalent to performing the simultaneous random walks on graphs G and G^'. Therefore, The R-step random walk kernel for attributed graphs <cit.> can be calculated as: K(G, G^') = ∑_r=0^RK_r(G, G^') K_r(G, G^') = ∑_i,j=1^|𝐕_×|𝐗_×_i𝐗_×_j[𝐀_×^r]_ij where 𝐗_×_i denotes the feature of i-th nodes in G_× and the (i, j)-th element of 𝐀_×^r represents the number of common walks of length r between the i-th and j-th node in G_×. §.§ Clustering Assignment Module In this module, the underlying idea of our approach stems from related work on graph pooling <cit.>, which progressively creates coarser versions to represent cluster-level interactions by applying a series of compression blocks to the input graph. In each compression block, we first obtain the embedding vector 𝐙∈ℝ^N × d^' of nodes by encoder, which can be any model, and we apply GCN <cit.> as encoder for implementation. 𝐀 = 𝐃̂^-1/2𝐀̂𝐃̂^-1/2, 𝐙 = f({𝐗, 𝐀}; Θ_GCN), where 𝐀̂ = 𝐀 + 𝐈_N is the adjacency matrix with added self-loop, 𝐃̂ is the degree matrix of 𝐀̂, and Θ_GCN are parameters of the encoder. Then, we divide the original input graph into the cluster-level representation based on the generated node embeddings in a trainable manner. Specifically, we define a trainable cluster assignment matrix 𝐒 to map each node to a corresponding cluster, and each entry 𝐒_ij represents the probability of node i belonging to cluster j. Considering that the similarity of node features can affect clustering assignment to some extent, node feature embedding is incorporated into the learning process of 𝐒. We take 𝐙 as input and use a multi-layer perceptron (MLP) with softmax on the output layer to compute 𝐒: 𝐒 = Softmax (MLP (𝐙; Θ_MLP_1) ), where 𝐒 satisfies 𝐒_ij∈ [0,1] and ∀ i ∑_j 𝐒_ij = 1, Θ_MLP_1 denotes the learnable parameters in the MLP. Unlike the unconstrained learning process in <cit.>, we aim to impose constraints on 𝐒 in order to obtain clustering assignment results that better reflect the clustering characteristics of nodes in the real-world graphs. First, we optimize the learning of 𝐒 by minimizing an unsupervised loss term ℒ_clu, which defined on a relaxation formula that approximates the K-way normalized cut (<ref>): ℒ_clu = 1/K·Tr(𝐒^T𝐋𝐒/𝐒^T𝐃𝐒), where 𝐃 is the corresponding degree matrix, and 𝐋=𝐃-𝐀 is the graph Laplacian matrix. However, without additional constraints on the assignment matrix 𝐒, cluster assignment may fall into a local optimal solution: assigning all nodes to the same cluster. Hence, we introduce an balanced loss term ℒ_bal to encourage more balanced and discrete clusters: ℒ_bal = √(K)/N||∑_i=1^N𝐒_i||_F-1, where ||·||_F indicates the Frobenius norm, N is the number of nodes and K is the number of target clusters. In summary, the optimization objective of this module can be expressed as: ℒ_CA = α_1ℒ_clu + α_2ℒ_bal, where α_1 and α_2 control the ratio of the loss terms. Assuming the input adjacency matrix in the ℓ-th compression block is 𝐀^ℓ-1, the input node embedding matrix is 𝐙^ℓ-1, and the computed clustering assignment matrix is 𝐒^ℓ, we can generate a new coarsened adjacency matrix 𝐀^ℓ and a new embedding matrix 𝐗^ℓ for next compression block. 
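Before the coarsening step is written out, the two regularizers ℒ_clu and ℒ_bal defined above can be sketched for a single graph with dense tensors (PyTorch; the per-cluster ratio reading of the trace term is an assumption of this illustration, not the authors' exact implementation):

```python
import torch

def clustering_losses(S, A, eps=1e-9):
    """Dense, single-graph sketch of the two regularizers above.

    S : (N, K) soft cluster assignment, rows sum to 1
    A : (N, N) adjacency matrix
    Returns (l_clu, l_bal), where l_clu averages the per-cluster
    cut/volume ratios and l_bal is the balance term.
    """
    N, K = S.shape
    d = A.sum(dim=1)
    L = torch.diag(d) - A                                  # graph Laplacian
    cut = torch.diagonal(S.t() @ L @ S)                    # (K,) cut cost per cluster
    vol = torch.diagonal(S.t() @ torch.diag(d) @ S)        # (K,) cluster volumes
    l_clu = (cut / (vol + eps)).sum() / K
    l_bal = (K ** 0.5) / N * torch.linalg.norm(S.sum(dim=0)) - 1.0
    return l_clu, l_bal

# toy check on a random symmetric graph with a random soft assignment
torch.manual_seed(0)
A = (torch.rand(20, 20) > 0.7).float()
A = ((A + A.t()) > 0).float().fill_diagonal_(0)
S = torch.softmax(torch.randn(20, 4), dim=1)
l_clu, l_bal = clustering_losses(S, A)
print(float(l_clu), float(l_bal))
# L_CA = alpha_1 * l_clu + alpha_2 * l_bal, as in the objective above
```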
Specifically, we apply the following two equations: 𝐗^ℓ = 𝐒^ℓ^T𝐙^ℓ-1∈ℝ^N^ℓ× d, 𝐀^ℓ = 𝐒^ℓ^T𝐀^ℓ-1𝐒^ℓ∈ℝ^N^ℓ× N^ℓ, where N^ℓ denotes the number of target clusters in ℓ-th block and d denotes dimension of node features. By stacking compression blocks, we can obtain 𝐀^L and 𝐗^L for cluster-level representation CG, where L is the number of compression blocks. Considering the impact of the enormous edges in the coarsened graph, we propose to filter the edges. Specifically, we define the matrix 𝐌𝐚𝐬𝐤∈{0,1}^N^L× N^L to filter the edges in the coarsened graph, where N^L is the number of nodes in the coarsened graph. If 𝐀^L_ij exceeds threshold δ_1, the element at the corresponding position in 𝐌𝐚𝐬𝐤 is set to 1, otherwise it is set to 0: 𝐌𝐚𝐬𝐤_ij= { 1, if 𝐀^L_ij > δ_1; 0, else, . Thus, we obtain the filtered adjacency matrix 𝐀^L^'=𝐀^L⊙𝐌𝐚𝐬𝐤 for cluster-level representation, where ⊙ is the element-wise product. §.§ Interactive Patterns Matching Module In this module, we aim to learn representative inter-cluster structures and interactions for each class, which we call interactive patterns, to give accurate predictions and reliable explanations. First, we define a total of T learnable interactive patterns, i.e. 𝒫={P_1, P_2, ..., P_T}, and allocate them evenly to C classes. In order to provide a more understandable explanation, we define each interactive pattern P_t as a combination of the following two parts: (i) randomly initialized feature matrix 𝐗^P_t with pre-defined size; (ii) the topology 𝐀^P_t generated from the feature matrix, and the generation process of 𝐀^P_t is defined as follows: 𝐀^P_i_ij = σ (MLP ([𝐗^P_t_i;𝐗^P_t_j];Θ_MLP_2)) where σ(·) is the Sigmoid function, Θ_MLP_2 is trainable parameters of MLP, [·;·] is concatenation operation, 𝐗^P_t_i and 𝐗^P_t_j are features of nodes in interactive pattern. Therefore, the generated interactive patterns can be directly used for explanation without the need for additional graph projection or graph generation processes <cit.>. Then, for the coarsened graph CG and interactive pattern P_t, we propose to calculate their similarity through graph kernels <cit.>. The choice of graph kernels can be changed according to the actual application scenario. Here, we choose the R-step random walk graph kernel <cit.> which compares random walks up to length R in two graphs. Then, the similarity between the coarsened graph CG and the interactive pattern P_t can be expressed as: sim(CG, P_t) = K(CG, P_t), where K(CG, P_t) is calculated by equations (<ref>) and (<ref>). Considering the desired representativeness of the interactive patterns for their corresponding classes, we suppose that the learning objective of interactive patterns is to encourage each coarsened graph to approach the interactive patterns belonging to the same class, while moving away from the interactive patterns belonging to other classes. To achieve this, we introduce the multi-similarity loss <cit.> to constrain learning of patterns: ℒ_mul=1/M∑_m=1^M (1/γ_1log(1+∑_P_i ∈𝐏𝐨𝐬_me^γ_1(d_mi-λ)) +1/γ_2log(1+∑_P_i ∈𝐍𝐞𝐠_me^-γ_2(d_mi-λ))) where 𝐏𝐨𝐬_m denotes the set of interactive patterns belonging to the same class as the coarsened graph CG_m, 𝐍𝐞𝐠_m denotes the set of interactive patterns apart from these, d_mi denotes the distance between coarsened graph CG_m and interactive pattern P_i, γ_1 and γ_2 control the contributions of different items, and λ represents the margin which controls the distribution range of interactive patterns belonging to the certain class. 
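The similarity sim(CG, P_t) used in this loss is the R-step random walk kernel from the preliminaries; a small dense NumPy sketch of that computation (illustrative only — dense Kronecker products are affordable here because the coarsened graphs and interactive patterns are small):

```python
import numpy as np

def rw_kernel(A1, X1, A2, X2, R=3):
    """R-step random walk kernel for attributed graphs via the direct product graph.

    A1, A2 : (N, N), (M, M) adjacency matrices
    X1, X2 : (N, d), (M, d) node feature matrices
    Implements K(G, G') = sum_{r=0..R} sum_{i,j} x_i x_j [A_x^r]_{ij},
    with A_x = A1 (x) A2 and product-node feature x_(v,v') = <X1_v, X2_v'>.
    """
    Ax = np.kron(A1, A2)                  # adjacency of the direct product graph
    x = (X1 @ X2.T).reshape(-1)           # feature of each product node (v, v')
    K, Ar = 0.0, np.eye(Ax.shape[0])
    for _ in range(R + 1):
        K += x @ Ar @ x                   # sum_{i,j} x_i x_j [A_x^r]_{ij}
        Ar = Ar @ Ax                      # next power of A_x
    return K

# toy usage on two tiny attributed graphs
A1 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
A2 = np.array([[0, 1], [1, 0]], float)
X1 = np.random.default_rng(0).normal(size=(3, 4))
X2 = np.random.default_rng(1).normal(size=(2, 4))
print(rw_kernel(A1, X1, A2, X2, R=2))
```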
For the computation of d_mi, we apply the distance in kernel space <cit.>: d_mi = √(1/2(K(CG_m, CG_m)+K(P_i,P_i))-K(CG_m, P_i)) Additionally, we encourage diversity in interactive patterns by adding the diversity loss, which penalizes interactive patterns that are too close to each other: ℒ_div = ∑_c=1^C∑_P_i, P_j∈𝒫_cmax(0, sim(P_i, P_j)-δ_2) where 𝒫_c denotes the interactive patterns belonging to class c and δ_2 is the threshold for similarity measurement. ℒ_IPM = α_3ℒ_mul+α_4ℒ_div where α_3 and α_4 control the ratio of the loss terms. §.§ Interpretable Classification with interactive patterns §.§.§ Classification and Learning Objective Finally, the T similarity scores between the coarsened graph and each interactive pattern are fed into the fully connected layer to obtain the output logits. Then, the logits processed with softmax to yield the probability distribution h_i for a given graph G_i. To ensure the accuracy of the proposed framework, we apply a cross-entropy loss to leverage the supervision from the labeled set: ℒ_CE = 1/M∑_i=1^MCrsEnt(h_i,y_i) where y_i is the true label of input graph. To sum up, the objective function we aim to minimize is: ℒ = ℒ_CE + β_1ℒ_CA+β_2ℒ_IPM where ℒ_CA and ℒ_IPM are loss terms of the clustering assignment module and interactive patterns matching module, β_1 and β_2 control the contribution of these loss terms. §.§.§ Explainability From the class perspective, the learned interactive patterns 𝒫 reveal the cluster-level interaction characteristics of the graphs in each class. From the instance perspective, for the test graph G_t, we can identify the most similar interactive pattern in class ŷ_t with G_t as the instance-level explanation: Ĝ_̂t̂^̂*̂ = max_P_i∈𝒫^ŷ_t sim(G_t,P_i) where 𝒫^ŷ_t is the set of interactive patterns belonging to class ŷ_t. Since the prediction of G_t is based on several patterns, the instance-level explanation can be several similar patterns in class ŷ_t, thereby bringing deeper insights into the graph itself. § EXPERIMENTS §.§ Experimental Settings §.§.§ Datasets In the experiment, we use five real-world datasets with different characteristics (e.g., size, density, etc.) for graph classification. Additionally, to better demonstrate the explainability provided by our framework, we design two synthetic datasets. The specific information of the datasets is as follows: * Real-world Datasets: To probe the effectiveness of our framework in diffrent domains, we use protein datasets including ENZYMES, PROTEINS <cit.>, D&D <cit.>, molecular dataset MUTAG <cit.> and scientific collaboration dataset COLLAB <cit.>. The statistics of the datasets are presented in Appendix <ref>. * Synthetic Datasets: To better demonstrate the interpretability of our framework, we design two synthetic datasets: GraphCycle and GraphFive. Their labels are based on the interactive patterns between local structures. GraphCycle consists of two classes: Cycle and Non-Cycle, while GraphFive consists of five classes: Wheel, Grid, Tree, Ladder, and Star. The specific implementation details are presented in Appendix <ref>. §.§.§ Baselines We extensively compare our framework with the following three types of baselines: * Widely Used GNNs: We compare the prediction performance with the powerful GNN models including GCN <cit.>, DGCNN <cit.>, Diffpool <cit.>, RWNN <cit.> and GraphSAGE <cit.>. * Post-hoc Explainable GNNs: We compare the explanation performance with the post-hoc explainable methods including GNNExplainer <cit.>, SubgraphX <cit.> and XGNN <cit.>. 
* Interpretable GNNs: We compare the prediction and explanation performance with interpretable models including ProtGNN <cit.>, KerGNN <cit.>, π-GNN <cit.>, GIB <cit.>, GSAT <cit.> and CAL <cit.>. More experimental settings will be presented in Appendix <ref> §.§ Quantitative Analysis To validate the effectiveness of our framework, we first compare it with the baselines in terms of prediction and explanation performance on several graph classification datasets. §.§.§ Prediction Performance To demonstrate the effectiveness of our approach in providing accurate predictions, we choose classification accuracy and F1 scores as evaluation metrics, and compare them with widely used GNNs and interpretable GNNs on both real-world and synthetic datasets. We apply three independent runs and report the average results along with the standard deviations in Table <ref>. From the Table <ref>, we can observe that: * Our framework achieves superior prediction performance compared to most of widely used GNNs. Specifically, in terms of classification accuracy, our framework outperforms widely used GNNs on six of the seven datasets. Particularly on MUTAG, our framework outperforms widely used models by 5.66%~35.79%. Furthermore, for the dataset in which our framework lagged behind (D&D), our framework only falls behind by 0.5% compared to the best-performing widely used model. For the F1 score metric, our framework surpasses all widely used baselines in two of the seven datasets. Additionally, it achieves second-best performance in three datasets. In the remaining two datasets, it also performs comparably to most of widely used baselines. * Our framework significantly outperforms the leading interpretable GNNs in prediction performance. On four of the seven datasets, our framework exceeds previous interpretable methods in terms of both accuracy and F1 score. On the remaining three datasets, although its accuracy/F1 score is slightly lower than the best-performing interpretable method, it still maintains the best performance in another metric. This demonstrates that our framework can consistently learn high-quality patterns for accurate predictions on different datasets; while simply selecting subgraphs might result in sub-optimal results. §.§.§ Explanation Performance We further compare the explanation performance of our method with that of interpretable methods and post-hoc explainable methods with three evaluation metrics, including explanation accuracy, consistency and silhouette score. We perform three independent runs and report the average results. * Explanation Accuracy. We use trained GNNs to predict the explanations produced by different methods and take the confidence score of the prediction as the accuracy of the explanation <cit.>. We compare our framework with interpretable methods and post-hoc explainable methods, the results are shown in Figure <ref>. Compared to previous interpretable methods, our method exhibits the highest explanation accuracy in five out of seven datasets, and achieves the second-best performance in the remaining dataset. Compared to post-hoc explainable methods, our method also achieves the highest explanation accuracy on most datasets. * Consistency. In the two synthetic datasets, we calculate the similarity between the explanations produced by different methods and the ground-truth. Here, we use the normalized results of random walk graph kernel as the measure of similarity. The results are presented in Table <ref>. 
Our framework outperforms other baselines by a significant margin across all datasets. This indicates that our framework can provide more accurate explanations. * Silhouette Score. High-quality interactive patterns can tightly cluster instances in dataset. Therefore, we use generated interactive patterns as centers to assign each graph to the nearest interactive pattern and then calculate the silhouette scores <cit.> to evaluate the compactness and separability of the clusters. We compare our method with another prototype-based approach ProtGNN, and the results are shown in Table <ref>. Our method consistently achieves better performance on all datasets, which further demonstrates that our framework can obtain more representative patterns. §.§ Qualitative Analysis To qualitatively evaluate the performance, we visualize the obtained interative patterns of our framework. From class perspective, we present the explanations on the synthetic dataset GraphCycle by visualizing part of the interactive patterns of different classes. The results is shown in Figure <ref>(a). We can find that our framework manages to learn patterns that are consistent with the ground-truth of “Cycle” and “Non-Cycle”. For comparison, we also show the identified explaintions of another methods (ProtGNN) that can provide class-level explanations, the results are shown in Figure <ref>(b). It can be observed that the explanations identified by ProtGNN do not exhibit distinctiveness across different classes. The reason may lie in the fact that the GraphCycle dataset does not exhibit distinctive properties in local structures, and the method based on subgraph exploration fail to capture the interactions between local substructures, thus resulting in weaker explanations. Therefore, we believe that our framework is able to unveil representative global patterns. More results of the explanation from class perspective will be presented in Appendix <ref>. From instance perspective, we identify one or more interaction patterns similar to the input graph in the decision-making process of the model to serve as instance-level explanations. §.§ Efficiency Study In this section, we compare the efficiency of our proposed framework with several interpretable baselines. In Table <ref>, we show the time required to finish training for each interpretable model. It can be observed that the efficiency of our method is only slightly inferior to KerGNN and π-GNN. According to the analysis above, our method outperforms both KerGNN and π-GNN in terms of both prediction performance and explanation performance. Therefore, we believe that the slight additional time cost is worthwhile. §.§ Ablation Studies In this section, we perform ablation studies of our framework to explore the impact of different experimental setups on the effectiveness of the framework and explore the role of different modules. Due to space limitations, we only present a portion of results here. More results will be shown in Appendix <ref>. §.§.§ Influence of the Number of Compression Blocks First, we investigate the effect of the number of compression blocks L and the compression ratio q, where q represents the ratio of the number of nodes after compression to the number of nodes before compression. We alter the values of L and q as {1, 2} and {0.1, 0.2, 0.3, 0.5}. We conduct experiments on GraphCycle dataset, and the results of classification accuracy and explanation accuracy are presented in Figure <ref>. 
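For reference, the silhouette-based evaluation described above, which assigns each graph to its nearest interactive pattern and scores the induced clustering, can be sketched as follows. The array layout and helper name are our own assumptions; the distances are meant to be the kernel-space distances d_mi used in the paper.

```python
import numpy as np
from sklearn.metrics import silhouette_score

def pattern_silhouette(dist_to_patterns, pairwise_graph_dist):
    """dist_to_patterns:    (n_graphs, n_patterns) distances d_mi to each pattern
    pairwise_graph_dist: (n_graphs, n_graphs) precomputed distances between graphs
    Assign each graph to its nearest interactive pattern, then score the clustering.
    """
    labels = np.argmin(dist_to_patterns, axis=1)
    return silhouette_score(pairwise_graph_dist, labels, metric="precomputed")
```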
We can find that when the compression ratio is too high or too low, there is a degradation in both classification accuracy and explanation accuracy. This may be due to the fact that when the compression ratio is too low, the presence of noisy structures may interfere with the extraction of global information, while a high compression ratio may result in the loss of some information. Additionally, we also find that the effect of the number of compression blocks on the results varies with different compression ratios. Therefore, it is crucial to select appropriate number of compression blocks and compression ratios for optimal model performance. §.§.§ Influence of the Number of interactive patterns Then, we vary the number of interactive patterns per class T/C as {2, 4, 6, 8, 10} to investigate its impact to our framework. We report the results on four datasets in Figure <ref>. We find that with an increase in the number of interactive patterns, both the classification accuracy and explanation accuracy will initially increase and then decrease. When the number of interactive patterns is too small, they cannot represent all instances in the dataset, resulting in poor prediction performance. When the number of the interactive patterns is too large, we may obtain excessively similar interactive patterns. In such cases, the prediction performance may be worse. The above observations also pave a way for selecting optimal number of interactive patterns in our framework. §.§.§ Influence of Different Modules We adopt clustering assignment module and interactive patterns matching module in our framework. In order to explore the contribution of these two modules, we implement two variants: (i) without interactive patterns matching module and (ii) without clustering assignment module. As shown in Figure <ref>, we can find that the performance is slightly inferior when the two modules are used individually, while the combination of these two modules achieve the best performance. Such merits stem from the fact that the combination of these two modules can help to identify the common characteristics in the graphs from the perspective of the global structure interactions, thus effectively enhancing the depth of information mining in graphs. § CONCLUSION In this article, we explore a novel intrinsically explainable graph classification task, called Global Interactive Pattern (GIP) learning. In contrast to previous methods which focus on exploring local subgraphs for explanation, we propose to analyze cluster-level interaction patterns from a global perspective for attribution analysis. To this end, we construct a two-stage framework for implementing GIP, by first performing compression of the graph and then identifying interactive patterns of the coarsened graphs to determine the intrinsic explanations. Extensive experiments on real-world datasets and synthetic datasets demonstrate the effectiveness of our framework in terms of prediction and explanation performance. This also signifies the value of mining interactive patterns from a global perspective to some extent. Therefore, our work paves a novel path for interpretable graph classification. In the future, we will further explore this task and endeavor to extend our method to more practical scenarios. 
This work was supported in part by the Joint Funds of the Zhejiang Provincial Natural Science Foundation of China under Grant LHZSD24F020001, in part by the Zhejiang Province “LingYan" Research and Development Plan Project under Grant 2024C01114, and in part by the Zhejiang Province High-Level Talents Special Support Program “Leading Talent of Technological Innovation of Ten-Thousands Talents Program" under Grant 2022R52046. ACM-Reference-Format § MORE IMPLEMENTATION DETAILS §.§ Datasets ENZYMES is a proteins dataset from the BRENDA database <cit.>. It comes with the task of classifying the enzymes to one out of six EC top-level classes. Specific statistics of the dataset are shown in Table <ref>. PROTEINS is a dataset of proteins from Dobson and Doig dataset <cit.>. It comes with the task of classifying proteins into enzymes and non-enzymes. Specific statistics of the dataset are shown in Table <ref>. D&D <cit.> is a dataset containing high-resolution proteins extracted from a non-redundant subset of the Protein Data Bank. Nodes are amino acids, and two nodes are connected by an edge if the distance between them is less than 6 angstroms. Specific statistics of the dataset are shown in Table <ref>. MUTAG <cit.> is a molecular property prediction dataset, where nodes are atoms and edges are chemical bonds. Each graph is associated with a binary label based on its mutagenic effect. Specific statistics of the dataset are shown in Table <ref>. COLLAB <cit.> is a scientific collaboration dataset. A graph corresponds to a researcher’s ego network, i.e., the researcher and its collaborators are nodes and an edge indicates collaboration between two researchers. A researcher’s ego network has three possible labels, i.e., High Energy Physics, Condensed Matter Physics, and Astro Physics, which are the fields that the researcher belongs to. Specific statistics of the dataset are shown in Table <ref>. GraphCycle is a self-designed synthetic dataset. Specifically, we first generate 8~15 Barabási-Albert graphs as communities, each containing 10~200 nodes. Then, we connect the generated BA graphs in pre-defined two shapes: Cycle and Non-Cycle. To connect nodes in different clusters, we randomly add edges with a probability ranging from 0.05 to 0.15. Specific statistics of the dataset are shown in Table <ref>. GraphFive is a self-designed synthetic dataset. Specifically, we first generate 8~15 Barabási-Albert graphs as communities, each containing 10~200 nodes. Then, we connect the generated BA graphs in pre-defined five shapes: Wheel, Grid, Tree, Ladder, and Star. To connect nodes in different clusters, we randomly add edges with a probability ranging from 0.05 to 0.15. Specific statistics of the dataset are shown in Table <ref>. §.§ Hyper-parameter Settings The hyper-parameters used in our framework include batch size, optimizer, learning rate, epoch, the α_1 and α_2 for controlling loss terms in clustering assignment module, the α_3 and α_4 for controlling loss terms in interactive patterns matching module, the β_1 and β_2 for controlling the contribution of the two modules, etc. The specific settings are presented in Table <ref>. § MORE CLASS-LEVEL EXPLANATIONS In this section, We will provide more visualization results of class-level explanations on different datasets. We visualize the global interactive patterns identified in the PROTEINS, D&D, and GraphFive datasets as explanations from class perspective. The results are shown in Figure <ref>, Figure <ref>, and Figure <ref>. 
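Before turning to the visualizations, here is a minimal sketch of the GraphCycle construction described in the datasets appendix above. The helper names, the exact wiring of communities, and the use of a path layout for the Non-Cycle class are our own simplifications; the paper's generator may differ in detail.

```python
import random
import networkx as nx

def make_graphcycle(label_cycle=True, seed=0):
    """One GraphCycle-style graph: 8-15 Barabasi-Albert communities of 10-200
    nodes, connected in a cycle (or left open for the Non-Cycle class), with
    sparse random inter-community edges (probability in [0.05, 0.15])."""
    rng = random.Random(seed)
    n_comm = rng.randint(8, 15)
    comms = [nx.barabasi_albert_graph(rng.randint(10, 200), 2,
                                      seed=rng.randint(0, 10**6))
             for _ in range(n_comm)]
    G = nx.disjoint_union_all(comms)       # relabels nodes consecutively
    offsets = [0]
    for c in comms[:-1]:
        offsets.append(offsets[-1] + c.number_of_nodes())
    pairs = list(zip(range(n_comm - 1), range(1, n_comm)))
    if label_cycle:
        pairs.append((n_comm - 1, 0))      # close the loop only for "Cycle"
    p = rng.uniform(0.05, 0.15)
    for a, b in pairs:
        for u in comms[a].nodes:
            for v in comms[b].nodes:
                if rng.random() < p:
                    G.add_edge(offsets[a] + u, offsets[b] + v)
    return G
```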
It can be easily observed that the interaction patterns exhibit commonalities within the same class, while also displaying a certain degree of differentiation between different classes. For example, in the PROTEINS dataset, the interaction patterns in enzymes exhibit more numerous and longer loops, as well as tighter connections, compared to the interaction patterns in non-enzyme. This observation provides us with new insights to distinguish graphs with different property in the absence of expertise. In the future, we will cooperate with domain experts to conduct more comprehensive analysis. Similarly, in the GraphFive dataset, the identified interaction patterns in different classes exhibit shapes similar to our pre-defined ground-truth. Therefore, our framework is capable of mining representative interaction patterns in graphs of different classes. § MORE ABLATION STUDIES §.§ Influence of the Compression Blocks In this section, we continue the discussion in Section <ref>, and analyze the influence of the number of compression blocks and compression ratios on the model performance with D&D and GraphFive datasets. We present the results in Figure <ref>. It can be seen that for different datasets, the appropriate number of compression layers and compression ratios vary, further confirming the discussion in Section <ref>. However, in most cases, fewer compression layers and moderate compression ratios will yield better results. §.§ Influence of the Number of Interactive Patterns In this section, we supplement the work in Section <ref> and demonstrate the variations in model performance with changes in the number of interactive patterns per class on the ENZYMES, COLLAB, and GraphFive datasets. The results are shown in Figure <ref>. We further note that changes in the number of interaction patterns have different effects on prediction performance and explanation performance, which requires us to further consider the balance between prediction performance and explanation performance to determine the appropriate number of interaction patterns. §.§ Influence of Different Modules In this section, we present more results about the influence of different modules on the model performance. The results on ENZYMES, COLLAB, and GraphFive datasets are shown in Figure <ref>. These results show the same trend as in Section <ref>, i.e., the combination of the two modules achieves better results, which can indicate that our two-stage framework is effective.
http://arxiv.org/abs/2407.02422v1
20240702164901
Close, But Not There: Boosting Geographic Distance Sensitivity in Visual Place Recognition
[ "Sergio Izquierdo", "Javier Civera" ]
cs.CV
[ "cs.CV" ]
Close, But Not There S. Izquierdo and J. Civera I3A, University of Zaragoza, Spain {izquierdo, jcivera}@unizar.es Close, But Not There: Boosting Geographic Distance Sensitivity in Visual Place Recognition Sergio Izquierdo0000-0002-5639-5035 Javier Civera0000-0003-1368-1151 ========================================================================================== § ABSTRACT Visual Place Recognition (VPR) plays a critical role in many localization and mapping pipelines. It consists of retrieving the closest sample to a query image, in a certain embedding space, from a database of geotagged references. The image embedding is learned to effectively describe a place despite variations in visual appearance, viewpoint, and geometric changes. In this work, we formulate how limitations in the Geographic Distance Sensitivity of current VPR embeddings result in a high probability of incorrectly sorting the top-k retrievals, negatively impacting the recall. In order to address this issue in single-stage VPR, we propose a novel mining strategy, CliqueMining, that selects positive and negative examples by sampling cliques from a graph of visually similar images. Our approach boosts the sensitivity of VPR embeddings at small distance ranges, significantly improving the state of the art on relevant benchmarks. In particular, we raise recall@1 from 75% to 82% in MSLS Challenge, and from 76% to 90% in Nordland. Models and code are available at https://github.com/serizba/cliquemininghttps://github.com/serizba/cliquemining. § INTRODUCTION Visual Place Recognition (VPR) refers to identifying a place from a query image ℐ_q ∈ℝ^w × h × 3, which boils down to retrieving the k closest images {ℐ_1, , ℐ_k} from a database where they are georeferenced. VPR is fundamental in several computer vision applications. It constitutes the first stage of visual localization pipelines by providing a coarse-grain pose that reduces the search space in large image collections. This pose can be later refined by robust geometric fitting from local feature matches <cit.>. It is also essential in visual SLAM, in which it is used to detect loop closures and remove geometric drift <cit.>, or as the basis for topological SLAM <cit.>. In VPR pipelines, every RGB image ℐ_i is typically mapped to a low-dimensional embedding x_i ∈ℝ^d by a deep neural network f_θ : ℐ_i → x_i that extracts and aggregates visual features that are relevant for the task. The closest samples are retrieved by a nearest-neighbour search using distances in the embedding space d_i^e = ||x_q - x_i||_2, which hopefully correspond to the views with smallest geographic distance d_i^g = ||p_q - p_i||_2 between them, with p_i ∈ℝ^3 standing for the camera position for ℐ_i. The challenge lies on learning the wide variability in the visual appearance of places, caused among others by environmental, weather, seasonal, illumination and viewpoint variability, or dynamic content. Recent years have witnessed significant advances in VPR, driven among others by enhanced network architectures <cit.>, loss functions <cit.>, or two-stage re-ranking strategies <cit.>. In this work, we start by analyzing the Geographic Distance Sensitivity (GDS) of VPR embeddings, that can be illustrated by a plot of the distribution of embedding distances d^e vs. geographic distances d^g, as in the centre of <ref>. 
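In code, the single-stage retrieval described above reduces to a nearest-neighbour search over the database descriptors. The minimal sketch below is our own illustration, with assumed array names (query_desc, db_desc, db_pos), not the pipeline of any cited method.

```python
import numpy as np

def retrieve_topk(query_desc, db_desc, db_pos, k=5):
    """Single-stage VPR: rank references by descriptor distance d^e = ||x_q - x_i||_2.

    query_desc: (d,)   embedding of the query image
    db_desc:    (N, d) embeddings of the geotagged references
    db_pos:     (N, 2) metric positions of the references (returned for evaluation)
    """
    d_e = np.linalg.norm(db_desc - query_desc[None, :], axis=1)
    topk = np.argsort(d_e)[:k]            # indices of the k closest embeddings
    return topk, d_e[topk], db_pos[topk]

# A retrieval is usually counted as correct if the returned reference lies
# within 25 m of the query position, which the model never sees.
```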
The plot shows two cases: in orange the distribution a typical VPR pipeline would achieve, and in blue the distribution that would be obtained by a model with enhanced GDS, result of training using our novel CliqueMining, which we will introduce later. Note how a high variance and a small slope results in a high probability of incorrectly sorting the top-5 retrievals. The top-1 retrieval on the left is, as it is written in the title, close but not there. By decreasing the variance and increasing the slope the probability of an incorrect ordering decreases. <ref> shows this phenomenon occurring in real datasets when using the state-of-the-art baseline DINOv2 SALAD <cit.>. Observe how the top-5 retrievals without our CliqueMining in MSLS <cit.> and Nordland <cit.> are not properly sorted by real geographic distance. While two-stage re-ranking approaches might assist in alleviating this, their local feature matching stage come with a prohibitive storage and computational footprint. Additionally, recent methods using only global features <cit.> already surpass those that involve local features for re-ranking. Although mining strategies also aim to improve performance by compiling informative batches during training, existing strategies are not specifically tailored to enhance GDS in densely sampled data. In addition to analyzing GDS, in this work we propose a novel mining strategy, CliqueMining, explicitly tailored to address it. Our hypothesis is that, in order to boost the GDS, the training batches should include images of highly similar appearance at small distances, that are not explicitly searched for in current mining schemes. We achieve that by organizing our training samples as a graph from which we extract cliques that represent sets of images that are geographically close. Our experiments show that, in in this way, using CliqueMining on top of a baseline model obtains substantial improvements in recall metrics. § RELATED WORK Early approaches to VPR were mainly based on aggregating handcrafted features,  <cit.>. More recent ones have sometimes used deep backbones that were pre-trained in supervised <cit.> and unsupervised <cit.> setups, showing better generalization and performance. However, the state of the art has been typically represented in the last years by deep models specifically trained or fine-tuned for VPR tasks, , <cit.>. For the general perspective and evolution of VPR over the last years, too vast to be fully referenced here, the reader is referred to existing tutorials and surveys focused specifically on VPR <cit.> or on general content-based image retrieval <cit.>. In the rest of the section, we will only explicitly cite those works that are most related to our contribution. Overall, training details matter in image retrieval, and are task-specific. Typically, contrastive <cit.> and triplet <cit.> losses are used to train a deep model that maps images into an embedding space, in which similar samples are close together and dissimilar ones are far apart. Although other losses have been proposed in the literature,  <cit.>, Musgrave  <cit.> and Roth  <cit.> showed a higher saturation than the indicated in the literature. The particularities of VPR, however, can be leveraged in task-specific losses. For example, Leyva-Vallina  <cit.> grade similarity based on spatial overlap to make losses more informative. Ali-bey  <cit.> showed that the multi-similarity loss <cit.> can be effectively used for VPR tasks. 
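For reference, a minimal sketch of the multi-similarity loss mentioned here, in its usual cosine-similarity form and with the in-batch pair selection used later in this paper. Hyper-parameter values and function names are ours, not the authors'.

```python
import torch

def multi_similarity_loss(emb, labels, alpha=2.0, beta=50.0, lam=0.5, eps=0.1):
    """Multi-similarity loss over a batch of L2-normalised embeddings `emb`
    with integer place labels `labels` (illustrative hyper-parameters)."""
    sim = emb @ emb.T                       # pairwise cosine similarities
    idx = torch.arange(len(emb), device=emb.device)
    loss, n = emb.new_zeros(()), 0
    for i in range(len(emb)):
        pos = (labels == labels[i]) & (idx != i)
        neg = labels != labels[i]
        if pos.sum() == 0 or neg.sum() == 0:
            continue
        # online pair selection: keep only informative pairs around the margin
        neg_keep = sim[i][neg] + eps > sim[i][pos].min()
        pos_keep = sim[i][pos] - eps < sim[i][neg].max()
        pos_s, neg_s = sim[i][pos][pos_keep], sim[i][neg][neg_keep]
        pos_term = torch.log1p(torch.exp(-alpha * (pos_s - lam)).sum()) / alpha
        neg_term = torch.log1p(torch.exp(beta * (neg_s - lam)).sum()) / beta
        loss = loss + pos_term + neg_term
        n += 1
    return loss / max(n, 1)
```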
They curated a dataset, GSV-Cities, and organized it on sparse places that, combined with the multi-similarity loss led to significant performance gains. As other recent works <cit.>, our contribution builds on top of the multi-similarity loss on GSV-Cities. However, the sparse nature of the GSV-Cities dataset <cit.> limits the GDS of the models in densely sampled data, present in many benchmarks <cit.>. We argue that densely sampled data is relevant in VPR as it is a prevalent condition in numerous applications, owing to the proliferation of mobile computational platforms capturing video (such as cars, drones, glasses and phones) and the availability of tools to crowdsource and store big data. Mining informative batches matters as much or even more than the chosen losses <cit.>. “Easy” samples contribute with small loss values, which may slow down or plateau the training <cit.>. On the other hand, using only “hard” samples produces noisy gradients and may overfit or converge to local minima <cit.>, which suggests a sweet spot in mixed strategies <cit.>. As another taxonomy, mining can be done offline after a certain number of iterations <cit.>, with high computational costs, or online within each batch <cit.>. In practice, “Hard” negatives samples are typically used, as they are easy to mine and informative <cit.>. “Hard” positive mining <cit.> is more challenging to implement, as it is sometimes caused by occlusions, large scale changes or low overlap, which may be misleading and harm generalization <cit.>. Wang  <cit.> generalizes sampling schemes by weighting pairs in the multi-similarity loss according to their embedding distance. None of the mining approaches in the literature, however, addresses GDS aspects as we do with our CliqueMining. § GEOGRAPHIC DISTANCE SENSITIVITY IN VPR As already said, <ref> shows examples of DINOv2 SALAD <cit.> retrievals on MSLS Train <cit.> and Nordland <cit.>. Although the recall@1 for these specific queries is zero, dismissing the model's performance as entirely inaccurate would be unfair. Within the top-5 retrievals, some predictions are indeed correct, and most incorrect predictions are relatively close to the decision threshold. These examples uncover a common issue in VPR models: their inability to finely discriminate between similar viewpoints. Note how our novel CliqueMining, that we will describe in next sections, discriminates better for this particular case. We explain this phenomenon using the concept of Geographic Distance Sensitivity (GDS), i.e., the model's ability to assign smaller descriptor distances to pairs of images that are geographically closer. VPR models should have a high GDS, that is, they should produce descriptors that maximize the probability P(d_i^e < d_j^e | d_i^g < d_j^g). Seeking for a high GDS requires two desiderata to hold. (i) The expected value of the descriptor distance of a pair should be smaller than that of a pair geographically further from the query 𝔼[d_i^e - d_j^e | d_i^g < d_j^g] < 0. (ii) The dispersion of descriptor distances conditioned on a certain geographic distance should be as small as possible 𝔼[(d_i^e - 𝔼[d_i^e | d_i^g])^2 | d_i^g ] → 0. Failing to achieve these two leads to a high probability of retrieving an incorrect order of candidates. We hypothesize that VPR models struggle to precisely rank between closely spaced locations due to their limited GDS at small distance ranges. 
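One way to make the two desiderata measurable is to estimate the ordering probability from random triplets of images. The sketch below is our own probe, not the authors' evaluation code; descriptors and metric positions are assumed to be given.

```python
import numpy as np

def gds_probe(desc, pos, n_pairs=100_000, seed=0):
    """Empirical estimate of P(d_i^e < d_j^e | d_i^g < d_j^g).

    desc: (N, d) global descriptors;  pos: (N, 2) metric positions (e.g. UTM).
    """
    rng = np.random.default_rng(seed)
    N = len(desc)
    q, i, j = rng.integers(0, N, size=(3, n_pairs))
    d_e_i = np.linalg.norm(desc[q] - desc[i], axis=1)
    d_e_j = np.linalg.norm(desc[q] - desc[j], axis=1)
    d_g_i = np.linalg.norm(pos[q] - pos[i], axis=1)
    d_g_j = np.linalg.norm(pos[q] - pos[j], axis=1)
    cond = d_g_i < d_g_j                    # condition d_i^g < d_j^g
    return float(np.mean(d_e_i[cond] < d_e_j[cond]))
```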
This is because current training pipelines are effective at achieving highly invariant representations that encode viewpoints coarsely, but not at learning the subtle cues to disambiguate between close frames. This effect can be further assessed in <ref>, which shows the top-{1,5,10} recall of the baseline DINOv2 SALAD for different threshold values. The vertical green dashed lines represent the typical thresholds of 25 meters and 1 frame used in MSLS and Nordland. Note how the recall, specially the recall@1, keeps increasing for slightly larger values than the 25 meters and 1 frame thresholds. This indicates that a significant fraction of false negatives is very close to the decision threshold, which lowers the recall. With our novel CliqueMining strategy, detailed in next section, the reader will assess how we are able increase the GDS for small ranges (Fig. <ref>) and consequently improve recall metrics, as we will show in the experimental results § CLIQUEMINING Our novel mining strategy, CliqueMining, selects challenging batches according to geographic and descriptor similarity criteria, alleviating the GDS issues identified in Section <ref>. <ref> shows an overview of our method. To effectively mine a challenging batch, we first build a graph of image candidates (<ref>) and sample places from it (<ref>). Finally, we select challenging pairs and train the network using the Multi-Similarity loss (<ref>). §.§ Graph Creation In contrast with the sparse nature of viewpoint sampling in GSV-Cities <cit.>, we propose to use denser batches, with higher spatial continuity, so the the network also learns the subtle changes resulting from small camera motion. To effectively mine such challenging batches, we first create a graph, G=(V, E), representing a cluster of candidates. Vertices from this graph, v_i∈ V, are frames from sequences with very similar appearance, and two vertices, v_i and v_j, are connected by an edge e_ij∈ E if both frames lie within a given distance threshold in meters, τ. E = {e_ij | d(v_i, v_j) < τ , ∀ v_i, v_j ∈ V} To populate the graph, we consider all image sequences as defined in the MSLS training set, as our place-based batches do not require a split between query and database images. We start by sampling a reference sequence from a city, s_ref, and subsequently, sampling S more different sequences, {s_1, …, s_S} based on their similarity with s_ref. For computational efficiency, we determine the similarity between two sequences by only comparing the descriptors of their respective central frames. We incorporate every frame from these sequences into the graph, which ensures the presence of adjacent frames within the batches. Edges are determined by the Universal Transverse Mercator (UTM) locations of each frame. <Ref> summarizes this process. SE[REPEATN]RepeatNEndRepeatN[1] #1 times §.§ Place Sampling To construct a single batch, we start from the graph of candidates G, generated as explained in <ref>. G is a convenient representation for place sampling, as it facilitates the identification of distinct viewpoints yet of highly similar appearance, and labels are easily assigned based on connectivity. In our pipeline, we mine batches of N places, each place defined as a set of K images, where each image is within a range τ of each other. Sampling a place is equivalent to finding a clique, C, within G C ∼{C | ∀ v_i, v_j ∈ C, e_ij∈ E, C ⊆ V, |C| = K}. Thus, to compile a batch of N places, we iteratively extract N cliques from G. 
After finding each clique, all its frames, as well as their connected vertices are removed from G. This prevents overlap in subsequent cliques, ensuring that each sampled place is at least τ meters from each other. In the uncommon case of exhausting all cliques in G, we create a new graph starting from a new s_ref and continue the process. The resulting batches, an example of them shown in <ref>, showcase highly similar yet far apart images, illustrating the effectiveness of our sampling to create difficult batches. <Ref> gives an overview of the sampling procedure. §.§ Training Pipeline In practice, we mine a large set of batches offline and once, as described in <ref> and <ref>, and use them during all epochs. To do this, we use the embeddings from a model pre-trained without CliqueMining. Most mining strategies are typically updated every few iterations. However, this increases the computational overhead, and for our CliqueMining we did not observe any improvement by updating the batches. In order to smooth the gradients from our hard training images, we combine them with images from GSV-Cities. In this manner, we include per batch half of the images from our CliqueMining and half from GSV-Cities, so the network can learn both the fine-grain GDS and the sparse discriminative capabilities from GSV-Cities. As we use the Multi-Similarity (MS) loss <cit.>, during training we use their online selection method for weighted negative and positive pairs. A negative pair, {x_i, x_j}, is selected from a batch if its distance is lower than the hardest positive pair plus a margin, ϵ, ||x_i - x_j||_2 < max_d_ik^e<τ||x_i - x_k||_2+ϵ, and, conversely, a positive pair is selected when ||x_i - x_j||_2 > min_d_ik^e≥τ||x_i - x_k||_2-ϵ. § EXPERIMENTS In this section, we re-train state-of-the-art VPR baseline models using our proposed CliqueMining. Evaluation on various benchmarks showcases the increased discriminative capacity of the models. In the following, we describe the implementation details, benchmarks used, quantitative and qualitative results, as well as ablation studies. §.§ Implementation Details We use CliqueMining with the recent DINOv2 SALAD <cit.>, the current state-of-the-art VPR model as well as on MixVPR <cit.>, a recent model with competitive performance. For each of them, we use their codebase and rigorously follow their training pipelines and hyperparameters. We use batches of size 60 in DINOv2 SALAD and 120 in MixVPR, where half of the places come from our pipeline and the other half from GSV-Cities. We create a new graph for every batch. We start by sampling s_ref from the set of existing sequences. We then sample S=15 sequences from the same city based on the descriptor similarity of their central frames. Edges are assigned with τ=25. Cliques are searched using the NetworkX library[<https://networkx.org/>] using the unrolled algorithm by Tomita  <cit.>. We create offline a large collection of 4000 batch examples before starting the training, and at every iteration, we randomly select one of those. To create the batches we use all the non panoramic images in the MSLS Training set. For the ablation studies we divided this dataset in val and train subsets, setting Melbourne, Toronto, Paris, Amman, Nairobi and Austin for val and the rest 16 cities for train. §.§ Results We evaluate the effect of our CliqueMining by comparing the performance of two recent high-performing models, DINOv2 SALAD <cit.> and MixVPR <cit.>, with and without it at training time. 
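As a concrete illustration of the graph creation and place sampling just described, here is a minimal NetworkX sketch. The data layout, function names, and the deterministic choice of the first suitable clique are our simplifications; note that nx.find_cliques enumerates maximal cliques with the Tomita et al. algorithm referenced in the implementation details.

```python
import itertools
import networkx as nx
import numpy as np

def build_candidate_graph(frames, tau=25.0):
    """Vertices are frames from visually similar sequences; edges connect frames
    whose UTM positions are closer than tau metres (`frames[i]['utm']` assumed)."""
    G = nx.Graph()
    G.add_nodes_from(range(len(frames)))
    for i, j in itertools.combinations(range(len(frames)), 2):
        d = np.linalg.norm(np.asarray(frames[i]["utm"]) - np.asarray(frames[j]["utm"]))
        if d < tau:
            G.add_edge(i, j)
    return G

def sample_places(G, n_places, k):
    """Iteratively extract cliques of size k as places; after each clique, remove
    its vertices and their neighbours so sampled places stay at least tau apart."""
    places, H = [], G.copy()
    while len(places) < n_places:
        clique = next((c[:k] for c in nx.find_cliques(H) if len(c) >= k), None)
        if clique is None:
            break                            # graph exhausted; rebuild from a new s_ref
        places.append(clique)
        drop = set(clique) | {nb for v in clique for nb in H.neighbors(v)}
        H.remove_nodes_from(drop)
    return places
```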
We also benchmarked these against classic methods, namely NetVLAD <cit.> and GeM <cit.>, and recent performant baselines, specifically CosPlace <cit.>, EigenPlace <cit.>, and SelaVPR <cit.>. Additionally, we include in the comparison results of SelaVPR <cit.> with re-ranking, as it is the current state of the art among two-stage techniques. We report results on standard evaluation datasets. Nordland <cit.> is a continuous video sequence taken from a train traveling through Norway across different seasons. The difficulty of this dataset arises from the substantial appearance differences between query (summer) and reference (winter), as well as the dense temporal sampling. MSLS Challenge and Validation <cit.> is a large and dense collection of dashcam images recorded in cities around the globe. The various seasonals, time, and environmental changes depicted make it one of the least saturated datasets in VPR. Pittsburgh-250k <cit.> is known for its significant viewpoint changes, but current pipelines have highly saturated performance. As previous works, we report recall@{1,5,10}, which measures the rate of correct predictions among the top-{1,5,10} retrieved images. An image is considered correct if it lies within a 25 meters-radius circle from the query, or at most one frame apart for the Nordland dataset. Results are reported on <Ref>. On Nordland, training with our CliqueMining significantly improves both DINOv2 SALAD and MixVPR, obtaining, for the first time, a recall@1 bigger than 90% (+14.7% over the closest baseline). This milestone highlights how our hard batches help in boosting the network's GDS. This is a crucial aspect in Nordland, where the high similarity between video frames and the strict one-frame distance threshold need outstanding sensitivity. Note that CliqueMining also improves significantly the recall rates for MixVPR. On MSLS Challenge and Validation, our CliqueMining with the DINOv2 SALAD architecture improves over all previously reported results. The improvement is most notable on the Challenge, where CliqueMining raises +7.7% the recall@1. While training on the MSLS Train dataset contributes to these results, it is noteworthy that SelaVPR, which also trains on MSLS, does not achieve a comparable performance, even with re-ranking. The effect of CliqueMining on MixVPR is dimmer, although it also improves over the baseline without it. We argue that its global aggregation smooths out local details, which are critical for raising the GDS. On Pittsburgh-250k, our pipeline obtains a slight improvement over the baseline DINOv2 SALAD and obtains comparable performance to SelaVPR with re-ranking. We outperform SelaVPR without re-ranking, which is a more comparable baseline. Note, in any case, that SelaVPR is fine-tuned on Pittsburgh30k before testing on Pittsburgh250k, while ours was trained in GSV-Cities and MSLS. MixVPR with CliqueMining downgrades performance. Training on MSLS data, where almost all images are forward-facing, has a small impact on Pittsburgh250k, which exhibits substantial viewpoint variability. Note how we sorted the datasets in <ref> from more to less image density, and how this also sorted naturally the recall@1 gains of CliqueMining from bigger to smaller. This supports our observation that GDS issues are more relevant the higher the image density, and that CliqueMining is able to improve them. 
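For completeness, the recall@k metric used throughout this section can be computed as below. This is a sketch with our own array conventions; Nordland replaces the 25 m radius with a one-frame tolerance.

```python
import numpy as np

def recall_at_k(pred_indices, query_pos, db_pos, ks=(1, 5, 10), radius=25.0):
    """A query counts as solved at k if any of its top-k retrievals lies within
    `radius` metres of the query position."""
    hits = {k: 0 for k in ks}
    for q, topk in enumerate(pred_indices):          # topk: ranked database indices
        d_g = np.linalg.norm(db_pos[np.asarray(topk)] - query_pos[q], axis=1)
        for k in ks:
            if (d_g[:k] < radius).any():
                hits[k] += 1
    return {k: hits[k] / len(pred_indices) for k in ks}
```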
From these results we can also conclude that a substantial part of the challenge in the less saturated VPR datasets (Nordland and MSLS) is associated to GDS issues, which is a relevant insight. Observe in <ref> the effect of CliqueMining on the GDS of the DINOv2-SALAD model <cit.> in MSLS and Nordland, as a plot of the distribution of the pairwise descriptor distances for different geographic distances. As sought, the GDS is highly boosted (steep curve and low dispersion) by CliqueMining for close geographic distances. Observe the similarity of this result with the illustrative graph in <ref>. Although not specifically tailored for, CliqueMining also reduces the dispersion for large distances, probably due to leveraging batches with more informative gradients. This enables the model to correctly sort candidates that are near, and still discriminate from those too far apart. We finally remark the low computational footprint of our CliqueMining. CliqueMining is a mining strategy for training, and hence does not increase at all the computational footprint at inference. This is in contrast to two-stage methods, that increase it by a factor of several orders of magnitude. Additionally, the overhead is modest at training. Our ablations shows that the graph creation only needs to be done once before training, and there is no benefit in updating it. In total, the computational overhead of CliqueMining roughly amounts to only 20% of the total training time in our experiments. §.§ Ablation Study We conduct evaluations with different configurations of CliqueMining to assess the importance of its different components. We base all our ablation studies on the DINOv2 SALAD baseline. CliqueMining or training on more data. One of the key contributions of this work is to train state-of-the-art models on a combination of GSV-Cities and MSLS. This raises the question of whether the observed improvements result from training with more data or from CliqueMining. To evaluate this, we re-train DINOv2 SALAD on a combination of GSV-Cities + MSLS without CliqueMining. Thus, batches from MSLS are organized in triplets as usually done in the literature. <ref> shows how, although training on MSLS slightly increases performance, using CliqueMining produces the best results, specially for R@1. We also report, for this ablation, results on Nordland which show more pronounced differences with CliqueMining. This suggest that naïvely training on more data brings limited improvements. CliqueMining creates challenging batches that improve the sensitivity of the model and its recall. Besides, CliqueMining organizes the images in places, so every image can simultaneously act as an anchor, positive or negative, increasing the number of pairwise relations on a batch. Geographic distance threshold τ. We tested the effect of the τ values in the range 10-30. As shown in <ref>, using the typical decision threshold value τ=25 achieves the best performance. Multi-Similarity (MS) mining. We built our CliqueMining on top of <cit.>, keeping its online mining (<ref>). Deactivating it, keeping only our CliqueMining, has a detrimental effect (see <ref>), which indicates that both mining strategies are compatible. Sequence sampling. We evaluate the effect of different sampling strategies to obtain {s_1, …, s_S} during the graph creation. We specifically try a weighted sampling according to similarity, selecting the top S most similar sequences, or randomly. 
<ref> shows that all three sampling strategies obtain very similar results, but using the most similar sequences produces the best. We argue that the online mining from <ref> reduces the actual differences between the used selection criteria, as it will further select the hardest pairs. Besides, given the length of some of the sequences, more than one clique might be sampled from the same sequence, reducing the need to find other similar ones. Updating the mining every epoch. Commonly done in literature, updating the mining after every epoch using the recently updated weights can provide some benefits to performance. As shown in <ref>, obtained recalls are comparable, and computing the mining after every epoch is computationally expensive. § LIMITATIONS The main limitation of CliqueMining is that it is specifically tailored for VPR, and hence it will not be of use for general image retrieval. In addition, CliqueMining addresses GDS issues, that are mostly relevant for places that are densely sampled with images. We already reported in <ref> the diminishing returns as the sampling density decreases in the benchmarks we used. However, as we motivated in <ref>, this limitation is softened by the wide range of potential use cases falling into this condition, and also by the remarkable boost in recall@1 in the most dense sampling cases (+14.7% for Nordland). Additionally to the above, our CliqueMining is strongly dependent on the existence of GDS issues. Even if the dataset is densely sampled, there could be a lack of GDS issues, as when viewpoint changes account for the majority of variations. In this cases, the model fails to retrieve close samples, and therefore CliqueMining would not positively impact its recall. We observed this in the recent SF-XL <cit.>, a massive dataset of images from San Francisco, often used to test VPR at scale. <ref> characterizes the recall in this dataset against the decision threshold. Observe how, in contrast to <ref>, the recall is almost flat in the region immediately after the decision threshold. Enhancing the GDS is not expected to have any effect in this dataset, as the rate of false negatives due to this reason is very small. Even if this is a limitation, we would argue in our favour that every mining strategy is strongly dependent on the data, but in the case of our CliqueMining we have characterized the conditions in which it should or should not offer an improvement. § CONCLUSIONS In this paper we have identified, formulated and analyzed deficiencies in the Geographic Distance Sensitivity (GDS) of current VPR models. Specifically, we found that they struggle to correlate descriptors and geographic distances for close range views. Based on that, we propose CliqueMining, a tailored batch sampling that selects challenging visually similar places at close ranges, and in particular around the decision threshold. CliqueMining forces the model to incorporate a finer grading of the geographic distances in the embedding. Mining such hard batches is equivalent to finding cliques in a graph of similar image sequences where connectivity represents spatial proximity. Our evaluation of two recent models with and without CliqueMining confirms a boost in the GDS which in turn also boosts the recall. The boost is substantial on densely sampled and unsaturated benchmarks like MSLS Challenge or Nordland, where training with CliqueMining brings unprecedented results. splncs04
http://arxiv.org/abs/2407.01828v1
20240701215832
Folding Entropy for Extended Shifts
[ "Neemias Martins", "Pedro G. Mattos", "Régis Varão" ]
math.DS
[ "math.DS" ]
§ ABSTRACT The concept of folding entropy emerges from Ruelle's studies of entropy production in non-equilibrium statistical mechanics and is a significant notion for understanding the complexities of non-invertible dynamical systems. The metric entropy (Kolmogorov–Sinai) is central in Ornstein's theory of Bernoulli shifts: it is a complete invariant for such maps. In this article we consider zip shift spaces, which extend the bilateral symbolic shift into a two-alphabet symbolic dynamical system and are ergodic and mixing systems with chaotic behavior. A class of examples of maps isomorphically mod 0 to zip shifts are the n-to-1 baker's maps, which represent a non-invertible model of deterministic chaos. We calculate the metric and folding entropies of a generic zip shift system, and relate the two. For the metric entropy, we find the general form for cylinder sets pulled back by the shift dynamics, and use the Kolmogorov–Sinai theorem to calculate the metric entropy of the zip shift system. For the folding entropy, we find the disintegration of the zip shift measure relative to the pullback of the atomic partition, and relate it to the zip shift measure in a simple formula. § INTRODUCTION Bilateral symbolic shifts have been used in dynamics to encode isomorphisms and study their dynamics. This is generally done by finding some convenient finite partition of the space and using the itinerary of points under the dynamics to establish a conjugation with the symbolic shift space. The symbolic shift space is the set of all integer-indexed sequences of symbols from a finite set of symbols (or some shift-invariant subset of this set), and its dynamics is the shift operator, which shifts each sequence to the left. Zip shifts are a generalization of bilateral symbolic shifts that can be used to encode non-invertible dynamics, introduced in <cit.>. Instead of only one set of symbols, we consider two different sets, which can be thought of as positive and negative symbols, or numbers and letters (hence the name zip, from the use of letters and numbers in ZIP codes). The positive symbols (or numbers) encode the forward behavior of the dynamics, while the negative symbols (or letters) encode its backward behavior. The zip shifts are local homeomorphisms and are ergodic and mixing maps, quite similar in structure to the Bernoulli shifts. Moreover, they have chaotic behavior: they are transitive maps and their periodic points are dense in the zip shift space. Dynamics that are encoded by zip shifts are named (m,l)-Bernoulli transformations, which are maps isomorphic mod 0 to a zip shift with alphabets of m and l symbols. The best-known examples of such dynamics are the n-to-1 baker's transformations, which represent a non-invertible model of deterministic chaos and are measure-preserving generalizations of the usual baker's transformation (check <cit.>). Entropy has been used as a tool to distinguish different dynamics (<cit.>). Because of Ornstein's outstanding result on the classification of Bernoulli shifts, it is natural to investigate whether entropy is an invariant for some classes of (m,l)-Bernoulli transformations.
As a first step in this direction, in this work, we calculate the measure-theoretic entropy of Kolmogorov-Sinai and the folding entropy <cit.> of a generic zip shift space. The folding entropy quantify the complexities of the preimages branches of non-invertible dynamical systems and coincides with the pointwise metric preimage entropy for continuous maps with uniform separation of preimages <cit.>. In this work, we relate the folding entropy with the metric entropy of extended shifts (<Ref>). In <ref>, we prove <ref>, which shows that the measure entropy of the zip shift is equal to the entropy of a partition by cylinders with positive symbols. Since the measure conjugacy preserves entropy and the folding entropy, these results provide a simple way to calculate the entropy of the (m,l)-Bernoulli transformations. In <ref>, we prove <ref>, which shows that the folding entropy of the zip shift is given by an average of entropies in the fibers of a disintgration of the measure, and is also equal to the difference of the entropies of the partitions by cylinders with positive and negative symbols. This shows that the measure entropy and the folding entropy are related. § PRELIMINARIES We will denote the natural numbers (including 0) by , the integers by and the real numbers by . We denote the strictly positive, positive, strictly negative, and negative integers by _> 0, _≥ 0, _< 0, and _≤ 0, respectively (and likewise for the other number sets). §.§ Measure spaces Let X be a set. A σ-algebra over X is a family of subsets of X, whose elements are called measurable sets, that contains the empty set and is closed under set complements and countable unions. The pair (X, ) is called a measurable space. Given any family 𝒮 of subsets of X, the σ-algebra generated by 𝒮 is the smallest (relative to ⊆) σ-algebra over X that contains 𝒮. A measure on (X, ) is a function _≥ 0 that assigns the value 0 to the empty set and is countably additive, meaning that, for every pairwise disjoint countable famility of measurable sets (M_i))_i ∈, ( ⋃_i ∈ M_i ) = ∑_i ∈(M_i). The triplet (X, ℳ, ) is called a measure space. A probability measure is a measure such that (X)=1, and the respective measure space is called a probability space. We say that a property if valid for almost every point of X when it is valid for every point of a subset of X whose complement has measure 0. A measurable transformation from a measure space (X, ) to another (X', ') is a transformation fXX' such that, for every measurable set M' ∈', its inverse image by f is measurable: f(M) ∈. A measure-preserving transformation from a measure space (X, , ) to another (X', ', ') is a measurable transformation fXX' such that, for every measurable set M' ∈', (f(M')) = '(M'). On a measure space, the integral can be defined for functions fX. We will denote the integral of f with respect to over a measurable set M ⊆ X by ∫_M f, or by ∫_x ∈ Mf(x)( x), when it is necessary to make the variable x of f explicit. §.§ Measure entropy Measure-theoretic entropy was first defined by Kolmogorov and Sinai and used as an invariant for dynamical systems over measure spaces. Here we briefly define it and state the main theorem we will use in this work, the Kolmogorov–Sinai theorem (<ref>). We refer the reader to <cit.> for the following definitions and any further information on measure entropy. Let (X, ℳ, ) be a probability space. We will refer to any finite or countable family of pairwise disjoint measurable sets whose union has measure 1 by a partition of X. 
(This is similar to the usual definition of a partition, but weakened by the measure structure of the space). This defines, for almost every point x ∈ X, a unique set (x) ∈ such that x ∈(x), and hence a (almost everywhere defined) projection _X, defined by _(x) := (x). A partition is coarser than a partition ' (or ' is finer than ) when, for every element P' ∈', there exists an element P ∈ such that (P' ∖ P) = 0 (which means that almost every point of P' is contained in P). This is denoted by '. We can also define an operation on the partitions: to each (finite or countable) family of partitions (_n)_n ∈ N, its correfinement is _i ∈ N_n := ⋂_n ∈ N P_nP_n ∈_n for each n ∈ N. When we have only 2 (of finitely many) partitions, we denote their correfinement by '. The correfinement of a family of partitions is the smallest partition, relative to , that is larger than every partition of the family. The entropy of is defined as () := ∑_P ∈ -(P)log((P)). (Here and in what follows, we always assume that 0 log 0 = 0.) Now let fMM be a measure-preserving transformation on (X, ℳ, ). We can define the pullback of a partition by f as f() := f(P)P ∈. This is also a partition in our specific sense. Then, for each n ∈, the n-th dynamical correfinement of is ^n := _i=0^n-1 f^-i() and the n-th bilateral dynamical correfinement of is ^± n := _i=-n^n-1 f^-i(). An element Q ∈^n is of the form Q = P_0 ∩ f(P_1) ∩⋯∩ f^-(n-1)(P_n-1), for P_i ∈, and a point x ∈ X belongs to Q if, and only if, for every 0 ≤ i ≤ n-1, f^i(x) ∈ P_i. This shows that the elements of ^n partition the space into points which have the same orbit under f for n units of time. The entropy of f relative to is the limit (f, ) := lim_n →∞1/n(^n). (Notice that ^n depends on f even though the notation does not make it explicit). The entropy of f is then the supremum of the entropies relative to all partitions with finite entropy (or, equivalently, finite partitions): (f) := sup_(f, ). This definition is very abstract and requires information about every finite partition, but there is a way to calculate the entropy of a transformation using only a sequence of partitions that have a special property. This is the content of the following prop:Kolmogorov-SinaiTheorem, which we are going to use to obtain <ref>. The proof can be found in <cit.>. Let (X, , ) be a probability space, fXX a measure-preserving transformation and (_n)_n ∈ be an increasing sequence of partitions [ That is, for every n, m ∈, if n ≤ m then _n _m. ] with finite entropy such that ⋃_i ∈_i generates (up to measure 0). Then (f) = lim_n →∞(f, _n). §.§ Disintegration of measure Given a probability space (X, , ) and a partition (we do not require the partition to be countable here), we have the (almost everywhere defined) natural projection _X. Using _ we can pushforward a probability space structure onto , namely (, , ), in which := 𝒬⊆_(𝒬) ∈ is the pushforward σ-algebra (or quotient σ-algebra) and (𝒬) := (_(𝒬)) for every 𝒬∈ is the pushforward measure (or quotient measure). Let (X, , ) be a probability space and a partition of X. A disintegration of with respect to is a family of probability measures (_P)_P ∈ on X such that * For almost every P ∈, _P(P) = 1; * For every measurable set M ∈, the transformation →, P ↦_P(M) is measurable; * For every measurable set M ∈, (M) = ∫_P ∈_P(M)( P). Intuitively, this describes the way we can relate the Lebesgue measure on a square with the Lebesgue measure on each of its vertical sections by integration using Fubini's theorem. 
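As a concrete instance of conditions (i)-(iii) above, consider the standard example alluded to here, which we spell out ourselves: X = [0,1]^2 with Lebesgue measure m, and the partition 𝒫 into vertical segments P_x = {x} × [0,1].

```latex
% Disintegration of planar Lebesgue measure over the vertical fibers P_x.
% Each m_{P_x} is one-dimensional Lebesgue measure supported on P_x (so
% m_{P_x}(P_x) = 1), and the quotient measure on the space of fibers is
% Lebesgue measure on [0,1].  For every measurable M \subseteq [0,1]^2,
\[
  m(M) \;=\; \int_0^1 m_{P_x}(M)\,\mathrm{d}x
       \;=\; \int_0^1 \operatorname{Leb}\bigl(\{\,y \in [0,1] : (x,y) \in M\,\}\bigr)\,\mathrm{d}x,
\]
% which is exactly Fubini's theorem, so conditions (i)-(iii) of the
% definition of a disintegration hold for this family (m_{P_x})_{x}.
```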
Under certain conditions on the partition , the disintegration of a measure is unique up to measure zero <cit.> and always exists <cit.>. §.§ Conditional entropy and the folding entropy Besides defining the entropy of a partition as in <ref>, we can also define the conditional entropy of a partition relative to a partition '. We follow the approach of <cit.>. First we define, for each P' ∈', the partition induced by on P' as |_P' := P ∩ P'P ∈. Then the conditional entropy of with respect to ' is defined <cit.> using the disintegration of the measure with respect to ' by (|') = ∫_P' ∈'[_P'](|_P') _'( P'). This is a more general definition that works for non-countable partitions. In the case that the partitions are countable, we obtain the simplified formula presented in <cit.>. In <cit.> the author introduces the folding entropy for ^1 transformations. It can be defined <cit.> as the conditional entropy of the atomic partition ϵ := {x}x ∈ X with respect to its dynamical pullback f(ϵ) = f(x)x ∈ X. Let X be a probability space and fXX a measure-preserving transformation. The folding entropy of f with respect to is ℱ(f) := (ϵ| f(ϵ)). § ZIP SHIFTS Zip shifts are a generalization of bilateral symbolic shifts that was first introduced in <cit.>, and later expanded on in <cit.>. Instead of a single set of symbols S used to compose a symbolic sequence x=(x_i)_i ∈∈ S^, we consider sequences that have one type of symbols on their positive part, an another on their negative part. To be able to still define the shift transformation, a function that translates one type of symbols to the other is needed. The following definition formalizes the construction. Let S^+ and S^- be non-empty finite sets, S := (S^-, S^+) and ϕS^+S^- a surjective function. The zip shift space is the pair (_S, _ϕ) in which * the bilateral extended S-symbolic space is the set _S := x=(…, x_-1; x_0, x_1, …)∀_i ≥ 0 x_i ∈ S^+, ∀_i < 0 x_i ∈ S^-; * the zip shift with transition function ϕ is the map _ϕ_S_Sx[t] _ϕ(x)S^- ∪ S^+i x_i+1 i ≠ -1 ϕ(x_0) i = -1. To simplify notation, we denote (, ) := (_S, _ϕ). <Ref> determines the shift to take a sequence (…, x_-1; x_0, x_1, …) ∈ to the sequence (…, x_-1, ϕ(x_0); x_1, …) ∈. A measure-preserving map f:X → X defined on a Lebesgue space is a (m,l)-Bernoulli transformation if its isomorphic (mod 0) to a zip shift σ_ϕ with m=#S^- and l=#S^+. The 2-to-1 baker's map defined in <cit.> exemplifies a (2,4)-Bernoulli transformation. We omit the formal definition here, but <ref> shows how this transformation is defined on the square Q in 3 steps, <ref> shows the partitions of the square that are used to encode the system and obtain the isomorphism to a (2,4)-zip shift, and <ref> shows how these partitions iterate under the action of the dynamics over time. §.§ Measurable structure The σ-algebra ℬ of the space is the one generated by cylinder sets: for each (s,i) ∈ S^- ×_<0 or (s,i) ∈ S^+ ×_≥ 0, we define the cylinder C^s_i := x ∈x_i = s. and denote C_i,…,k^s_i,…, s_k:={x∈Σ: x_i=s_i,…,x_k=s_k}=C_i^s_i∩⋯∩ C_k^s_k. We also define the extended cylinder C^ϕ(s)_i := ⋃_s' ∈ϕ(s) C^s'_i. The next proposition shows how the dynamics acts backwards and forwards on cylinders. Let k ∈ and s ∈ S^+ ∪ S^-. Then ^-k(C^s_i) = C^s_i+k i ∉ [-k, -1] ∩ C^ϕ(s)_i+k i ∈ [-k, -1] ∩. and ^k(C^s_i) = C^s_i-k i ∉ [0, k-1] ∩ C^ϕ(s)_i-k i ∈ [0, k-1] ∩. For the inverse image, it holds that ^-1(C^s_i) = C^s_i+1 i ≠ -1 C^ϕ(s)_0 i = -1. 
Then, by induction, we obtain that, for every k ∈, ^-k(C^s_i) = C^s_i+k i ∉ [-k, -1] ∩ C^ϕ(s)_i+k i ∈ [-k, -1] ∩. For the direct image, it holds that (C^s_i) = C^s_i-1 i ≠ 0 C^ϕ(s)_-1 i = 0. Then, by induction, we obtain that, for every k ∈, ^k(C^s_i) = C^s_i-k i ∉ [0, k-1] ∩ C^ϕ(s)_i-k i ∈ [0, k-1] ∩. §.§ Measure structure In order to define a measure on (, ℬ), it is sufficient to define it on the cylinders C^s_i. We start with a probability measure ^+ on the symbol set S^+. Since S^+ is a finite set with atomic σ-algebra, this probability measure can be identified with a discrete probability distribution p^+ = (p^+_s^+)_s^+ ∈ S^+ (that is, for every s^+ ∈ S^+ we have p^+_s^+∈_≥ 0, and ∑_s^+ ∈ S^+ p_s^+ = 1) by defining, for each s^+ ∈ S^+, p^+_s^+ := ^+({s^+}). Using the surjective transition function ϕS^+S^-, we can pushforward this probability measure ^+ to the probability measure ^- := ϕ^+ on S^-. This is done by considering the partition {ϕ(s^-)}_s^- ∈ S^- of S^+ by the inverse images of elements of S^-. The pushforward measure of {s^-}⊆ S^- is then the sum of the measure of all the elements of ϕ(s^-) on S^+, given for each s^- ∈ S^- by ^-({s^-}) = ϕ^+({s^-}) = ^+(ϕ({s^-})) = ∑_s^+ ∈ϕ(s^-)^+({s^+}). In the same way as we did for p^+, we can identify the measure ^- with a probability distribution p^- = (p^-_s^-)_s^- ∈ S^- by setting, for each s^- ∈ S^-, p^-_s^- := p^-({s^-}). Then, for a cylinder C^s_i, we can define its measure as p^+_s if i ≥ 0 and p^-_s if i<0. Let (, ) be a zip shift space, p^+ a probability measure on S^+ and p^- = ϕ p^+ the pushforward probability measure on S^-. The probability measure on (, ) induced by p^+ is the probability measure ℬ01 defined on cylinders by (C^s_i) := p^-_s, i < 0 p^+_s, i ≥ 0 = ∑_s' ∈ϕ(s) p^+_s', i < 0 p^+_s, i ≥ 0. From the way we defined the measure on S^- by the pushforward, it is easy to show that the zip shift dynamics is measure-preserving. We just need to be careful considering the different cases. Let (, ) be a zip shift space and p^+ a probability measure on S^+. The dynamics preserves the measure . It suffices to show that, for every basic cylinder C^s_i, ((C^s_i)) = (C^s_i). We consider 3 cases: * (i ≥ 0) In this case, ^-1(C^s_i) = C^s_i+1 (<ref>). Since i+1 ≥ 1, if follows from <ref> that (^-1(C^s_i)) = (C^s_i+1) = p^+_s = (C^s_i). * (i < -1) In this case, it also holds that ^-1(C^s_i) = C^s_i+1 (<ref>). Since i+1 < 0, i follows from <ref> that (^-1(C^s_i)) = (C^s_i+1) = p^-_s = (C^s_i). * (i = -1) In this case, ^-1(C^s_i) = C^ϕ(s)_0 = ⋃_s' ∈ϕ(s) C^s'_0 (<ref>). Since i+1 = 0, it follows from <ref> that (^-1(C^s_i)) = ( ⋃_s' ∈ϕ(s) C^s'_0 ) = ∑_s' ∈ϕ(s)(C^s'_0) = ∑_s' ∈ϕ(s) p^+_s' = (C^s_i). § MEASURE ENTROPY OF ZIP SHIFTS §.§ Partitions by cylinders We begin by defining some basic partitions of our space. Let i ∈. The partition by cylinders of index i is the partition 𝒞_i := C^s_is ∈ S^+ i ≥ 0 C^s_is ∈ S^- i < 0. Let n, n' ∈. The partition by cylinders of indices from n to n' is the partition 𝒞_n, …, n' := _i = n^n'𝒞_i. The following simple prop:extended_shift_iterated_partitions sums up how the dynamics of the shift acts on these partitions. For every i ≥ 0, * ^i(𝒞_0) = 𝒞_-i; * ^-i(𝒞_0) = 𝒞_i; * ^-i(𝒞_-(i+1)) = 𝒞_-1; * 𝒞_0^n = 𝒞_0,…, n-1. * 𝒞_0^± n = 𝒞_-n,…, n-1. This is a consequence of <ref>. * Since (C^s_0) = C^ϕ(s)_-1 and ϕ is surjective, it follows that (𝒞_0) = 𝒞_-1. By induction, ^i(𝒞_0) = 𝒞_-i. * Since (C^s_0) = C^s_1, it follows that (𝒞_0) = 𝒞_1. By induction, ^-i(𝒞_0) = 𝒞_i. 
* Since σ^-1(C_-s^s) = C_-1^s, it follows that ^-1(𝒞_-2)) = 𝒞_-1. By induction, ^-i(𝒞_-(i+1)) = 𝒞_-1. * It follows that 𝒞_0^n = _i=0^n-1^-i(𝒞_0) = _i=0^n-1𝒞_i. * It follows that 𝒞_0^± n = _i=-n^n-1^-i(𝒞_0) = _i=-n^n-1𝒞_i. §.§ Measure entropy of the extended shifts We now calculate the metric entropy of (_S, _ϕ) and relate it to the entropy of the probability distributions p^+ and p^-. We start with the partitions 𝒞_0 and 𝒞_-1. (𝒞_0) = ∑_s ∈ S^+ -p^+_slog p^+_s and (𝒞_-1) = ∑_s ∈ S^- -p^-_s log p^-_s. It follows directly from <ref> from the simple calculations (𝒞_0) = ∑_s ∈ S^+ -(C^s_0) log((C^s_0)) = ∑_s ∈ S^+ -p^+_slog p^+_s. and (𝒞_-1) = ∑_s ∈ S^- -(C^s_-1) log((C^s_-1)) = ∑_s ∈ S^- -p^-_s log p^-_s. This shows, as could be expected, that the entropy of the partition 𝒞_0 is related to p^+, the distribution of the positive part of the zip shift , while the entropy of the partition 𝒞_-1 is related to p^-, the distribution of the negative part of . We can now calculate the measure entropy of a partition by cylinders other than the basic 𝒞_0 and 𝒞_-1. (𝒞_-n, …,0, …, n'-1) = n (𝒞_-1) + n' (𝒞_0). For every i ≥ 1, it holds that 𝒞_i = ^-i(𝒞_0) and ^-i(𝒞_-(i+1)) = 𝒞_-1 (<ref>). Since preserves the measure (<ref>), it follows that (𝒞_i) = (𝒞_0) and (𝒞_-(i+1)) = (𝒞_-1). Besides that, for any integers i < i', the partitions 𝒞_i and 𝒞_i' are independent, becasue C_i^s∩ C_i'^s' = C_i,i'^s,s' and (C_i,i'^s,s') = (C_i^s)(C_i'^s'). Thus it follows that (𝒞_-n, …,0, …, n'-1) = ( _i=-n^n'-1𝒞_i ) = ∑_i=-n^n'-1(𝒞_i) = n (𝒞_-1) + n' (𝒞_0). In particular, since 𝒞_0^n = 𝒞_0, …, n-1 (<ref>), this implies that (, 𝒞_0) = lim_n →∞1/n(𝒞_0^n) = lim_n →∞1/n n (𝒞_0) = (𝒞_0). To calculate the measure entropy of the system, we will use the Kolmogorov-Sinai theorem (<ref>). To that end we define a sequence of partitions. _n := 𝒞_0^± n = 𝒞_-n, …, n-1. We will eventually need to use the measure entropy of _n^k (check <ref>), the kth dynamical correfinement of the partition _n, so the following prop:correfinamento.cilindros shows that it is just a partition by cylinders. The proof is trickier than would be expected. Let n ≥ 1 and k ≥ 2n. Then _n^k = 𝒞_-n, …, n+k-2. The dynamical correfinement of _n is defined by _n^k = _j=0^k-1^-j(_n), so let us first calculate a generic element of the pullback partition ^-j(_n) = ^-j(C)C ∈_n. Each cylinder of _n = 𝒞_-n, …, n-1 has the form C_-n, …, n-1^s_-n, …, s_n-1 = ⋂_i=-n^n-1 C_i^s_i, with s_i ∈ S^- if i < 0 and s_i ∈ S^+ if i ≥ 0. Then ^-j(C_-n, …, n-1^s_-n, …, s_n-1) = ^-j (⋂_i=-n^n-1 C_i^s_i) = ⋂_i=-n^n-1^-j (C_i^s_i). Based on <ref>, we can separate this in 3 intersections [ In order to simplify notation, we define that intersections that have the top index strictly smaller than the bottom index should be consider to be the whole space , so that they can be ignored. In <ref>, this happens for the first intersection in the case j > n-1 (or equivalently -(j+1) < -n) and for the second itersection in the case j=0 (or equivalently -1 < -j). ] as follows: ^-j(C_-n, …, n-1^s_-n, …, s_n-1) = ⋂_i=-n^-(j+1)^-j (C_i^s_i) ∩⋂_i=-j^-1^-j(C_-1^s_-1) ∩⋂_i=0^n-1^-j (C_i^s_i) = ⋂_i=-n^-(j+1) C_i+j^s_i∩⋂_i=-j^-1 C_i+j^ϕ(s_i)∩⋂_i=0^n-1 C_i+j^s_i. Notice that in <ref>, for -n ≤ i ≤ -(j+1) and 0 ≤ i ≤ n-1 we have basic cylinders of the form C_i+j^s_i and, for -j ≤ i ≤ -1, we have extended cylinders (unions of cylinders) of the form C_i+j^ϕ(s_i) = ⋃_s ∈ϕ(s_i) C_i+j^s. 
This shows that ^-j(_n) is not a partition by cylinders (unless ϕ is bijective and hence the sets ϕ(s^j_i) are singletons, but this is just a regular shift, not the usual case for zip shifts). We must now calculate a generic element of _n^k = _j=0^k-1^-j(_n). To that end, for each 0 ≤ j ≤ k-1 we take cylinders C^j ∈_n, defined by C^j := C_-n, …, n-1^s^j_-n, …, s^j_n-1 = ⋂_i=-n^n-1 C_i^s^j_i with s^j_i ∈ S^- if i < 0 and s^j_i ∈ S^+ if i ≥ 0. An element of _n^k is a non-empty set of the form ⋂_j=0^k-1^-j(C^j). From <ref>, it follows that this set is given by ⋂_j=0^k-1^-j(C_-n, …, n-1^s^j_-n, …, s^j_n-1) = ⋂_j=0^k-1⋂_i=-n^-(j+1) C_i+j^s^j_i∩⋂_j=0^k-1⋂_i=-j^-1 C_i+j^ϕ(s^j_i)∩⋂_j=0^k-1⋂_i=0^n-1 C_i+j^s^j_i. This shows that a generic element of _n^k (as in <ref>) is an intersection of basic cylinders and extended cylinders (which are unions of basic cylinders). These cylinders on the right-hand side of <ref> are indexed by l := i+j, which varies between -n and n-k-2 since j varies between 0 and k-1, and i varies between -n and n-1. We wish to find conditions on the symbols s^j_i that guarantee the intersections in <ref> is non-empty. For that, we will reorganize the intersections based on the indices l and j. Define B_l to be the intersection of every cylinder and extended cylinder in <ref> that has index l. Thus ⋂_j=0^k-1^-j(C_-n, …, n-1^s^j_-n, …, s^j_n-1) = ⋂_l=-n^n+k-2 B_l, and each set B_l is an intersection that depends on a range of values of j. Since the intersection of a cylinder or extended cylinder with another cylinder or extended cylinder is non empty if they have different indices, the intersection on the right-hand side of <ref> is non-empty if, and only if, each B_l ≠∅. In what follows we shall determine the range of j for each l and find conditions on the symbols s^j_i. We separate our analysis in many cases. * (-n ≤ l ≤ -1) In this case 0 ≤ j ≤ l+n and no extended cylinder occurs. In order to have B_l ≠∅, all the relations in <ref> must be satisfied, and hence B_l = ⋂_j = 0^l+n C_l^s^j_l-j = C_l^s^0_l. * (0 ≤ l ≤ n-1) In this case, when 0 ≤ j ≤ l we have basic cylinders and when l+1 ≤ j ≤ l+n we have extended cylinders. In order to have B_l ≠∅, all the relations in <ref> must be satisfied, and hence B_l = ⋂_j = 0^l C_l^s^j_l-j∩⋂_j = l+1^l+n C_l^ϕ(s^j_l-j) = C_l^s^l_0. * (n ≤ l ≤ k-n-1) In this case, when l-n+1 ≤ j ≤ l we have basic cylinders and when l+1 ≤ j ≤ l+n we have extended cylinders. In order to have B_l ≠∅, all the relations in <ref> must be satisfied, and hence B_l = ⋂_j = l-n+1^l C_l^s^j_l-j∩⋂_j = l+1^l+n C_l^ϕ(s^j_l-j) = C_l^s^l_0. * (k-n ≤ l ≤ k-2) In this case, when l-n+1 ≤ j ≤ l we have basic cylinders and when l+1 ≤ j ≤ k-1 we have extended cylinders. In order to have B_l ≠∅, all the relations in <ref> must be satisfied, and hence B_l = ⋂_j = l-n+1^l C_l^s^j_l-j∩⋂_j = l+1^k-1 C_l^ϕ(s^j_l-j) = C_l^s^l_0. * (k-1 ≤ l ≤ k+n-2) In this case l-n+1 ≤ j ≤ k-1 and no extended cylinder occurs. In order to have B_l ≠∅, all the relations in <ref> must be satisfied, and hence B_l = ⋂_j = l-n+1^k-1 C_l^s^j_l-j = C_l^s^k-1_l-k+1. Thus using <ref> on <ref>, if follows that ⋂_j=0^k-1^-j(C_-n, …, n-1^s^j_-n, …, s^j_n-1) = ⋂_l=-n^-1 C_l^s^0_l∩⋂_l=0^k-2 C_l^s^l_0∩⋂_l=k-1^n-1+k-1 C_l^s^k-1_l, that is, a generic element of _n^k is a cylinder of 𝒞_-n, …, n+k-2, and every such cylinder can be formed in this way because the symbols s^0_-n, …, s^0_0, …, s^k-1_0, …, s^k-1_n-1 can be chosen arbitrarily, so we conclude that _n^k = 𝒞_-n, …, n+k-2. 
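The identity just established can be probed numerically. The following sketch enumerates the cylinders of 𝒞_{-n, …, n+k-2} for a (2,4) zip shift in the spirit of the baker's example, with the transition function and the weights being our own illustrative choices since the text does not fix them, and checks that the entropy of the correfinement ℛ_n^k equals n·H(𝒞_{-1}) + (n+k-1)·H(𝒞_0).

from itertools import product
from math import log

S_plus = [0, 1, 2, 3]                       # l = 4 positive symbols
p_plus = {s: 0.25 for s in S_plus}          # uniform weights (our choice)
phi = lambda s: s % 2                       # surjection onto S^- = {0, 1}
S_minus = sorted(set(map(phi, S_plus)))
p_minus = {t: sum(p_plus[s] for s in S_plus if phi(s) == t) for t in S_minus}

def entropy(dist):
    return -sum(p * log(p) for p in dist.values() if p > 0)

def entropy_cylinders(n, n_prime):
    # Entropy of C_{-n, ..., n'-1} by enumeration: negative coordinates carry
    # symbols of S^-, nonnegative ones symbols of S^+, and distinct
    # coordinates are independent, so each cylinder has product measure.
    total = 0.0
    for word in product(*([S_minus] * n + [S_plus] * n_prime)):
        m = 1.0
        for i, s in enumerate(word):
            m *= p_minus[s] if i < n else p_plus[s]
        total -= m * log(m)
    return total

n, k = 2, 5                                  # R_n^k = C_{-n, ..., n+k-2}
lhs = entropy_cylinders(n, n + k - 1)
rhs = n * entropy(p_minus) + (n + k - 1) * entropy(p_plus)
print(lhs, rhs)                              # both equal 2 log 2 + 6 log 4
assert abs(lhs - rhs) < 1e-9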
It is now trivial to conclude the following last results. (, _n) = (𝒞_0). From <ref> it follows that (_n^k) = (𝒞_-n, …, n-1+k-1) = n (𝒞_-1) + (n+k-1) (𝒞_0), therefore (, _n) = lim_k →∞1/k(_n^k) = lim_k →∞1/k (n (𝒞_-1) + (n+k-1) (𝒞_0)) = (𝒞_0). () = (𝒞_0). The sequence of partitions _n = 𝒞_-n, …, n-1 (n ∈) is incresing relative to the refinement order: _0 ≼_1 ≼⋯≼_n ≼⋯. Besides that, the union of _n generates the σ-algebra of the space . Finally, the entropy of _n is finite, because the entropy of 𝒞_-1 and 𝒞_0 are finite. Therefore, by the Kolmogorov-Sinai theorem (<ref>), the measure entropy of the system is () = lim_n →∞(, _n). We thus have to calculate (, _n), which is, by definition, (, _n) := lim_k →∞1/k(_n^k), which shows we have to calculate (_n^k). This finally implies that () = lim_n →∞(, _n) = (𝒞_0). § FOLDING ENTROPY OF ZIP SHIFTS Let (, ) be a zip shift space. As a consequence of ϕ being surjective, we have that S^+≥S^-. When S^+ > S^-, the zip shift is not invertible and, for any given x ∈, the set (x) has more than one element. In the folowing discussion, we will need a way the refer to each element of (x), so, for each s ∈ϕ(x_-1), we define [ A possibly more descriptive, but longer, alternative notation is (x^-);sx^+. ] x̂(s) := (…, x_-2; s, x_0, …). We also denote x̂ := (x) = x̂(s)s ∈ϕ(x_-1) and, for each X ⊆, X̂ := x̂x ∈ X⊆(ϵ). From <ref>, the folding entropy of is given by ℱ() = (ϵ|(ϵ)) and, from <ref>, the conditional entropy of the atomic partition ϵ with respect to the dynamical pullback (ϵ) = x̂x ∈ can be calculated by (ϵ|(ϵ)) = ∫_x̂∈(ϵ)[_x̂](ϵ|_x̂) (x̂), in which (_x̂)_x̂∈(ϵ) is the disintegration of with respect to (ϵ) and is the quotient measure of (ϵ). So in order to calculate the folding entropy of the zip shift, we need to find the quotient measure and to disintegrate the measure with respect to the dynamical pullback (ϵ) of the atomic partition ϵ of (defined in <ref>). §.§ The quotient measure Let us denote the natural projection with respect to the partition (ϵ) by (ϵ). Let us first determine the quotient σ-algebra ℬ̂, which is the pushforward of the cylinders σ-algebra of by the natural projection . For every set X ⊆, (X̂) = (X). Besides that, the quotient σ-algebra ℬ̂ is generated by the projected cylinder sets Ĉ (C ∈ℬ is a cylinder). The first claim follows directly from (X̂) = ⋃X̂ = ⋃(x)x ∈ X = (X). Now that 𝒬⊆(ϵ). Since each element of (ϵ) is of the form x̂ for some x ∈, there exists a set X ⊆ such that 𝒬 = x̂x ∈ X = X̂. This implies that its inverse image by the projection is of the form (𝒬) = (X̂) = (X). This shows that ℬ̂ is generated by sets Ĉ such that (C) ∈ℬ is a cylinder, which means that C is also a cylinder. Since x̂ = (x), it may be confusing to understand the difference between the sets Ĉ and (C). To better understand the notation, it is worth noticing that, if x ∈ C, then x̂ = (x) ⊆(C); that is, for each s ∈ϕ∈(x_-1), we have x̂(s) ∈(C). This shows that the elements of the set x̂ (which is an element of Ĉ) do not belong to the set Ĉ, but instead to (C). To further avoid confusion, consider this example. Suppose x, y ∈, x̂ = {x̂(0), x̂(1)} and ŷ = {ŷ(0), ŷ(1)}. If C = {x, y}, then Ĉ = {x̂, ŷ} = {{x̂(0), x̂(1)}, {ŷ(0), ŷ(1)}}, while (C) = {x̂(0), x̂(1), ŷ(0), ŷ(1)}. In particular, it is worth noting that, for a cylinder C^s_i, (Ĉ^s_i) = (C^s_i) = C^s_i+1 i ≠ -1 ⋃_s' ∈ϕ(s) C^s'_0 i = -1. The quotient measure := on (ϵ) is the pushforward of by the natural projection (ϵ) of the dynamical pullback of the atomic partition. 
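As a complement, here is a small computational sketch of the zip shift acting on truncated two-sided sequences and of the fibers x̂, that is, the sets of preimages entering the quotient construction above; the alphabets and the transition function are again our own illustrative choices.

phi = lambda s: s % 2                        # S^+ = {0,1,2,3} -> S^- = {0,1}

def zip_shift(neg, pos):
    # One application of sigma_phi.  `neg` lists (..., x_{-2}, x_{-1}) with
    # symbols in S^-, `pos` lists (x_0, x_1, ...) with symbols in S^+.
    head, tail = pos[0], pos[1:]
    return neg + [phi(head)], tail           # (..., x_{-1}, phi(x_0); x_1, ...)

def fiber(neg, pos, S_plus=(0, 1, 2, 3)):
    # All preimages x_hat(s): drop x_{-1} from the negative part and prepend
    # any s in phi^{-1}(x_{-1}) to the positive part.
    x_m1 = neg[-1]
    return [(neg[:-1], [s] + pos) for s in S_plus if phi(s) == x_m1]

x_neg, x_pos = [1, 0, 1], [2, 3, 0, 1]       # a truncated point x
print("sigma_phi(x):", zip_shift(x_neg, x_pos))   # ([1, 0, 1, 0], [3, 0, 1])

pre = fiber(x_neg, x_pos)
print("fiber size:", len(pre))               # #phi^{-1}(x_{-1}) = 2
for neg, pos in pre:
    # applying the shift to each preimage recovers x (up to truncation)
    assert zip_shift(list(neg), list(pos)) == (x_neg, x_pos)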
The next proposition shows how we can easily calculate it using the original measure . Let (, ) be a zip shift space. For every measurable set M ⊆, (M̂) = (M). Since (M̂) = (M) (<ref>) and is measure-preserving (<ref>), it follows that (M̂) = ((M̂)) = ((M)) = (M). §.§ Disintegration We wish to disintegrate the measure on with respect to the pullback partition (ϵ). In order to do that, we must find, for each x̂∈(ϵ), the conditional measures _x̂ on , in such a way that, for every measurable set M ∈ℬ, it holds that (M) = ∫_x̂∈(ϵ)_x̂(M) (x̂). To define the conditional measures on x̂, remember that x̂ = x̂(s)s ∈ϕ(x_-1) and that the conditional measure is supported on x̂, so, for each measurable set M ∈ℬ, it is given by _x̂(M) = _x̂(M ∩x̂). Thus, since x̂ is finite, we can define it on each atom {x̂(s)}. Based on the probability distribution p^+ on S^+, we have described how to induce a probability distribution p^- on S^- by taking the pushforward of p^+ by the transition function ϕ. Using the two measures p^+ on S^+ and p^- on S^-, we can define, for each s^- ∈ S^-, a new probability measure q^s^- on the inverse image set ϕ(s^-) by setting, for each s^+ ∈ϕ(s^-) q^s^-_s^+ := p^+_s^+/p^-_s^-. This is a probability measure because, for each s^- ∈ S^-, ∑_s^+ ∈ϕ(s^-) q^s^-_s^+ = ∑_s^+ ∈ϕ(s^-)p^+_s^+/p^-_s^- = ∑_s^+ ∈ϕ(s^-) p^+_s^+/p^-_s^- = 1. It is important to notice that, as a direct consequence of this definition, p^+ = (p^+_s^+)_s^+ ∈ S^+ = ((p^-_s^- q^s^-_s^+)_s^+ ∈ϕ(s^-))_s^- ∈ S^-. We use these measures q^s^- to define the conditional measures as follows, by identifying the set x̂ with the preimage ϕ(x_-1). Let (, ) be a zip shift space with measure given by the probability distribution p^+, and let x̂∈(ϵ). The conditional measure _x̂ on x̂ is the probability measure defined, for each s ∈ϕ(x_-1), by _x̂ ({x̂(s)}) := q^x_-1_s = p^+_s/p^-_x_-1. Now we show this is the disintegration of . Let (, ) be a zip shift space. The family {_x̂}_x̂∈(ϵ) is the disintegration of with respect to (ϵ). It suffices to show that, for each basic cylinder C^s_i, it holds that (C^s_i) = ∫_x̂∈(ϵ)_x̂(C^s_i ∩x̂) (x̂). First let us calculate the sets C^s_i ∩x̂. For any set C ⊆, it holds that x ∈ C if, and only if, x̂⊆(C). Because of this, we must consider the cases x̂∈(C^s_i) and x̂∉(C^s_i); or equivalently, x ∈(C^s_i) and x ∉(C^s_i). According to <ref>, the expression for (C^s_i) depends on the value for i, so we consider 2 scenarios: * (i ≠ 0) In this case, we have (C^s_i) = C^s_i-1, hence C^s_i ∩x̂ = x̂ x̂∈Ĉ^s_i-1 ∅ x̂∉Ĉ^s_i-1. Since _x̂(x̂) = 1 e _x̂(∅) = 0, it follows that (C^s_i) = (C^s_i-1) = (Ĉ^s_i-1) = ∫_x̂∈Ĉ^s_i-1 1 (x̂) + ∫_x̂∈(ϵ) ∖Ĉ^s_i-1 0 (x̂) = ∫_x̂∈Ĉ^s_i-1_x̂(x̂) (x̂) + ∫_x̂∈(ϵ) ∖Ĉ^s_i-1_x̂(∅) (x̂) = ∫_x̂∈(ϵ)_x̂(C^s_i ∩x̂) (x̂). * (i = 0) In this case, we have that C^s_0 ∩x̂ = {x̂(s)} x̂∈Ĉ^ϕ(s)_-1 ∅ x̂∉Ĉ^ϕ(s)_-1. Since _x̂({x̂(s)}) = q^x_-1_s and _x̂(∅) = 0 (and, for each x ∈ C^ϕ(s)_-1, it holds that x_-1 = ϕ(s)), it follows that (C^s_0) = q^ϕ(s)_s(C^ϕ(s)_-1) = q^ϕ(s)_s(Ĉ^ϕ(s)_-1) = ∫_x̂∈Ĉ^ϕ(s)_-1 q^x_-1_s(x̂) + ∫_x̂∈(ϵ) ∖Ĉ^ϕ(s)_-1 0 (x̂) = ∫_x̂∈Ĉ^ϕ(s)_-1_x̂(C^s_0 ∩x̂) (x̂) + ∫_x̂∈(ϵ) ∖Ĉ^ϕ(s)_-1_x̂(C^s_0 ∩x̂) (x̂) = ∫_x̂∈(ϵ)_x̂(C^s_0 ∩x̂) (x̂). §.§ Calculating the folding entropy We are finally ready to prove our main result on the folding entropy. Let (Σ_S,σ_ϕ) be a zip shift space with measure induced by the probability distribution p^+. Then ℱ(_ϕ) = ∑_s^- ∈ S^-([r]∑_s^+ ∈ϕ(s^-) -q^s^-_s^+log q^s^-_s^+) p^-_s^- = (𝒞_0) - (𝒞_-1). 
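Before turning to the proof, a quick numerical sanity check of the two expressions just stated; the alphabet, the transition function, and the (non-uniform) weights below are arbitrary choices of ours.

from math import log

S_plus = [0, 1, 2, 3]
p_plus = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}    # non-uniform weights (our choice)
phi = lambda s: s % 2
S_minus = sorted(set(map(phi, S_plus)))
p_minus = {t: sum(p_plus[s] for s in S_plus if phi(s) == t) for t in S_minus}

def entropy(dist):
    return -sum(p * log(p) for p in dist.values() if p > 0)

# Conditional distributions q^{s^-} on the fibers phi^{-1}(s^-).
q = {t: {s: p_plus[s] / p_minus[t] for s in S_plus if phi(s) == t}
     for t in S_minus}
assert all(abs(sum(q[t].values()) - 1) < 1e-12 for t in S_minus)

folding = sum(p_minus[t] * entropy(q[t]) for t in S_minus)
assert abs(folding - (entropy(p_plus) - entropy(p_minus))) < 1e-12
# The measure entropy H(C_0) = H(p^+) then splits as F(sigma) + H(C_{-1}).
assert abs(entropy(p_plus) - (folding + entropy(p_minus))) < 1e-12
print("folding entropy:", folding)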
As discussed in the beginning of the section, it follows from <ref> and <ref> that the folding entropy of is given by ℱ() = ∫_x̂∈(ϵ)[_x̂](ϵ|_x̂) (x̂), in which = _(ϵ) is the quotient measure of (ϵ). Now notice that ϵ|_x̂ = {y}∩x̂{y}∈ϵ = {x̂(s^+)}s^+ ∈ϕ(x_-1), hence from <ref> and <ref> it follows that [_x̂](ϵ|_x̂) = ∑_s^+ ∈ϕ(x_-1) -_x̂ ({x̂(s^+)}) log_x̂ ({x̂(s^+)}) = ∑_s^+ ∈ϕ(x_-1) -q^x_-1_s^+log q^x_-1_s^+. This shows that this value depends only on x_-1, so it is constant on each set Ĉ^s^-_-1. The set 𝒞̂_-1 := Ĉ^s^-_-1s^- ∈ S^- is a partition of (ϵ), since (1) Ĉ^s^-_-1≠∅; (2) Ĉ^s^-_-1∩Ĉ^r^-_-1 = ∅ when s^- ≠ r^-; and (3) (ϵ) =⋃_s^- ∈ S^-Ĉ^s^-_-1. Besides that, it follows from <ref> and <ref> that (Ĉ^s^-_-1) = (C^s^-_-1) = p^-_s^-. Thus the folding entropy of is ℱ() = ∫_x̂∈(ϵ)[_x̂](ϵ|_x̂) (x̂) = ∑_s^- ∈ S^-∫_x̂∈Ĉ^s^-_-1[_x̂](ϵ|_x̂) (x̂) = ∑_s^- ∈ S^-( [r]∑_s^+ ∈ϕ(s^-) -q^s^-_s^+log q^s^-_s^+) (Ĉ^s^-_-1) = ∑_s^- ∈ S^-([r]∑_s^+ ∈ϕ(s^-) -q^s^-_s^+log q^s^-_s^+) p^-_s^-. This proves the first equality of <ref>. Noting that q^s^-_s^+ p^-_s^- = p^+_s^+ (<ref>) and p^-_s^- = ∑_s^+ ∈ϕ(s^-) p^+_s^+, it follows that ℱ() = ∑_s^- ∈ S^-[r]∑_s^+ ∈ϕ(s^-) -q^s^-_s^+p^-_s^-log q^s^-_s^+ = ∑_s^- ∈ S^-[r]∑_s^+ ∈ϕ(s^-) -p^+_s^+ (log p^+_s^+ - log p^-_s^-) = ∑_s^+ ∈ S^+ -p^+_s^+log p^+_s^+ - ∑_s^- ∈ S^- -( [r]∑_s^+ ∈ϕ(s^-) p^+_s^+) log p^-_s^- = ∑_s^+ ∈ S^+ -p^+_s^+log p^+_s^+ - ∑_s^- ∈ S^- -p^-_s^-log p^-_s^-. Finally, since (<ref>) (𝒞_0) = ∑_s^+ ∈ S^+ -p^+_s^+log p^+_s^+ and (𝒞_-1) = ∑_s^- ∈ S^- -p^-_s^-log p^-_s^-, we conclude that ℱ() = (𝒞_0) - (𝒞_-1). In particular, since the measure entropy is given by () = (𝒞_0), then () = ℱ() + (𝒞_-1). § ACKNOWLEDGEMENT N. M. was partially financed by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior Brasil (CAPES) - grant 88887.645688/2021-00. P. M. was partially financed by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior Brasil (CAPES) - grant 141401/2020-6. R.V. was partially supported by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) (grants 313947/2020-1 and 314978/2023-2), and partially supported by Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) (grants 17/06463-3 and 18/13481-0).
http://arxiv.org/abs/2407.02091v1
20240702092638
Efficient Bit Labeling in Factorization Machines with Annealing for Traveling Salesman Problem
[ "Shota Koshikawa", "Aruto Hosaka", "Tsuyoshi Yoshida" ]
cs.LG
[ "cs.LG", "quant-ph" ]
§ ABSTRACT To efficiently find an optimum parameter combination in a large-scale problem, it is a key to convert the parameters into available variables in actual machines. Specifically, quadratic unconstrained binary optimization problems are solved with the help of machine learning, e.g., factorization machines with annealing, which convert a raw parameter to binary variables. This work investigates the dependence of the convergence speed and the accuracy on binary labeling method, which can influence the cost function shape and thus the probability of being captured at a local minimum solution. By exemplifying traveling salesman problem, we propose and evaluate Gray labeling, which correlates the Hamming distance in binary labels with the traveling distance. Through numerical simulation of traveling salesman problem up to 15 cities at a limited number of iterations, the Gray labeling shows less local minima percentages and shorter traveling distances compared with natural labeling. Efficient Bit Labeling in Factorization Machines with Annealing for Traveling Salesman Problem Shota Koshikawa, Aruto Hosaka, and Tsuyoshi Yoshida Information Technology R&D Center, Mitsubishi Electric Corporation, Kanagawa 247-8501, Japan July 8, 2024 ======================================================================================================================================================= § INTRODUCTION Combinatorial optimization problems have gained significant attention across various domains, including logistics, transportation systems, and manufacturing <cit.>, due to their wide-range applications and potentials for cost reduction and efficiency improvement. The computational complexity of these problems is generally classified to NP hardness, resulting in substantially challenging to approach the optimal solution at a reasonable amount of computational resource <cit.>. Renowned for its computational complexity as an NP-hard problem, the traveling salesman problem (TSP) serves as a cornerstone in numerous fields, and being vigorously researched <cit.>. The complexity of such difficult problems can be relaxed by combining machine learning. Especially, factorization machines with annealing (FMA) <cit.> is a useful technique for black-box optimization <cit.>. FMA employs factorization machines (FM) <cit.> with binary variables as a surrogate model. Since the model takes the form of a quadratic unconstrained binary optimization (QUBO), Ising machines can be utilized to efficiently find a good solution for the model <cit.>. The performance of a QUBO solver depends on the labeling method, i.e., how the actual nonbinary variables are replaced by binary variables available in the solver. While the labeling method is a key to characterize how frequently the solver is captured at local solutions, there has been limited research on it <cit.>. It aims at creating a smoother energy landscape by assigning bit states with short Hamming distances to binary variable configurations close in the solution space. By ensuring that similar solutions are represented by bit states with short Hamming distances, we hypothesize that we can achieve more efficient optimization. 
According to the situation described above, this work originally contributes on QUBO formulation of TSP with reduced number of bits by employing FMA, proposal of Gray labeling useful for avoiding local solutions based on the idea of similar bits for similar routes, proposal of the metric for local solution characterization, and comparison of conventional natural labeling and Gray labeling. The remainder of this paper is structured as follows: based on the preliminaries of FMA and TSP in Sec. <ref>, Sec. <ref> explains bit labeling methods of natural and Gray labeling. Sec. <ref> then introduces a local solution metric for efficient characterization of QUBO problems. To validate our approach, Sec. <ref> performs numerical simulations of FMA-based TSP solvers with two labeling methods. Finally, Sec. <ref> concludes the paper. § PRELIMINARIES This section reviews fundamentals of FMA and TSP. §.§ Factorization machines with annealing Rendle proposed an FM model for high prediction performance with efficient high-order feature interactions. The prediction is given by the sum of the linear and the quadratic-order interaction terms <cit.>: y = w_0 +∑_𝗂=1^𝗇w_𝗂x_𝗂 + ∑_𝗂=1^𝗇∑_𝗃=𝗂+1^𝗇⟨𝐯_𝗂, 𝐯_𝗃⟩ x_𝗂 x_𝗃 . The input data is represented as a feature vector 𝐱 = (x_1, x_2, … , x_𝗇) of 𝗇 real-valued features, and y is an objective variable. w_0 is the global bias, w_𝗂 is the weight of the 𝗂-th feature, and a weight vector 𝐰 = (w_1, ⋯, w_𝗇). 𝐯_𝗂 is the 𝗄-dimensional latent vector of the 𝗂-th feature, and the vector sequence 𝐕 = (𝐯_1, ⋯, 𝐯_𝗇). The interaction between features x_𝗂 and x_𝗃 is approximated by the inner product ⟨𝐯_𝗂, 𝐯_𝗃⟩. The model parameters (w_0, 𝐰, 𝐕) are optimized to minimize the error between the predicted and actual values on the training data. Unlike support vector machines, FM use factorized parameters to model all variable interactions. In traditional polynomial models, it was necessary to prepare individual interaction parameters for each combination, such as w_𝗂𝗃x_𝗂x_𝗃. However, x_𝗂 x_𝗃 becomes mostly zero in sparse data, making it almost impossible to calculate w_𝗂𝗃. In contrast, FM represent the magnitude of the interaction of x_𝗂x_𝗃 as ⟨𝐯_𝗂, 𝐯_𝗃⟩, that is, no longer mutually independent of each w_𝗂𝗃. Therefore, even if one or both of the interaction components are zero, if there is a non-zero component of x_𝗂 or x_𝗃 somewhere, the parameters 𝐯_𝗂 and 𝐯_𝗃 can be learned. This implies that FM can indirectly learn interaction effects even from data without the target interaction components. Thus FM are robust in handling sparse data and have a relatively low computational cost <cit.>. This makes it useful for high-dimensional sparse data applications. FM can be combined with an optimization method of annealing <cit.>, where the combination is called FMA. The model equation of FM with binary variables can be rewritten in the QUBO form: y = w_0 + ∑_𝗂=1^𝗇∑_𝗃=𝗂^𝗇 Q_𝗂𝗃x_𝗂x_𝗃 , where Q = (Q_𝗂𝗃) is an 𝗇×𝗇 QUBO matrix, Q_𝗂𝗂= w_𝗂, Q_𝗂𝗃= ⟨𝐯_𝗂, 𝐯_𝗃⟩. Now we explain the optimization method for black-box optimization problems using FMA. The FMA approach comprises four main phases that are repeated in an iterative cycle <cit.>: * Training: The FM model is trained using the available training data. A solution candidate of the single bit sequence 𝐛 were randomly generated, and the pairs of 𝐛 and corresponding energy (objective variables) were added for the initial training. 
The parameters of the FM are optimized to minimize the mean-squared error between the predicted values and the actual energy values. * Sampling: New bit sequences are generated from the trained FM model, focusing on samples with low predicted energy values. Since the FM model is formulated as a QUBO, quantum or classical annealing techniques can be employed to find low-energy states, which correspond to good samples. * Conversion: The bit sequences generated in the sampling are converted back to the original optimization problem's parameters. This aspect will be detailed in Sec. <ref>. * Evaluation: The costs are simulated or experimented using parameters obtained at the previous iteration, and the pairs of the binarized parameters and the corresponding energy are used to update the training data. The FMA approach iterates through these four phases multiple times, gradually refining the approximation of the black-box function, in this case QUBO, and improving the quality of the solutions. After a given number of iterations, the best sample found during the optimization process is returned as the final solution. §.§ Traveling salesman problem TSP is one of the most widely studied combinatorial optimization problems <cit.>, which tries to find the shortest route that visits all predefined points exactly once and returns to the origin. This can be extended to various optimization problems, such as the component assembly sequence in manufacturing, delivery routes in logistics, and data transmission paths in telecommunication networks. Regarding the complexity of TSP, as increasing the number of cities N, the total number of possible routes grows exponentially and reaches (N-1)!, e.g., 8.7 × 10^10 routes for N=15. It is impractical to perform brute-force search under a case with large N. Various algorithms have been proposed to find the optimal solution for TSP, including well-known dynamic programming and branch-and-bound algorithms, reaching the exact solution <cit.>. One of those, Held-Karp algorithm <cit.> shows the time complexity of O(N^2 2^N). On the other hand, these algorithms are difficult to apply to a case with large N, thus often combined with an approximation method, e.g., greedy algorithm <cit.>, local search method <cit.>, genetic algorithm <cit.>, ant colony optimization <cit.>, and quantum/simulated annealing <cit.>. In this work, N=5–15 cities are placed in rectangular coordinates (α , β), where α and β (∈ [0,1]) are randomly obtained as shown in Tab. <ref>. Each city has a unique integer index i ∈{ 0, 1, …, N-1 }. The departure and destination city is indexed by 0. An arbitrary route is described as 𝐫 = (r_1, r_2, ⋯, r_N-1) except for the 0-th city. The objective is to minimize the distance: d(𝐫) = ∑_j=0^N-1√((α_r_j+1 - α_r_j)^2 + (β_r_j+1 - β_r_j)^2), where r_0=r_N=0 according to the definition. § BIT LABELING METHODS This work treats TSP with FMA, so any variables in TSP must be redescribed by binary variables only. This section explains labeling methods of converting the TSP route 𝐫 into the single bit sequence 𝐛. In a well-known labeling method, N^2 bits are employed to formulate N-city TSP, resulting in a quadratic Hamiltonian <cit.>. Recent works with improved labeling have reduced the number of bits to N logN <cit.>. In this manuscript, log denotes the logarithm in base 2. §.§ Bit labelings in channel coding Bit labelings are essential in channel coding for spectrally efficient and reliable communications. 
While the logical layer treats bits, the channel requires symbols, where bits to symbols mapping rule is provided to make bit errors caused by a symbol error as less as possible. It is then better to provide similar labels with a small Hamming distance to neighboring symbols having a small Euclidean distance. A well-known method is binary (reflected) Gray coding <cit.>, where 2^𝗆-ary pulse amplitudes are labeled with 𝗆 bits so that every Hamming distance between the nearest amplitudes is exactly 1. For example, amplitudes {3, 1, -1, -3} are labeled as {00, 01, 10, 11 } with natural coding and {00, 01, 11, 10 } with Gray coding. This work extends the established concept of Gray coding to our binary labeling method, which is expected to be a key to avoid local solutions in optimization problems. §.§ Forward labeling Let l_N(·) and l_G(·) denote the bit labeling function obtained by applying natural labeling and Gray labeling, respectively. The output of these by inputting the route 𝐫 provides the bit sequence 𝐛. Tab. <ref> shows an example for N=5, including the forward labeling 𝐫→𝐛 and the inverse labeling 𝐛→𝐫. Due to the definition, the bit sequence set is generally larger than that of the route set. Thus we employ 𝐛 for the bit sequence having one-to-one correspondence to 𝐫 (used in the forward labeling), and 𝐛 for arbitrary combination of bits (used in the inverse labeling). Natural labeling directly corresponds (N-1)! permutation cases in N-city TSP routes 𝐫 to nonnegative integers m ∈{ 0, 1, … , (N-1)!-1 }, where m is further described by the single bit sequence 𝐛 with a length of ℓ_N=⌈log (N-1)! ⌉ (=⌈∑_i=2^N-1logi⌉), following the straight binary manner. The 𝐛 is obtained by 𝐛=n_ℓ_N(m), where n_·(·) is the function obtaining a bit sequence having a length λ from an arbitrary nonnegative integer γ, i.e., n_λ(γ) = σ_0 ≤ k < λ (η_k(γ)). The η_k(γ) is the function to obtain the k-th bit from an arbitrary nonnegative integer γ with the straight binary, i.e., η_k(γ) = mod (⌊γ/2^k ⌋, 2), where mod(·,·) denotes the modulo function. The σ denotes the bit concatenation function from the most significant bit (the (k_0-1)-th bit) to the least significant bit (the 0-th bit), i.e., σ_0 ≤ k < k_0(b_k) = b_k_0 -1 b_k_1 -2… b_1 b_0 with an arbitrary nonnegative integer k_0. The permutations are arranged in the lexicographical order, e.g., l_N((1, 2, 3, 4))=00000, l_N((1, 2, 4, 3))=00001, l_N((1, 3, 2, 4))=00010, …, l_N((4,3,2,1))=10111 in the case of N=5. On the other hand, our proposal of Gray labeling combines the inversion number and Gray coding. The inversion number is the idea of discrete mathematics and relates to a kind of sort, the bubble sort, of sequences <cit.>. Gray labeling mainly consists of the following two steps: Step 1. For every i-th city (i = 2, 3, …, N-1), enumerate the number of inversion cities, having an index <i and visited after city i except for the 0-th city. Configure the inversion city set 𝒮_i and quantify the set size |𝒮_i|. Step 2. Convert each |𝒮_i| to component bit sequence with the length ⌈log i ⌉ by the Gray coding function g_i(|𝒮_i|). Concatenate the component single bit sequence with the order from i=2 to N-1 to the single bit sequence having a length ℓ_G = ∑_i=2^N-1⌈log i ⌉. This labeling method is explained with a small example; the city route 𝐫=(7, 5, 3, 6, 8, 1, 4, 2) for N=9 shown in Tab. <ref>. Step 1 enumerates the inversion cities. 
For examples, there are 4 smaller numbers (3, 1, 4, 2) after 5, thus 𝒮_5={1, 2, 3, 4} and |𝒮_5|=4, and no smaller numbers after 2, thus 𝒮_2=∅ and |𝒮_2|=0, where ∅ denotes the empty set. Enumerating every inversion number for i=2 to N-1 with the same manner, |𝒮| = (|𝒮_2|, |𝒮_3|, …, |𝒮_8|) = (0, 2, 1, 4, 3, 6, 3) is obtained. Step 2 converts |𝒮_i| to the component bit sequence by the Gray coding function g_i(|𝒮_i|) = n_λ(|𝒮_i|) ⊕ n_λ(⌊ |𝒮_i| / 2 ⌋), where λ = ⌈log i ⌉. ⊕ denotes the operator of bitwise exclusive OR. According to this definition, g_2(|𝒮_2|) → 0, g_3(|𝒮_3|) → 11, …, g_8(|𝒮_8|) → 010, where each length is bare minimum. Note that i=1 is ignored because 1 has no inversion number. Finally, every obtained sequence for i is concatenated from i=2 to N-1=8 into the single bit sequence 𝐛= 01101110010101010 with the length ℓ_G = ∑_i=2^8⌈log i ⌉ = 17. The conversion from 𝐫→𝐛 is injective but not surgective due to redundant description with binary variables. §.§ Inverse labeling Let the inverse labeling function of l_N(𝐫), l_G(𝐫) be l_N^-1(𝐛), l_G^-1(𝐛), to a given single bit sequence. When we employ annealing machines to optimize 𝐛, the obtained combination of binary variables can be arbitrary, i.e., there are totally 2^ℓ cases with ℓ bits. The conversion from 𝐛→𝐫 is surjective but not injective in general because possible cases with the concatenated single bit sequence can be more than the possible (N-1)! routes. Thus we have to define the inverse function 𝐛→𝐫 to be injective. We consider 𝐛, its integer representation based on the straight binary manner m=n_ℓ_N^-1(𝐛), and let m = mod (m, (N-1)!) in natural labeling. Since m < (N-1)!, there exists a route 𝐫=l_N^-1(𝐛), where 𝐛=n_ℓ_N(m). In Gray labeling, the inverse operation recovers the route 𝐫=g_i^-1(|𝒮_i|), where |𝒮_i| = mod (|𝒮_i|, i). An example is again referred to Tab. <ref>, e.g., 𝐛=11011 corresponds to m=27. In natural labeling, m=mod (27, (5-1)!) = 3, and l_N^-1(11011) = l_N^-1(00011) = (1, 3, 4, 2). In Gray labeling, |𝒮| = (|𝒮_2|, |𝒮_3|, |𝒮_4|) = (1, 3, 2), and |𝒮| = (|𝒮_2|, |𝒮_3|, |𝒮_4|) = (1, 0, 2). Therefore, l_G^-1(11011)= l_G^-1(10011) = (2, 4, 1, 3). The bit length for Gray labeling ℓ_G=∑_i=2^N-1⌈log i ⌉ is greater than or equal to that for natural labeling, ℓ_N= ⌈log (N-1)! ⌉ (=⌈∑_i=2^N-1logi⌉). These lengths ℓ_N and ℓ_G are approximated to O(log(N!)) = O(NlogN). The proposed method of combining the inversion number and Gray coding is originated from the idea: similar bits for similar routes. A pair of similar routes just in the relationship of swapping two cities consecutively visited, the Hamming distance between their bit sequence equals exactly 1 for the proposed Gray labeling. Let r_j and r_j+1 be the indices of a pair of cities consecutively visited. In Gray labeling, |𝒮_r_j+1| under r_j < r_j+1 is smaller by 1 than |𝒮_r_j+1| under r_j > r_j+1, and the other |𝒮_i| maintains. When the resultant bit sequence pair obtained from a difference in |𝒮_r_j+1|, the Hamming distance between those is guaranteed to be 1 with Gray coding and not guaranteed with natural coding. Tab. <ref> shows an example of similar routes (a) 𝐫=(7, 5, 3, 6, 8, 1, 4, 2) and (b) 𝐫=(5, 7, 3, 6, 8, 1, 4, 2). In this case, only |𝒮_7| is different from each other and the other |𝒮_i| are identical, and the Hamming distance between their concatenated bit sequence is exactly 1. 
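The forward Gray labeling just described is short to implement. The sketch below (our code, not the authors') computes the inversion counts |𝒮_i|, Gray-codes each of them on ⌈log i⌉ bits, and concatenates the blocks; it reproduces the worked example 𝐫 = (7, 5, 3, 6, 8, 1, 4, 2) ↦ 01101110010101010 of Tab. <ref> and the unit Hamming distance obtained when two consecutively visited cities are swapped.

from math import ceil, log2

def gray_bits(value, width):
    # Binary reflected Gray code of `value` on `width` bits (MSB first).
    g = value ^ (value >> 1)
    return format(g, "b").zfill(width)

def gray_label(route):
    # Route r = (r_1, ..., r_{N-1}) over cities 1..N-1 -> bit string b.
    N = len(route) + 1
    bits = []
    for i in range(2, N):                            # cities i = 2, ..., N-1
        pos = route.index(i)
        S_i = sum(1 for c in route[pos + 1:] if c < i)   # inversion count |S_i|
        bits.append(gray_bits(S_i, ceil(log2(i))))
    return "".join(bits)

route = (7, 5, 3, 6, 8, 1, 4, 2)                     # worked example, N = 9
print(gray_label(route))                             # -> 01101110010101010
assert gray_label(route) == "01101110010101010"

# Swapping two consecutively visited cities changes the label by one bit only.
swapped = (5, 7, 3, 6, 8, 1, 4, 2)
diff = sum(a != b for a, b in zip(gray_label(route), gray_label(swapped)))
print("Hamming distance:", diff)                     # -> 1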
§ LOCAL SOLUTION METRIC AND ANALYSIS Performance of an optimization solver is characterized by the balance of the solution quality and the required computational resource, which can be translated into and the Ising energy and the number of iterations for a solver based on an annealing machine. Our proposed Gray labeling in the previous section would be useful, especially for avoiding local solutions. This section introduces the local solution metric to quantify the expected performance without running actual optimization procedure. Our local solution metric is given by the number of local solutions normalized by the number of all solution candidates, which will be explained later. A solution is defined as a local solution if all of the nearest solutions (with the Hamming distance of 1 from the solution under examination) have worse or equal solution quality. Instead of d(𝐫), let d(𝐛) simply denote the total traveling distance in each route 𝐫 = l_N^-1(𝐛) for natural labeling or 𝐫 = l_G^-1(𝐛) for Gray labeling according to Eq. (<ref>). The local solution flag is defined as f(𝐛) = ∏_k=0^ℓ-1δ (d(𝐛) ≤ d(𝐛⊕ 2^k)) for the single bit sequence 𝐛 and its length ℓ (ℓ_N for natural and ℓ_G for Gray labeling, respectively), where δ (·) is 1 if the argument is true and 0 otherwise. The ⊕ 2^k flips the k-th bit only, to obtain similar single bit sequence apart by the Hamming distance of 1 from 𝐛. An example to compute f for N=5 is explained below. When we treat 𝐛=00110, the corresponding route 𝐫 is (2, 1, 3, 4) in natural labeling, and (4, 1, 3, 2) in Gray labeling, respectively. The set of 𝐛 ^' =𝐛⊕ 2^k is {00111, 00100, 00010, 01110, 10110}, and the set of 𝐫 is thus {(2, 1, 4, 3), (1, 4, 2, 3), (1, 3, 2, 4), (3, 2, 1, 4), (4, 3, 1, 2)} given by l_N^-1(𝐛^') for natural labeling and {(1, 4, 3, 2), (1, 3, 2, 4), (4, 1, 2, 3), (4, 3, 1, 2), (4, 2, 3, 1)} given by l_G^-1(𝐛^') for Gray labeling, respectively. An arbitrary similar route 𝐫^' with the reference route 𝐫 is given by swapping a pair of cities consecutively visited. Here in Gray labeling, any bit sequence from 𝐫^' is described by either one of 𝐛^', corresponding to the Hamming distance between 𝐛 and 𝐛^' equals exactly 1. This feature is unique to Gray labeling. Based on f, the local solution metric p is given by p = 𝔼_𝐛[ f(𝐛) ] , where 𝔼[ ·] denotes the expectation. Fig. <ref> shows the metric p in each labeling for N=5 to 15. There are too many cases to quantify full cases for an N ≥ 11, so we sampled at maximum 10^5 cases randomly. The metric p decreases as increasing N, where Gray labeling shows more rapid decrease than natural labeling. This feature would be advantageous in better convergence in optimization because of avoiding local solutions when exploring ones through bit flips with an annealing machine. Note that, under the condition of a small number of cities, swapping the cities consecutively visited results in a significant change in the route and the distance, e.g., the number of local solutions is 5 for natural labeling and 6 for Gray labeling for N=5, in all of the 2^5 cases. § NUMERICAL SIMULATIONS This section numerically compares natural labeling and Gray labeling in terms of the obtained solution quality and the convergence speed with FMA. As shown in Section <ref>, a solution candidates of the single bit sequence 𝐛 were randomly generated and used in bits 𝐱 for the initial training. After the training, an acquisition function y was constructed with the FM. 
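Before presenting the results, here is, for completeness, a sketch of how the local solution metric p of the previous section can be evaluated in practice for natural labeling and a small N, by exhaustive enumeration of the 2^ℓ bit sequences. The city coordinates are drawn at random here (the coordinates of Tab. <ref> are not reproduced), so the value is only indicative of how the metric is computed, not of Fig. <ref>; swapping in a Gray decoder would give the corresponding Gray-labeling value.

import math, random

def unrank_permutation(m, items):
    # Lexicographic unranking: integer m in [0, len(items)!) -> permutation.
    items, route = list(items), []
    for k in range(len(items), 0, -1):
        f = math.factorial(k - 1)
        route.append(items.pop(m // f))
        m %= f
    return tuple(route)

def natural_inverse(bits, N):
    m = int(bits, 2) % math.factorial(N - 1)
    return unrank_permutation(m, range(1, N))

def tour_length(route, xy):
    path = (0,) + route + (0,)
    return sum(math.dist(xy[a], xy[b]) for a, b in zip(path, path[1:]))

N = 5
random.seed(0)
xy = [(random.random(), random.random()) for _ in range(N)]
n_bits = math.ceil(math.log2(math.factorial(N - 1)))

def is_local_solution(bits):
    d0 = tour_length(natural_inverse(bits, N), xy)
    flips = (bits[:k] + ("1" if bits[k] == "0" else "0") + bits[k + 1:]
             for k in range(n_bits))
    return all(d0 <= tour_length(natural_inverse(b, N), xy) for b in flips)

all_bits = [format(m, "b").zfill(n_bits) for m in range(2 ** n_bits)]
p = sum(map(is_local_solution, all_bits)) / len(all_bits)
print(f"local-solution metric p (natural labeling, N={N}): {p:.3f}")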
Subsequently, the bit sequence 𝐛 minimizing the acquisition function y was estimated using an annealing machine, and the resulting route–distance pair was added to the training data. The numbers of data points for the initial training and for the solution search are denoted as N_i and N_s, respectively. These parameters were set to (N, N_i, N_s) = (5, 15, 45), (7, 100, 300), (9, 300, 900), (11, 1000, 3000), (13, 1000, 3000), (15, 1000, 3000). The comparison results between the two labeling methods are shown in Fig. <ref>. Here, d_opt and d_min indicate the globally optimal distance and the minimum distance obtained up to the current step, respectively. Gray labeling shows a mostly smaller d_min or faster convergence than natural labeling at all optimization steps for every N. In particular, Gray labeling reached the globally optimal solutions for N = 5, 7, 11, while natural labeling did not. For the trial with N = 15, natural labeling and Gray labeling show almost the same balance between solution quality and convergence speed. Fig. <ref> shows the routes obtained by (a) natural and (b) Gray labeling at the final optimization step of our trial, and (c) the globally optimal route for N = 13. The corresponding distances d were 4.48, 3.34, and 3.23, respectively. Compared with the optimal route, natural labeling and Gray labeling yielded routes longer by 39% and 3%, respectively. Overall, Gray labeling is expected to avoid local solutions more frequently than natural labeling, resulting in a better quality–speed balance, as predicted by the local solution metric of the previous section. § CONCLUSION This work addresses the characterization of local solutions and their avoidance through the bit labeling method in FMA, a QUBO solver combined with machine learning. We focused in particular on TSP, where FMA could reduce the required number of bits from N^2 to N log N for N-city TSP. Within the context of FMA-based TSP, two labeling methods, natural and Gray labeling, were compared. While natural labeling converts the (N-1)! routes to lexicographical integers and straight-binary labels, Gray labeling employs the inversion number and Gray coding to realize the idea of similar bits for similar routes, at the cost of a slightly larger number of bits. The newly introduced metric quantifies the local solution ratio without performing actual optimization; Gray labeling showed a rapid reduction of this ratio compared with natural labeling as N increases. In the actual numerical optimization, Gray labeling often showed a better balance between solution quality and convergence speed, owing to its lower probability of being captured at local solutions. Our results suggest that both the proposed Gray labeling and the proposed metric are useful for QUBO solvers combined with machine learning such as FMA. The authors thank Mr. Koichi Yanagisawa, Mr. Isamu Kudo, and Dr. Narumitsu Ikeda of Mitsubishi Electric Corp. for the fruitful discussion.
http://arxiv.org/abs/2407.02009v1
20240702072859
Consistency and stability of boundary conditions for a two-velocities lattice Boltzmann scheme
[ "Thomas Bellotti" ]
math.NA
[ "math.NA", "cs.NA" ]
§ ABSTRACT We explore theoretical aspects of boundary conditions for lattice Boltzmann methods, focusing on a toy two-velocities scheme. By mapping lattice Boltzmann schemes to Finite Difference schemes, we facilitate rigorous consistency and stability analyses. We develop kinetic boundary conditions for inflows and outflows, highlighting the trade-off between accuracy and stability, which we successfully overcome. Stability is assessed using GKS (Gustafsson, Kreiss, and Sundström) analysis and—when this approach fails on coarse meshes—spectral and pseudo-spectral analyses of the scheme's matrix that explain effects germane to low resolutions. 65M06, 65M12, 65N12 § INTRODUCTION So far, like many other topics revolving around lattice Boltzmann schemes, the study of boundary conditions has been strongly polarized towards applications, focusing on multidimensional problems—sometimes with curved boundaries <cit.>—and quite elaborate schemes. Since we believe that it is now time to understand things from a more theoretical point of view, we focus on the simplest method available: a two-velocities scheme. On this numerical method, we would like to assess the consistency and stability of boundary conditions. We found that the answer to these two questions can pass through corresponding Finite Difference schemes on the variable of interest, turning unknowns of the lattice Boltzmann scheme into time steps of a Finite Difference scheme. In our context, the lattice Boltzmann scheme features two unknowns, only one of which approximates the solution of the partial differential equation, the other being a merely numerical unknown. The lattice Boltzmann scheme advances both unknowns from one time step to the next, whereas its corresponding Finite Difference scheme advances the variable of interest alone, using its values at the two previous time steps. The latter schemes are called “corresponding” because they produce—upon taking the right initialization into account—the same dynamics on the variable of interest. Our previous works <cit.> have shown a systematic way of turning lattice Boltzmann schemes into Finite Difference ones in the presence of unbounded domains or bounded domains supplemented with periodic boundary conditions. This secures a rigorous framework to perform numerical analysis on lattice Boltzmann methods. When non-trivial boundary conditions are enforced, we still lack this systematic path towards Finite Difference methods, due to the loss of space invariance stemming from the boundaries. The development of such a transformation in a general setting is beyond the scope of the present paper, which will, however, show that—whenever they can be explicitly constructed by computations—Finite Difference counterparts are indeed a powerful tool to gauge boundary conditions enforced at the lattice Boltzmann level. Concretely, the main points of the present work are the following. * We develop kinetic boundary conditions to handle inflows and—more importantly—outflows. The word “kinetic” must be understood as “without modifying the algorithm, i.e. enforced during the transport phase using distribution functions entering the domain”. * We show that a compromise exists between accuracy and stability. Still, we propose corrections based on boundary source terms to recover the needed accuracy while retaining stability. * We study stability * first, through the GKS (Gustafsson, Kreiss, and Sundström) analysis <cit.>.
However, it can lose its predictive power when the mesh is coarse, and stable boundary conditions can turn out to be unstable and vice versa. * In these cases, we directly analyze the spectrum of the matrix associated with the scheme <cit.>. Furthermore, we link the order of the poles of the reflection coefficient <cit.>—which is a GKS notion—with the number of eigenvalues of the matrix associated with the scheme tending to isolated points when the number of degrees of freedom increases. We also perform analyses by plotting some pseudo-spectra <cit.>. We focus on the approximation of u = u(t, x) ∈ ℝ^M, the solution of the 1D system of M ∈ ℕ* conservation laws ∂_t u(t, x) + ∂_x (φ(u(t, x))) = 0, t ∈ (0, T], x ∈ (0, L), u(t = 0, x) = u°(x), x ∈ (0, L), together with suitable boundary conditions on {0, L} for t ∈ (0, T]. Here, T > 0 is the final time, L > 0 the domain length, φ : ℝ^M → ℝ^M a flux of class C^1, and u° : (0, L) → ℝ^M the initial datum. Most of the time, we will concentrate on the scalar case, thus M = 1, with linear flux φ(u) = V u. Unless otherwise said, we employ V < 0. The problem under consideration thus reads ∂_t u(t, x) + V ∂_x u(t, x) = 0, t ∈ (0, T], x ∈ (0, L), u(t = 0, x) = u°(x), x ∈ (0, L), u(t, x = L) = g(t), t ∈ (0, T], where g : (0, T] → ℝ defines the trace of the solution on the inflow point x = L. As x = 0 is an outflow, no “physical” boundary condition needs to be enforced here. To “give a flavour” of the kind of results proved in the paper, the compromise between accuracy and stability for a scalar linear problem with relaxation rate equal to 2 can be informally stated as follows. Consider a two-velocities scheme with relaxation parameter equal to 2, thus second-order accurate under periodic boundary conditions, tackling (<ref>), (<ref>), and (<ref>) with V < 0. At the boundary, utilize a second-order anti-bounce-back condition on the inflow and an extrapolation of a given order on the missing distribution function on the outflow. Initialize all data at equilibrium. Then the following holds, where consistency is established in <Ref> and <ref>, stability in <Ref>, and the order of convergence is measured in the L^2 norm. * Extrapolation of order 1: the truncation error is of first order in the space step initially at the outflow, of second order initially in the bulk, of second order eventually in time at the outflow, and of third order eventually in time in the bulk; the scheme is GKS-stable; the order of convergence is 3/2. * Extrapolation of order greater than or equal to 2: the truncation error is of second order initially at the outflow, of second order initially in the bulk, of second order eventually in time at the outflow, and of third order eventually in time in the bulk; the scheme is GKS-unstable; the order of convergence is 2. We also informally state <Ref>, which links the order of the pole of the reflection coefficient to the number of eigenvalues tending to isolated points. Consider a nonzero isolated point in the limit spectrum of the corresponding scheme matrix as its dimension tends to infinity. Then—under suitable stability and technical assumptions—the number of eigenvalues of the scheme matrix, counted with their multiplicity, tending towards this point equals the order of the pole of the reflection coefficient of the outflow boundary condition at the same point. Let us sketch the plan of the paper. The numerical scheme and the numerical boundary conditions are presented in <Ref>. The lattice Boltzmann scheme is then turned into its corresponding Finite Difference scheme in <Ref>. This allows us to rigorously study consistency, as presented in <Ref>, and stability, <Ref>. Conclusions and perspectives are proposed in <Ref>. § TWO-VELOCITIES SCHEME This section describes the scheme that we analyze in the paper. We start in <Ref> by introducing the space-time discretization. <Ref> treats the scheme used in the bulk of the domain, whereas <Ref> presents strategies adopted on the boundary points. Finally, <Ref> is devoted to the initialisation of the numerical scheme.
§.§ Space and time discretization The discretization of the spatial domain (0, ) is performed with ∈ grid-points, given by , with ∈0 - 1 and /( - 1). Observe that we include the boundary points = 0 and in the computational domain, respectively via 0 = 0 and - 1 =. Moreover, the definition of works for every ∈, and thus allows to define ghost points outside [0, ]. As far as the time discretization is concerned, we utilize a discrete grid whose points are with ∈. The time step is linked to the space step by / with > 0 homogeneous to a velocity and thus called “lattice velocity”.[In the setting—see <cit.>, <cit.>, and <cit.>—one often uses =. However, we decided to keep the notations more familiar to the community.] This scaling between space and time discretization, known as “acoustic scaling”, is relevant in this context where information—see (<ref>)—travels at finite speed and the numerical method is time-explicit. For the latter reason, restrictions, known as CFL (Courant-Friedrichs-Lewy) conditions, will be needed on to keep the scheme stable. This whole setting is depicted in <Ref>. §.§ Bulk scheme Inside the domain, we utilize the so-called D_1Q_2^ scheme <cit.>, based on two distribution functions ^+∈^ and ^-∈^, each associated with a positively and negatively-moving unknown. At each stage of the algorithm, we define ^+ + ^-, and the algorithm reads: Collision _^±, = (1-) _^±, + ( 12_^±12(_^)^^±, (_^) ), ∈0 - 1.C Transport _^+, + 1 = _- 1^+, , ∈1 - 1, _^-, + 1 = _+ 1^-, , ∈0 - 2. T The relaxation parameter ∈ (0, 2]. We observe that is conserved throughout the collision (<ref>), for _^ = _^+, + _^-, = _^. By the way, we have to interpret _^ as an approximation of (, ), when the latter is smooth enough. This approximation is shown <cit.> to be—on unbounded domains or bounded domains with periodic boundary conditions—first-order accurate in when ∈ (0, 2) and second-order accurate for = 2. §.§ Boundary schemes Looking at the transport phase (<ref>), we see that the scheme is not yet defined on the boundary grid-points of index = 0 (for ^+) and = - 1 (for ^-). As stated in the introduction, we do not change the numerical schemes at the boundary, which boils down to consider “prepared” values for _-1^+, and _^-, laying in ghost cells before transport, <Ref>. For the sake of presentation, we consider the scalar case = 1 and assume that we need to set an inflow condition at = and no boundary condition (outflow) at = 0. §.§.§ Inflow boundary condition The inflow boundary condition is a Dirichlet condition in the spirit of (<ref>). Yet, we cannot enforce this condition directly on , but—according to out policy—we have to make good use of the ghost value _^-, to achieve the desired result. We follow the approach introduced in <cit.> and consider the boundary condition I-BC_ - 1^-, + 1 = _^-, = - _ - 2^+, + ( + 1). The aim of (<ref>) being to preserve overall second-order consistency when = 2, it is different from the standard “anti-bounce-back” rule analyzed in <cit.>, which would read _ - 1^-, + 1 = _^-, = - _ - 1^+, + () and be only first-order accurate. Two essential ideas make (<ref>) suitable in our case: * Information travels two cells at once, since the ghost point is populated using values defined at - 2. * The Dirichlet boundary condition is enforced “in the future”, thus using ( + 1) instead of the time-marching approach, which would rather employ (). §.§.§ Outflow boundary conditions On the outflow, no physical boundary condition has to be enforced. 
However, lattice Boltzmann schemes call for numerical boundary conditions, which boils down to assign the ghost value _-1^+,. The aim is twofold. On the one hand, the boundary condition should allow information to quit the domain. On the other hand, it ought not produce spurious travelling waves counter-propagating against physical ones that can lower the order of the method or—even worse—foster instabilities <cit.>. The rationale to build boundary conditions is making the scheme behave as much as no boundary were present. We therefore extrapolate the missing ghost value _-1^+, from those inside the computational domain following <cit.>, and employ _0^+, + 1 = _-1^+, = ∑_ = 0^ - 1__^+, + _0^ + 1 with _ (-1)^ + 1. In (<ref>), ∈ is the order of the extrapolation, and _0^ + 1 is a source term—that we shall further specify—depending only on ^, ^-1, …. Again, observe that—contrarily to <cit.> and respecting our policy—extrapolations are not enforced on the conserved moment . The low-order extrapolations read in this case: _0^+, + 1 = _-1^+, = _0^+, + _0^ + 11, _0^+, + 1 = _-1^+, = 2_0^+, - _1^+, + _0^ + 1, 2 _0^+, + 1 = _-1^+, = 3_0^+, - 3_1^+, + _2^+, + _0^ + 1. 3 Condition (<ref>) looks similar but is essentially different from the ones proposed by <cit.>. Another boundary condition that we propose for the outflow boundary relies on the scheme written on the conserved moment and the non-conserved moment (^+ - ^-). This latter unknown has equilibrium value ^() = (^+, ()- ^-, ()) = (). We enforce—at the future time + 1—a Neumann boundary condition on using a first-order extrapolation, and set at its equilibrium value. This reads _0^ + 1 = _1^ + 1 = _0^+, + _2^-, , _0^ + 1 = ^ (_0^ + 1) = ^ (_1^ + 1) = ^ (_0^+, + _2^-, ). These equations are employed to devise a value for _-1^+,, and will not change those of the distribution functions inside the computational domain: we have—rewriting on the positive distribution function and adding a source term _0^+, + 1 = _-1^+, = 12(_0^+, + _2^-, ) + 12^ (_0^+, + _2^-, ) + _0^ + 1.O-BC-F §.§ Initialization Finally, to wholly define the numerical scheme, we have to link the initial distribution functions _^+, 0 and _^-, 0 to the initial datum of the Cauchy problem (<ref>) given by ^. Initializing at equilibrium is the longstanding “no-brainer” choice for methods. In our case, using a point-wise discretization of the initial datum, this reads IE_^±, 0 = 12^() ±12 (^()), ∈0 - 1. Even though this choice has been proved compatible with second-order consistency in the bulk, <cit.>, which can be roughly understood by the fact that the equilibrium is an eigen-state of the collision phase (<ref>), we shall see that it can prevent second-order accuracy in conjunction with boundary conditions such as (<ref>) and (<ref>). § CORRESPONDING SCHEME The context being now set, we transform the scheme into a corresponding scheme solely on . From now on, we will be interested in the scalar case = 1, even though some of the results easily extend to > 1. The link between the discretized initial datum and the continuous one will always be _^0 = () for ∈0 - 1. §.§ Bulk and inflow Consider the scheme given by (<ref>), (<ref>), (<ref>), the inflow boundary condition (<ref>), and the outflow boundary condition (<ref>). Then, its corresponding scheme on reads _^1 = 12(_ - 1^0 + _ + 1^0) + 12 ((_ - 1^0) - (_ + 1^0)), _ - 1^1 = (1), _^ + 1 = 2-2(_ - 1^ + _ + 1^) + ( - 1) _^ - 1 + 2 ((_ - 1^) - (_ + 1^)), _ - 1^ + 1 = ( + 1), ∈, with ∈1 - 2. The proof of <Ref> is provided in <Ref>. 
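As an aside on the outflow extrapolations introduced above, reading their weights as e_k = (-1)^k C(p, k+1) for an extrapolation of order p, which is consistent with the stencils listed for orders 1, 2, and 3, the following short check (ours, not the article's code) confirms that these weights reproduce the listed cases and are exact on polynomials of degree p-1 evaluated at the ghost point.

from math import comb

def extrapolation_coefficients(p):
    return [(-1) ** k * comb(p, k + 1) for k in range(p)]

assert extrapolation_coefficients(1) == [1]
assert extrapolation_coefficients(2) == [2, -1]
assert extrapolation_coefficients(3) == [3, -3, 1]

for p in range(1, 6):
    e = extrapolation_coefficients(p)
    for deg in range(p):                    # monomials x^deg with deg <= p-1
        ghost = sum(e_k * k ** deg for k, e_k in enumerate(e))
        exact = (-1) ** deg                 # value of x^deg at the ghost point x = -1
        assert ghost == exact, (p, deg)
print("extrapolation stencils reproduce polynomials of degree p-1 exactly")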
Assumptions on the outflow boundary conditions appear in <Ref>. Moreover, the claim does not hold for (<ref>). Both are due to the fact that the choice of outflow boundary condition can impact the numerical scheme in the bulk, that is, for ∈1 - 2. We have the following peculiar numerical schemes. * The bulk scheme at the first time step (<ref>) is a Lax-Friedrichs scheme for (<ref>). This comes from the specific choice of the initial datum at equilibrium, (<ref>). * The bulk scheme eventually in time (<ref>) is a θ-scheme with * θ = when ∈ [0, 1], between a leap-frog scheme for the two-way linear wave equation ∂_ - ^2∂_ = 0 (with θ = 0) and a Lax-Friedrichs scheme for (<ref>) (with θ = 1). * θ = - 1 when ∈ [1, 2], between a Lax-Friedrichs scheme for (<ref>) (with θ = 0) and a leap-frog scheme for (<ref>) (with θ = 1). * The boundary scheme at - 1 at any time is exact for the inflow boundary condition (<ref>). §.§ Outflow We first provide the corresponding scheme for (<ref>), where the cases = 1 and ≥ 2 need to be distinguished. Consider the scheme given by (<ref>), (<ref>), (<ref>), and the outflow boundary condition (<ref>). Then, its corresponding scheme on reads as follows. For the initial time-step: _0^1 = 12 ( ∑_ = 0 ≠ 1^ - 1__^0 + (_1 + 1) _1^0 ) + 12 ( ∑_ = 0 ≠ 1^ - 1_(_^0) + (_1 - 1) (_1^0) ) + _0^ 1. Eventually in time, for ∈. * For = 1, we have: _0^ + 1 = 2_0^ + 12(2-) _1^ + 2 ((_0^) - (_1^)) + _0^ + 1 + (1-) _0^, * For ≥ 2, we have: _0^ +1 = 12 ( ∑_ = 0 ≠ 1^ - 1 (_ + (1-)_)_^ + (_1 + 1 + (1-)_1) _1^ ) + - 12 ((_0 + _0 ) _1^ - 1 + (_1 + _1 - 1) _2^ - 1 + ∑_ = 3^ (_ - 1 + _ - 1) _^ - 1 ) + 2 ( ∑_ = 0 ≠ 1^ - 1_(_^) + (_1 - 1) (_1^) ) + _0^ + 1 + 1-2 (_0 -_0) _0^, where _0 = -21, _1 = 1 - - 32, _ = (-1)^--2 + 1, ∈2⌊-12⌋ - 1, _ - = (-1)^ + 1 - - - 1, ∈1⌊-12⌋ + 1 + 1 + (-1)^2, with np = (n+p)!(n-p+1)p!(n+1)! the entries of the Catalan's triangle, see <cit.>. Let us consider schemes issued from <Ref> for small . * For = 1, we have: _0^ + 1 = 2_0^ + 2-2_1^ + 2 ((_0^) - (_1^)) + _0^ + 1 + (1-) _0^, ∈. Except for the source term, this boundary scheme has already been introduced in the literature whenever: * = 0, coinciding with <cit.> and <cit.>. * = 2, coinciding with <cit.> and <cit.>. * For = 2, we have: _0^ + 1 = _0^ + (1-) _1^ + ( - 1) _1^ - 1 + ((_0^) - (_1^)) + _0^ + 1 + (1-) _0^, ∈. Whenever = 1,this becomes <cit.> and <cit.>. * For = 3, we have: _0^ + 1 = (2-2)_0^ -_1^ + 2_2^ + 2( - 1)_1^ - 1 + (1-) _2^ - 1 + (32(_0^) - 2(_1^) + 12(_2^)) + _0^ + 1 + (1-) _0^, ∈. From (<ref>), we obtain _0^+, +1 = ∑_ = 0^ - 1__^+, + _0^ + 1 = ∑_ = 0^ - 1_ ( 12_^ + 1-2_^ + 2(_^) )+ _0^ + 1, _0^-, +1 = 12_1^ - 1-2_1^ - 2(_1^). Taking (<ref>) + (<ref>) and (<ref>) - (<ref>) provides: _0^ +1 = 12 ( ∑_ = 0 ≠ 1^ - 1__^ + (_1 + 1) _1^ ) + 1-2 ( ∑_ = 0 ≠ 1^ - 1__^ + (_1 - 1) _1^ ) + 2 ( ∑_ = 0 ≠ 1^ - 1_(_^) + (_1 - 1) (_1^) ) + _0^ + 1, _0^ +1 = 2 ( ∑_ = 0 ≠ 1^ - 1__^ + (_1 - 1) _1^ ) + 1-2 ( ∑_ = 0 ≠ 1^ - 1__^ + (_1 + 1) _1^ ) + 2 ( ∑_ = 0 ≠ 1^ - 1_(_^) + (_1 + 1) (_1^) ) + _0^ + 1, where in the case = 1, we consider _1 = 0. Let us first show that the scheme for = 1 is indeed given by (<ref>) from <Ref>. When considering (<ref>) for = 1, we have to estimate the difference _0^ - _2^, thus we obtain _0^ - _2^ = 2 ( ∑_ = 0 ≠ 1^σ - 1__^-1 + (_1 - 2) _1^-1 + _3^-1 ) + 1-2 ( ∑_ = 0 ≠ 1^σ - 1__^-1 + _1 _1^-1 - _3^ - 1 ) + 2 ( ∑_ = 0 ≠ 1^σ - 1_(_^-1) + _1 (_1^-1) - (_3^ - 1) ) + _0^. 
In order to get rid of the terms in ^ - 1 on the right-hand side of this expression, we consider _0^ + _2^ = 12 ( ∑_ = 0 ≠ 1^ - 1__^ - 1 + (_1 + 2) _1^-1 + _3^-1 ) + 1-2 ( ∑_ = 0 ≠ 1^ - 1__^-1 + _1 _1^-1 - _3^-1 ) + 2 ( ∑_ = 0 ≠ 1^ - 1_(_^-1) + _1 (_1^-1) - (_3^-1) ) + _0^. From this, we deduce that _0^ - _2^ = -2_1^ - 1 + (_0^ + _2^) thus the claim. We now focus on the first cell, indexed by = 0. The expression involving the non-conserved moment on the right-hand side of (<ref>) that we have to estimate reads ∑_ = 0 ≠ 1^ - 1__^ + (_1 - 1) _1^ = 2 ( (_0^2 + _1 - 1) _0^ - 1 + (_0 (_1-1) + _2) _1^ - 1 + (_0 _2 - _1 + 1 + _3) _2^ - 1 + ∑_ = 3^ (_0 _ + _ + 1 - _ - 1) _^ - 1 ) + 1-2 ( (_0^2 + _1 - 1) _0^ - 1 + (_0 (_1+1) + _2) _1^ - 1 + (_0 _2 + _1 - 1 + _3) _2^ - 1 + ∑_ = 3^ (_0 _ + _ + 1 + _ - 1) _^ - 1 ) + 2 ( (_0^2 + _1 - 1) (_0^ - 1) + (_0 (_1+1) + _2) (_1^ - 1) + (_0 _2 + _1 - 1 + _3) (_2^ - 1) + ∑_ = 3^ (_0 _ + _ + 1 + _ - 1) (_^ - 1) ) + _0 _0^, where we used (<ref>) and (<ref>) at the previous time step. In (<ref>), one has to set _ = 0 for ≥. With this in mind, we try to get rid of the expression in ^ - 1 on the right-hand side of (<ref>) using the equations on ^. We introduce max(1, - 1) + 1, the coefficients _0, …_ - 1∈, and write ∑_ = 0^ - 1__^ = 2 ( (_0 _0 + _1 )_0^ - 1 + ((_1 + 1)_0 + _2 ) _1^ - 1 + (_2 _0 + _3 + _1) _2^ - 1 + ∑_ = 3^ (__0 + _ + 1 + _ - 1) _^ - 1 ) +1-2 ( (_0 _0 + _1 )_0^ - 1 + ((_1 - 1)_0 + _2 ) _1^ - 1 + (_2 _0 + _3 - _1) _2^ - 1 + ∑_ = 3^ (__0 + _ + 1 - _ - 1) _^ - 1 ) +2 ( (_0 _0 + _1 )(_0^ - 1) + ((_1 - 1)_0 + _2 ) (_1^ - 1) + (_2 _0 + _3 - _1) (_2^ - 1) + ∑_ = 3^ (__0 + _ + 1 - _ - 1) (_^ - 1) ) + _0 _0^. using (<ref>) and (<ref>) at the previous time step. As before, one can keep notations general by assuming that _ = 0 for ≥. We thus have to solve an over-determined linear system with + 1 equations on unknowns _0, …, _ - 1, which reads _0 _0 + _1 = _0^2 + _1 - 1, (_1 - 1)_0 + _2 = _0 (_1+1) + _2, _2 _0 + _3 - _1 = _0 _2 + _1 - 1 + _3, __0 + _ + 1 - _ - 1 = _0 _ + _ + 1 + _ - 1, ∈3, where the fact that _ = 0 for ≥ and _ = 0 for ≥ is understood. We now show that (<ref>) admits a unique solution. Summing all the equations in (<ref>), the left-hand side becomes zero, since ∑_ = 0^ - 1_ = 1. Concerning the right-hand side, it becomes _0^2 + _1 - 1 + _0 (_1+1) + _2 + _0 _2 + _1 - 1 + _3 + ∑_ = 3^ ( _0 _ + _ + 1 + _ - 1) = _0 ( ∑_ = 0^_ +1 ) + 2 ( ∑_ = 1^_ - 1 ) = 0. We can therefore remove the last equation and obtain a square system of the form A = b, where A = à + 1, with à a tridiagonal Toeplitz matrix with zeros on the diagonal, -1 on the subdiagonal, and 1 on the supradiagonal, and = (_0, …, _ - 1). We show that A has full rank to conclude. Using the matrix-determinant lemma <cit.>, we gain (A ) = (Ã) + 1 (Ã). Straightforward computations deliver (Ã) = 12 (1+(-1)^) and 1 (Ã) = 12(1-(-1)^, -1-(-1)^, …), thus (A) = 12 (1+(-1)^ + ∑_ = 0^ - 1 ((-1)^ - (-1)^)_ ) = 12 (1 + ∑_ = 0^ - 1 ) = 12(1 + 2^ - 1) = 2^ - 1≥ 1 > 0. The unique solution _0, …, _ - 1 is used and, equations from (<ref>) yield ∑_ = 0 ≠ 1^ - 1__^ + (_1 - 1) _1^ = ∑_ = 0^ - 1__^ - ( (_0 + _0 ) _1^ - 1 + (_1 + _1 - 1) _2^ - 1 + ∑_ = 3^ (_ - 1 + _ - 1) _^ - 1 ) + (_0 - _0) _0^. Back into (<ref>) , we obtain _0^ +1 = 12 ( ∑_ = 0 ≠ 1^ - 1 (_ + (1-)_)_^ + (_1 + 1 + (1-)_1) _1^ ) + - 12 ((_0 + _0 ) _1^ - 1 + (_1 + _1 - 1) _2^ - 1 + ∑_ = 3^ (_ - 1 + _ - 1) _^ - 1 ) + 2 ( ∑_ = 0 ≠ 1^ - 1_(_^) + (_1 - 1) (_1^) ) + _0^ + 1 + 1-2 (_0 -_0) _0^. 
By formally computing the coefficients for different , we can prove that the solution of (<ref>) for ≥ 2 is explicitly given by _0 = -21, _1 = 1 - - 32, _ = (-1)^--2 + 1, ∈2⌊-12⌋ - 1, _ - = (-1)^ + 1 - - - 1, ∈1⌊-12⌋ + 1 + 1 + (-1)^2, where np are the coefficients of the Catalan's triangle. We observe that _0 - _0 = 2, _0 + _0 = 2( - 1), _1 + _1 - 1 = -(-1)(-2). We now deal with (<ref>). For this condition, we have to rely on the specific problem at hand, in particular a linear problem with () =. The difficulties in establishing a more general result come from the fact that this boundary condition is constructed enforcing equilibrium in the future (time + 1), thus strongly depends on the choice of . The corresponding scheme at the boundary away from the initial time is conjectured using optimization. This boils down to set, ignoring the source terms for the sake of presentation, that for ≥ 2, the conserved moment on the scheme fulfills the constraint _0^ + 1 = ∑_∈_ (, )_^ + ∑_∈_ (, ) _^ - 1, where / is the Courant number, the coefficients _ and _ are polynomials in and , and run simulations with random initial data for different and , each time minimizing ∑_ = 3^/ |_0^ - (∑_∈_ (, )_^-1 + ∑_∈_ (, ) _^ - 2 ) |^2. Consider a linear problem with () = and the scheme given by (<ref>), (<ref>), (<ref>), the inflow boundary condition (<ref>), and the outflow boundary condition (<ref>). Then, the corresponding scheme on the conserved moment reads, for the first time step _0^1 = 14(1+2 + ^2^2) _0^0 + 12(1-) _1^0 + 14(1-^2^2) _2^0 + _0^1, _^1 = 12(_ - 1^0 + _ + 1^0) + 2 (_ - 1^0 -_ + 1^0), ∈1 - 2, _ - 1^1 = (1). For the second time step _0^2 = 116( ^4^4 + 2 ^3^3 - 4 ^2^2 - 2 + 3 + 2 ^3^3 + 6 ^2^2 + 6 + 2)_0^0 - 14( ^3^3 + ^2^2 - - )_1^0 - 116( ^4^4 - 6 ^2^2 + 5 + 2 ^3^3 + 2 ^2^2 + 6 - 10)_2^0 + 18( ^3^3 + ^2^2 - - - 2 ^2^2 + 2)_3^0 + 14(^2^2- + 2+2)_0^1 + _0^2. _1^2 = 18(^3^3 + ^2^2 - - + 2 ^2^2 + 4 + 2)_0^0 - 12(^2^2 - )_1^0 - 18(^3^3 - ^2^2 - + + 2 ^2^2 - 2)_2^0 + 14(^2^2 - - 2 + 2)_3^0 + 12( - + 2)_0^1 _^2 = 2-2(_ - 1^1 + _ + 1^1) + ( - 1) _^0 + 2 (_ - 1^0 - _ + 1^0), ∈3-2, _ - 1^2 = (2) And eventually in time ≥ 2 _0^ + 1 = 14 ( +1)(( - 1) + 2) _0^ + (-12 ( + 1) + 1) _1^ - 14 ( +1) (( + 1) - 2) _2^ + 12 ( +1) (-1)_0^-1 - 12 ( +1) (-1)(-2)_1^-1 + _0^ + 1 - (-1)^2 _0^ - 1, _^ + 1 = 2-2(_ - 1^ + _ + 1^) + ( - 1) _^ - 1 + 2 (_ - 1^ - _ + 1^), _ - 1^ + 1 = ( + 1). with ∈1 - 2. Looking at (<ref>), due to the presence of quadratic terms in , we see that a possible proof of <Ref> would be non-standard. In the standard proofs germane to <Ref> and <ref>, the dependence on could only be linear because stemming from linear terms in . § CONSISTENCY OF THE BOUNDARY CONDITIONS Lattice Boltzmann schemes being turned into methods, we now study their consistency, theoretically, in <Ref>, using modified equations <cit.>. This analysis indeed clarifies the role of initialization, which may lead to a loss of order, that we eventually correct in <Ref>, introducing ad hoc source terms. We corroborate these findings through numerical experiments as presented in <Ref>. §.§ Modified equations We first notice that, contrarily to physical inflow boundary conditions, schemes enforcing numerical outflow boundary conditions can lose one order of consistency without compromising the overall order of accuracy, as claimed in <cit.>. This is important while pondering the following two propositions. Consider the scheme given by (<ref>), (<ref>), (<ref>), and the outflow boundary condition (<ref>). Take _0^ = 0 for ∈. 
Then, the modified equations obtained using the corresponding scheme from <Ref>, computed at the outflow = 0, are as follows. * For = 1: ∂_(0, 0) + 12∂_ (() - ) (0, 0) = , ∂_(, 0) + 2( - 2)∂_(, 0) + 2∂_(())(, 0) = , > 0. * For ≥ 2: ∂_(, 0) + ∂_(())(, 0) = , ≥ 0. The proof of <Ref> is given in <Ref>. Consider the scheme given by (<ref>), (<ref>), (<ref>), and the outflow boundary condition (<ref>). Take _0^ = 0 for ∈, and a linear flux () =. Then, the modified equation obtained using the corresponding scheme from <Ref>, computed close to the outflow = 0, are as follows. For (<ref>) ∂_(0, 0) + 12 (^2 + - 2)∂_(0, 0) = . For (<ref>), we obtain ∂_(0, 0) + 116 ( 2^3^2 + 8 ^2 + 6 - 16 + (^4^3 - ^3^2 - 7 ^2 + + 6 ) )∂_(0, 0) = . For (<ref>), we have ∂_(0, ) + 18 ( 2^2 + 6 - 4 + (^3^2 - 2^2 - + ) ) ∂_(0, ) = . Eventually, for (<ref>), we have ∂_(, 0) - 1 - /2 (/ + 1 ) + ( - 1 + / )/1 + (/ + 1 ) + ( - 1 )∂_(, 0)= , > 0. The previous consistency study relies on corresponding schemes. Though one may wonder whether the analysis could be carried out on the original scheme, the answer is negative. The use of procedures relying on the quasi-equilibrium for , such as Chapman-Enskog expansions, Maxwell iteration <cit.>, <cit.> or equivalent equations <cit.> at the outflow produces wrong results, which are in particular independent of (indeed the one for the equilibrium situation, where = 1), and neither match the conclusions from the corresponding scheme nor the numerical experiments. We tried to circumvent these drawbacks by applying the scheme twice and observing it every two time steps, in the spirit of <cit.>, without a conclusive outcome. §.§ Boundary sources to compensate initializations at equilibrium §.§.§ Outflow condition (<ref>) Looking at (<ref>), we remark that, whenever ∈ (0, 2), neither the modified equations (<ref>) nor (<ref>) describe the behavior of a scheme consistent with the target equation (<ref>). However, this does not reduce the order of the method, which is just first-order accurate—as observed in <cit.>—because outflows can be dealt with using one order less without impact. When = 2, with the bulk method becoming second-order accurate, (<ref>) indicates that the boundary scheme away from the initial time is first-order accurate, which is fine. Nevertheless, the initial boundary scheme is not consistent, as seen by (<ref>), and shall cause order reduction when the initial datum fulfills ∂_(0) ≠ 0. In this circumstance, the order of convergence will be 3/2 in for the L^2-norm. This can be seen in the following way: construct the local truncation error ϵ_^(, ) - _^. Assume that we are solving the linear advection equation, so that ϵ_^ fulfills the same numerical scheme as the solution _^ with suitable source terms, all proportional to ^2 except for the one—denoted by f_0^1—concerning ϵ_0^1, which is of order . Imagine, to simplify, to consider a semi-infinite problem with ∈ and that the assumptions in <cit.> are fulfilled. For smooth initial data, we have f_^0 = 0 and f_^1 = -2( + 1) ∂_(0) χ_ = 0 + ^2, whereas all other truncation error is ^2 and thus negligible in what follows. Using <cit.> (with zero interior forcing term, which is true at leading order), we obtain √(∑_∈|ϵ_^|^2)≤sup_ℓ∈√(∑_∈|ϵ_^ℓ|^2)≤√(C ( ∑_∈|f_^0|^2 + ∑_∈|f_^1|^2 ))≤√(C)^3/22 | + 1 | |∂_(0)| + ^2, where the second inequality we employed relies on the fact that the scheme is stable in the L^2 norm, which is not true for any other L^p norms. 
We now construct a source term _0^ to achieve second-order accuracy when = 2. To is done requesting that the initial scheme at the boundary (<ref>) be an upwind scheme, obtaining _0^1 = 12(_0^0 - _1^0) + 12 ((_0^0) - (_1^0)). Since the bulk scheme at the boundary (<ref>) is first-order consistent, which is fine, we want to perturb it as little as possible. We thus simply enforce _0^ + 1 + (1-)_0^ = 0, hence _0^ = ( - 1)^ - 1_0^1 for ∈. §.§.§ Outflow condition (<ref>) As far as (<ref>) is concerned, which shares the same issue with (<ref>), we present the procedure only in the linear case. Nevertheless, the dependence of the bulk scheme at the boundary (<ref>) on the source term _0^ is also used in a non-linear context, for we can compute the initial schemes corresponding to the choice of initial datum and we notice that the coefficients in from of _0^ in (<ref>) do not depend on , the advection velocity. Source terms on the boundary are tuned to obtain an upwind scheme at the boundary initial scheme (<ref>) and an upwind scheme—applied twice—for the second scheme at the boundary (<ref>). This gives _0^1 = 14 ((-^2^2+2+3)_0^0 + (-2 - 2) _1^0 + (^2^2 - 1)_2^0), _0^2 = (12++^22^2 + 4(1 - ^2^2)) _0^0 + 18( 2 - 12 - 14^2^2 + 3(-1-+^2^2 + ^3^3)) _1^0 - 14 (2 - 2 - 4^2^2 + (-1 + ^2^2)) _2^0 - 18 (2 - 2^2^2 + (-1- + ^2^2 + ^3^3 ))_3^0. Under this choice, it is easy to see that also (<ref>) becomes first-order accurate. We shall therefore take _0^ = (-1)^ - 2_0^2 for even and _0^ = (-1)^ - 1_0^1 for odd. §.§ Numerical simulations We now verify the findings from <Ref> and <ref> using the original algorithm. We first test convergence for the advection equation, using = 1, = -1/2, and measuring the L^2 error at the final time = 1. The initial datum is given by () = sin(). The results in <Ref> are in agreement with the expected convergence rates. In particular, we observe the order 3/2 whenever = 2 and (<ref>) as well as (<ref>) are employed without source terms. When corrections are used, we see that the error constant for (<ref>) is slightly better than the one for (<ref>). We also showcase a non-linear problem, simulating the solution of the Burgers equation, with () = -^2/2 with = 1. We take = 1, final time = 0.2, and initial datum () = 1/2 + 1/2×tanh ( 21-4^2 ), if |2| < 1, sign (2), if |2| ≥ 1, which ensures that the left boundary is an outflow and the right boundary an inflow. Once again, the results in <Ref> agree with the theoretical predictions. § STABILITY OF THE BOUNDARY CONDITIONS Stability is another cornerstone in the analysis of numerical schemes. We start—in <Ref>—by using the GKS theory, which considers decoupled problems for each boundary, set on the half-line. The numerical validations presented in <Ref> feature results that are sometimes surprising in view of the GKS analyses. In these circumstances, in <Ref>, we propose alternative tools, namely the so-called “matrix method” and pseudo-spectra, to provide an alternative point of view on this matter. Throughout the section, we consider a linear problem with () =. In this context, we have seen that the bulk scheme reads _^ + 1 = _-1_ - 1^ + _1_ + 1^ + _0 _^ - 1, ∈1 - 2, where _±1 = 12(2-∓) and _0 = - 1. All the considered outflow boundary conditions (<ref>) and (<ref>) recast as boundary schemes of the form _0^ + 1 = ∑_ = 0^ - 1__^ + ∑_ = 0^ - 1__^ - 1, where , are independent of . 
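Since both the convergence tests above and the stability experiments below can be run directly on the corresponding schemes, a compact reference implementation of this generic two-level recurrence, with one outflow row and a Dirichlet inflow, may be useful. In the sketch below the bulk coefficients and the outflow row are inputs; the values used in the usage lines (relaxation parameter, Courant number, a zeroth-order outflow row, identical first two time levels) are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np

def march(u0, u1, a_m1, a_p1, a_0, b_row, c_row, inflow, n_steps):
    """Time-march the generic two-level recurrence
        u_j^{n+1} = a_m1*u_{j-1}^n + a_p1*u_{j+1}^n + a_0*u_j^{n-1},  0 < j < N-1,
    with outflow row  u_0^{n+1} = b_row . u^n + c_row . u^{n-1}  and a
    Dirichlet inflow value at j = N-1 supplied by inflow(n+1)."""
    u_prev, u = u0.copy(), u1.copy()
    history = [u_prev.copy(), u.copy()]
    for n in range(1, n_steps):
        u_new = np.empty_like(u)
        u_new[1:-1] = a_m1 * u[:-2] + a_p1 * u[2:] + a_0 * u_prev[1:-1]
        u_new[0] = b_row @ u + c_row @ u_prev
        u_new[-1] = inflow(n + 1)
        u_prev, u = u, u_new
        history.append(u.copy())
    return np.array(history)

# Illustrative usage with assumed parameters and boundary row.
N = 101
x = np.linspace(0.0, 1.0, N)
omega, courant = 1.5, -0.5                       # hypothetical values
a_m1 = 0.5 * (2.0 - omega * (1.0 - courant))     # assumed reconstruction
a_p1 = 0.5 * (2.0 - omega * (1.0 + courant))     # of the bulk coefficients
a_0 = omega - 1.0
b_row = np.zeros(N); b_row[0] = 1.0              # illustrative zeroth-order outflow row
c_row = np.zeros(N)
u0 = np.sin(2.0 * np.pi * x)
hist = march(u0, u0, a_m1, a_p1, a_0, b_row, c_row, lambda n: 0.0, 200)
print(np.abs(hist[-1]).max())
```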
For the stability constraints under unbounded domain or periodic boundary conditions are the conditio sine qua non to be met even in presence of non-trivial boundary conditions, let us provide them. The stability conditions for (<ref>) on an infinite domain or with periodic boundary conditions in the L^2 norm are ∈ (0, 2) and ||≤ 1, or = 2 and ||<1. §.§ GKS stability The GKS theory stipulates that we can study the stability of each boundary condition by looking at the corresponding semi-infinite problem on [0, +∞), hence we take ∈. Several slightly different definitions of GKS stability exist, sometimes known as “strong stability” <cit.>. We are going to consider the one introduced in <cit.>, <cit.>, and <cit.>. Adapted to the context of the present paper, it reads: it exists α_0 ≥ 0 and C > 0 such that, for , small enough ( α-α_0/1+α ) ∑_ = 2^+∞ e^-2α |_0^|^2 + ( α-α_0/1+α )^2 ∑_ = 2^+∞∑_ = 0^+∞ e^-2α |_^|^2 ≤ C ( ( α-α_0/1+α ) ∑_ = 2^+∞ e^-2α ( + 1) |g_0^|^2 + ∑_ = 2^+∞∑_ = 1^+∞ e^-2α (+1) |f_^|^2 ), for every α > α_0, defined for zero-initial data, meaning _^0 = _^1 = 0. In (<ref>), g_0^ represents a source term in the boundary scheme, whereas f_^ encodes a source term in the bulk scheme. Let us provide some remarks on this definition of stability. * Quantities in (<ref>) are summed throughout time, since this approach relies on a Laplace transformation. * The estimate (<ref>) contains the decay factor e^-2α. This could also make solutions exploding with having finite sums. * The previous stability estimate must hold only for and small enough. To the best of our knowledge, few authors pointed out this fact, see Trefethen <cit.>. There could be cases where a scheme is GKS-stable—for and small enough—but where instabilities are observed for finite—and generally large—. * It has been conjectured that GKS stability may imply L^2 stability, <cit.>. The strength of GKS theory is that it allows theoretical stability/instability proofs by a normal mode analysis that we shall consider, and is able to provide necessary and sufficient conditions—at the price of being quite involved. Practically, to check whether (<ref>) holds, we first introduce the -transformation, which reads () = ∑_∈^-^, assuming ^0 = 0. §.§.§ Inside the domain The -transformed bulk scheme (<ref>) is _() = _-1_-1() + _1 _+1() + _0 ^-1_(), ∈, sometimes named “resolvent equation”. Introducing the ansatz _() = ^ yields the characteristic equation () = _-1^-1 + _1, with () - _0 ^-1. Notice that (<ref>) should be intended as a quadratic equation on = (), and could be obtained inserting the modal ansatz _^ = ^^ into (<ref>). The equation (<ref>) is the dispersion relation of the scheme when taking = e^iω and = e^i, <cit.>. Each mode (, ) ∈×—where gives the time dynamics and the space structure—can be classified <cit.> according to its position with respect to the unit sphere. Under the stability conditions given by <Ref>. * When _-1 = 0, thus for = 21-∈ [1, 2] (and ∈[-1, 0]), the characteristic equation (<ref>) has one solution () such that |()| > 1 for ||>1. * When _1 = 0, thus for = 21+∈ [1, 2] (and ∈[0, 1]), the characteristic equation (<ref>) has one solution () such that |()| < 1 for ||>1. * Otherwise, the characteristic equation (<ref>) has two solutions () and (), such that |()| < 1, for ||>1, (stable root), |()| > 1, for ||>1, (unstable root). Furthermore, setting (, ) _-1/_1, we have (± 1) = ±(, ), if < 0, ± 1, if > 0. (± 1) = ± 1, if < 0, ±(, ), if > 0. (± (1-)) = ± 1, if < 0, ±(, ), if > 0. 
(± (1-)) = ±(, ), if < 0, ± 1, if > 0. Remark that, regardless of the sign of , the group velocity (that can be computed when the scheme is dispersive, when = 2) <cit.> is positive for (meaning it is right-going), and negative for (meaning it is left-going), for the values presented in <Ref>. Instabilities arising at the outflow are associated with right-going modes propagating inside the domain. When ∈ (0, 2), we cannot have, for the values presented in <Ref>, both and on the unit circle, <Ref> still to be stated, which suggests that there is little interest in considering the concept of group velocity in these circumstances. Observe that the value = 2/1 + || has already been found when dealing with the monotonicity and L^∞ stability of the scheme at hand, see <cit.>. Let us distinguish different situations according to the degree of (<ref>) as equation in . * When _-1 = 0, thus for = 21-, we see that stability commands that ∈ [1, 2] and ∈[-1, 0]. The equation (<ref>) can be explicitly solved to yield () = -1/2 + +1/2^-1. One easily sees that (1) = 1, hence we set = 1+ϵ with 0< ϵ≪ 1 and = 1+δ, and perform a perturbation analysis to understand the sign of δ. This provides δ = - ϵ>0, hence we have that |()|>1 for ||>1 (roots are continuous functions of the parameter ), and we can therefore call by the nickname . * When _1 = 0, thus for = 21+, we see that stability commands that ∈ [1, 2] and ∈[0, 1]. The equation (<ref>) gives () = ( +1/2 + -1/2^-1 )^-1, and thus (1) = 1. Setting = 1+ϵ with 0< ϵ≪ 1 and = 1+δ, we have once again δ = - ϵ<0, hence |()|<1 for ||>1 and we can call by . * Otherwise, <cit.> ensures—thanks to stability—that roots split into two groups, those of modulus smaller than one and those of modulus larger than one when || > 1. Furthermore, <cit.> ensure that each group contains one root, called and respectively. Solving the quadratic equation (<ref>) gives (± 1) = {1, -1, , -} and the same perturbation as before allows to conclude. The same holds for = ±(1-). Under the stability conditions given by <Ref>, the function (, ) has the following properties. * Domain of definition. * For < 0, it is defined for all . * For > 0, it is defined for all except at = 2 + 1∈ [1, 2], where a vertical asymptote exists such that lim_→ ( 2 + 1)^±(, ) = ∓∞. * Monotonicity. For < 0 (resp., > 0) the function is monotonically decreasing (resp., increasing) in . * Sign. 1cFor < 0 1cFor > 0 (, ) ≤ 0, for 21-≤≤ 2, (, ) > 0, for 0 < < 21-. (, ) < 0, for 21+< ≤ 2, (, ) > 0, for 0 < < 21+. * Key values and bounds. We have (0, ) = 1 and (2, ) = -1. Moreover, ()|(, )|>() for ∈ (0, 2). Now that we have classified and characterized the roots of the characteristic equation (<ref>) through Lemmas <ref> and <ref>, we look for general solutions of the resolvent equation (<ref>), which—see <cit.>—are of the form _() = ()()^ + ()()^, ∈, as long as () and () are distinct.[Otherwise, small modifications in the spirit of linear multistep recurrences need to be introduced.] Since we would like that (_())_∈∈ L^2() for every ||≥ 1, we take ()= 0. This yields a so-called “admissible solution” <cit.>, reading _() = ()()^. Indeed, _() now fulfills the following definition. Let _() fulfill the resolvent equation (<ref>). Then, _() is said to be an “admissible solution” if * When ||>1, then (_())_∈∈ L^2(). * When || = 1, then _() is the limit of an admissible solution defined for arguments of modulus strictly larger than one. Otherwise said, we have _() = lim_δ→ 0^+w_( (1+δ)) with (w_( (1+δ)))_∈∈ L^2() for all δ > 0. 
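Numerically, the splitting of the spatial roots used to build admissible solutions can be checked by solving the quadratic obtained from the modal ansatz u_j^n = z^n κ^j in the bulk recurrence. The helper below returns the roots of smallest and largest modulus for a given z with |z| > 1; the sample coefficient expressions are reconstructed from the corresponding bulk scheme and should be regarded as assumptions.

```python
import numpy as np

def spatial_roots(z, a_m1, a_p1, a_0):
    """Roots in kappa of  a_p1*kappa^2 - (z - a_0/z)*kappa + a_m1 = 0,
    obtained from the ansatz u_j^n = z^n kappa^j in the bulk recurrence
    u_j^{n+1} = a_m1*u_{j-1}^n + a_p1*u_{j+1}^n + a_0*u_j^{n-1}.
    Returned sorted by modulus: (stable root, unstable root)."""
    g = z - a_0 / z
    roots = np.roots([a_p1, -g, a_m1])
    roots = roots[np.argsort(np.abs(roots))]
    return roots[0], roots[-1]

# Check on a few |z| > 1 that one root lies inside and one outside the
# unit circle (parameter values are hypothetical).
omega, courant = 1.5, -0.5
a_m1 = 0.5 * (2.0 - omega * (1.0 - courant))     # assumed reconstruction
a_p1 = 0.5 * (2.0 - omega * (1.0 + courant))     # of the bulk coefficients
a_0 = omega - 1.0
for z in [1.2, 1.2 * np.exp(0.7j), -1.5]:
    ks, ku = spatial_roots(z, a_m1, a_p1, a_0)
    print(z, abs(ks) < 1.0, abs(ku) > 1.0)
```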
§.§.§ At the outflow boundary The admissible solution features—in our case—one free parameter (). One thus considers the -transformation of the boundary scheme (<ref>) _0() = ∑_ = 0^ - 1__() + ^-1∑_ = 0^ - 1_^ - 1() that allows to determine the arbitrary constant. The so-called “eigenvalue problem”, <cit.> is made up of the transformed bulk equation (<ref>) plus the transformed boundary scheme (<ref>). An admissible solution, satisfying the boundary equation (<ref>), and not identically zero is called “eigensolution”, <cit.>. Eventually GKS stability/instability is checked using the following result. A scheme with boundary, tackling the transport equation and being L^2-stable in its periodic version, is GKS-stable (i.e. (<ref>) holds) if and only if it does not admit any non-trivial admissible solution satisfying the homogeneous boundary condition (i.e. an eigensolution). We are now able to study GKS stability for the outflow boundary conditions that we have considered. For (<ref>), the following result allows to conclude. For ≥ 1, the eigenvalue problems associated with (<ref>) have the following roots: (, ) = (1, 1) and (, ) = ( - 1, -(, )), and for ≥ 2, (, ) = (1-, 1). Moreover, when = 1, 2, all the roots of the eigenvalue problems are those listed above. Let us start by the case = 1. The -transformed boundary scheme is _0() - 2 (1+)_0() - 12(2--) _1() = 0 Inserting the ansatz _() = ^ gives - 2 (1+)- 12(2--) () = 0. We solve this equation (<ref>) in (), replace this into (<ref>), and eventually solve in , obtaining = 1, -1. Replacing into () given by (<ref>), we obtain (1) = 1 and ( - 1) = -(, ), as claimed. Let us now consider = 2. We consider the equation ^2 - ((1+) + (1- - ) ()) + (1-) () = 0 in () and plugging into the bulk equation (<ref>) provides the critical values = 1, ±( - 1). Into the solution () of (<ref>), we obtain (1) = 1, ( - 1) = -(, ), and (1-) = 1. Let us finish by studying the general case ≥ 2, were we have to analyze = 12 ( ∑_ = 0^ - 1_^ + (1-) ∑_ = 0^ - 1_^ + ) + - 12 (∑_ = 0^-1_^ + 1 + ∑_ = 0^-1_^ + 1 - ^2 )^-1 + 2 ( ∑_ = 0^ - 1_^ - ). The first and last roots are found by setting = 1 = 12 ( ∑_ = 0^ - 1_ + (1-) ∑_ = 0^ - 1_ + 1 ) + - 12 (∑_ = 0^-1_ + ∑_ = 0^-1_ - 1 )^-1 + 2 ( ∑_ = 0^ - 1_ - 1 ), hence ^2 + ( - 2) + 1- = 0, so finally = 1 and = 1-. The intermediate root can be checked by setting = - 1 and = (, ) and proving the claim by recurrence over , using the definition of the Catalan triangle. The only thing that we are left to understand is—utilizing <Ref>—whether the roots in <Ref> correspond to , leading to a GKS-unstable scheme, or to , ensuring stability. The GKS stability of the boundary condition (<ref>) is as follows. * For = 1. When  < 0, GKS-stable. When  > 0, GKS-unstable, with unstable mode  = 1, = = 1. * For = 2. When  < 0, GKS-stable for ∈ (0, 2). GKS-unstable for  = 2 with unstable mode  = -1, = = 1. When  > 0, GKS-unstable, with unstable mode  = 1, = = 1. * For ≥ 3. When  < 0, (probably) GKS-stable for ∈ (0, 2). GKS-unstable for  = 2 with unstable mode  = -1, = = 1. When  > 0, GKS-unstable, with unstable mode  = 1, = = 1. We notice that, in the case < 0, dissipation (i.e. ∈(0, 2)), rules out instabilities, as emphasized at the beginning of <cit.>. However, this is true for small enough, and we will see that on actual computations, interactions between boundaries and the size of the problem can stabilize/destabilize. The result for = 1 is interesting, for a Neumann boundary condition applied to a leap-frog scheme is unstable <cit.>. 
In our case, the boundary condition resulting from imposing a Neumann boundary condition on the incoming distribution function ^+ is different from the same condition on the conserved variable . Finally, employing the outflow boundary conditions when the boundary is unfortunately an inflow, that is > 0, entails instability regardless of the numerical dissipation. We now tackle the analysis of (<ref>). Regrettably, the procedure shall not give fully analytical results but is complemented by numerical studies. The transformed boundary equation reads ( + 1) ( 12 (2- + ) + 12(2- - )()^2) + … = 0, were dotted terms are not made explicit for the sake of compactness. In the term listed in (<ref>), we recognize part of the bulk equation (<ref>). Replacing this part gives a first order equation in (), providing () = ( + 1)^2 - ( + 1)-2^2/(+1)^2 - ( + 1)^2 - 2( + 1) + (( + 1) - 2) + + 1. Inserting this expression into the bulk equation (<ref>) gives a sixth-order polynomial equation in . Explicit computations show that = 1, ±(1-) are three out of the six roots of this polynomial. Inserting these values into (<ref>) gives (1) = 1, (1-) = (, ), ( - 1) = - (, ). * When < 0, the roots from (<ref>) correspond, for every unstable candidate = 1, ±( 1-), to rather than . These modes are consequently stable—as they do not correspond to admissible eigensolutions. We shall have to check the remaining three roots to conclude on stability. * When > 0, we see that the outcome of (<ref>) corresponds to a full set of admissible eigensolutions. Hence the boundary condition is GKS-unstable, and we can stop the study here. From now on, we assume that we are in the case < 0, where we hope proving GKS stability. Factoring the roots = 1, ±(1-) out from the sixth-order polynomial equation, we are left with a third-order equation, which reads, excluding the trivial case where = -1: 4 ^3- (^2 + 2 - - 2)^2 + (2 ^2^2 - ^2 - 2 ^2 + 2 - 4 ^2 - 2 + 7 - 2) -2 ^3 + 4 ^2 - 2 ^3 - 2 + 4 ^2 - 2 = 0. Unfortunately, is it hard to analytically study the roots of (<ref>) for general and . We can numerically check that the condition is stable for given values of , see <Ref>. Here, the “solitary” real root _3 always lays inside the unit disk without causing instability. The same happends for the two complex conjugate roots _1 and _2 for small ||. When their modulii exceed one for || close to one, we see that the modulii of the corresponding 's obtained by (<ref>) are larger than one, so they are indeed not , but . The boundary condition is thus stable. §.§ Numerical simulations Knowing whether boundary conditions are GKS-stable or unstable, we propose a numerical study to assess the relevance of these analyses applied to the actual behavior of schemes. We simulate using the original scheme on a domain of length = 1. The initial datum is _^±, 0 = 12(1±)(-1)^. This checkerboard datum has been chosen to develop possible instabilities quickly. On the right boundary, ≡ 0. We start using Courant number = -1/2. * Fine resolution, using = 1000 and = 5. In <Ref>, we plot the final solution and the absolute value of the solution at the boundary cell 0 as function of time, for ∈ (0, 2]. The results are in agreement with the GKS stability analysis: both (<ref>) and (<ref>) are stable for any value of , whereas (<ref>) for ≥ 2 is unstable for = 2 and stable otherwise. * Coarse resolution, using = 30 and = 100. The results in <Ref> are provided for < 2. Again, both (<ref>) and (<ref>) are stable. 
Still, quite surprisingly, we see that (<ref>) for ≥ 2 can be unstable for values of < 2 quite close to two, which is not predicted by the GKS stability analysis. Moreover, this instability seems to be exponential in . This phenomenon will be studied in <Ref> using the spectra of the scheme matrix, which must replace the GKS theory when the size of the problem is small. Even if this is not much physically funded, we consider = 1/2. We employ a coarse resolution = 30 and a final time = 100. The results are given in <Ref> using = 1.6. Somewhat surprisingly, we see that the cases (<ref>) and (<ref>) remain stable—contrarily to what GKS instability predicts. Notice that <cit.> has already pointed out that GKS-unstable schemes can be L^2 stable. For (<ref>) with ≥ 2, we observe instabilities as expected, with growth ∝^ - 1 for large . These facts will be explained—as well—using spectra. §.§ Matrix method and pseudo-spectra In <Ref>, simulations provided surprising results with respect to the GKS stability analysis. In order to explain this behavior, we use the so-called “matrix method” <cit.>, where the numerical scheme is represented by a matrix, whose powers we would like to bound. The main difficulty is to do so uniformly in . We also provide plot of the so-called pseudo-spectra <cit.>. Let us first introduce different matrices. * Original matrix. The iteration matrix associated with the original lattice Boltzmann scheme—endowed with boundary conditions—can be written both using the distribution functions ^± or based on and . Introducing ^ = (_0^+, , …, _-1^+, , _0^-, , …, _-1^-, ), we write ^ + 1 = ^, with = [ [ ++ +-; -+ – ] ] ∈2, whose blocks are given in <Ref>. The matrix is a block banded quasi-Toeplitz. When using (<ref>) and for large enough compared to , the matrix is singular, both due to the outflow boundary condition and the inflow boundary condition. We cannot use the Perron-Frobenius theorem to characterize the spectrum of since the matrix features negative coefficients due to the inflow condition, and is reductible. * Corresponding matrix. Away from initialization, introducing the solution vector ^ = (_0^, …, _-1^, _0^ - 1, …, _-1^ - 1) spanning two time-steps, we can write the corresponding scheme—taking vanishing boundary data—as ^ + 1 = ^, with = [ [ ; ] ] ∈2. The matrices and are quasi-Toeplitz (at most tridiagonal in their Toeplitz part) given by = [ _0 _1 _2 ⋯ _-1; _-1 0 _1 ; ⋱ ⋱ ⋱ ; _-1 0 _1; 0 ⋯ 0 0 0 ], = [ _0 _1 _2 ⋯ _-1; 0 _0 0 ; ⋱ ⋱ ⋱ ; 0 _0 0; 0 ⋯ 0 0 0 ], so that could be grandiloquently called “block banded quasi-Toeplitz companion”. The explicit determination of the eigenvalues of is out of reach. Indeed, using the formula for the determinant of a block matrix and a row permutation, we come to (2 - ) = (( - ) - ) = (^2 - - ). This emphasizes that we face a quadratic eigenvalue problem <cit.>. For has real entries all its eigenvalues are either real or show up as pairs of conjugate complex numbers. In this case, the singularity of comes from the inflow boundary condition but generally not from the outflow. An iterative procedure <cit.> to study whether the roots belong to the unit sphere, or—upon performing a change of variable <cit.>—using the Routh-Hurwitz criterion, are too involved and just prescribe ∈ (0, 2]. 
For this will be useful in what follows, we can see as a perturbation of the matrix ∈2 with the same block structure as in (<ref>), with and replaced by their Toeplitz versions and , which boils down to replace the first and last row with the patterns that are inside the matrices. This reads = + 1 + , where (_0, _1 -_1, _2, ⋯,_-1, _0 - _0, _1, ⋯,_-1) and (0, ⋯, 0, -_-1, 0, 0, ⋯, 0, -_0). Since is a finite-rank perturbation of the block banded Toeplitz companion matrix , which can be seen as an operator in the limit →+∞ <cit.>, in this limit, the spectrum of will be the one of plus possibly a set of isolated points, stemming from boundary conditions, <cit.>. * Finally, we also consider the block circulant banded Toeplitz companion version , where boundary conditions are replaced by periodicity. With all these different matrices at our disposal, one first question concerns the difference between and in terms of spectra. Which one do we need to consider? Discrepancies, if any, come from non-trivial boundary conditions, because in the periodic framework, everything is the same <cit.>. From <Ref>, we see that the only difference to expect may concern the eigenvalue ≈ 1- which might be in the spectrum of without being in that of . Taking for example the case (<ref>) with < 0, the reason for this eigenvalue is that, <cit.>, this boundary condition leads instabilities on the non-conserved variable when = 2. However, for our sole interest in on , we would like to filter this instability on which does not impact . Another difference, as pointed out in the previous <Ref> and <ref>, may show up in terms of multiplicity of the eigenvalue = 0, which replaces ≈ 1- when the latter disappears. Overall, in the sequel, we focus on the spectrum of . We now compare the spectrum of to the ones of and (and their asymptotic limit for → +∞, that we shall characterize), see <Ref>. Even with = 10, the spectrum of is very close to the asymptotic one. This is true to a lesser extent for , because separation between boundary independent spectrum (asymptotically coinciding with the one of ) and boundary dependent one is still weak. This means that there is still a residual coupling between boundaries. As expected, the asymptotic spectrum of encloses[This is not generally true at finite .] the asymptotic spectrum of <cit.>. We identify three spectral regimes for , according to : * ∼ 1, called unclustered, where eigenvalues are still not perfectly separated into two classes (boundary independent and dependent). Moreover, there can be a strong interaction between boundaries. * ∼ 10, called clustered non-asymptotic, where eigenvalues have separated into two classes but can still be slightly away from the limit structure and explains discrepancies from the GKS theory. * → +∞, called clustered asymptotic. There is a strong correspondence between the decoupling of boundaries in this regime and the fact that the GKS theory allows—provided that be small—to consider each boundary on its own. §.§.§ Asymptotic spectra for → +∞ Asymptotic spectra as →+∞ are way easier to characterize that those for finite . We start by the scheme matrix in presence of periodic boundary conditions . The spectrum of is given by () = {12 ( (_-1e^2π i k + _1e^2π i ( - 1) k ) ±√(( _-1e^2π i k + _1e^2π i ( - 1) k)^2 + 4_0) ) for k ∈0 - 1}. Its asymptotic spectrum as → +∞ reads () = {12 ( (_-1e^-i ϑ + _1e^i ϑ ) ±√(( _-1e^-i ϑ + _1e^i ϑ)^2 + 4 _0) ) for ϑ∈ [-π, π] }. 
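A quick numerical sanity check of this proposition consists in assembling the periodic companion matrix explicitly and comparing its eigenvalues with the closed-form expression. The sketch below does so for hypothetical values of the relaxation parameter and Courant number; the sign convention chosen for the Fourier factor only relabels k and does not change the spectrum as a set.

```python
import numpy as np

def periodic_companion(N, a_m1, a_p1, a_0):
    """Block companion matrix [[A, a_0*I], [I, 0]] of the bulk recurrence
    with periodic boundary conditions."""
    A = np.zeros((N, N))
    for j in range(N):
        A[j, (j - 1) % N] = a_m1
        A[j, (j + 1) % N] = a_p1
    B = a_0 * np.eye(N)
    return np.block([[A, B], [np.eye(N), np.zeros((N, N))]])

def predicted_spectrum(N, a_m1, a_p1, a_0):
    """Closed-form eigenvalues: roots of  z^2 - g_k z - a_0 = 0  with
    g_k the Fourier symbol of the circulant spatial part."""
    k = np.arange(N)
    g = a_m1 * np.exp(-2j * np.pi * k / N) + a_p1 * np.exp(2j * np.pi * k / N)
    s = np.sqrt(g ** 2 + 4.0 * a_0 + 0j)
    return np.concatenate([(g + s) / 2.0, (g - s) / 2.0])

omega, courant, N = 1.5, -0.5, 32                # hypothetical parameters
a_m1 = 0.5 * (2.0 - omega * (1.0 - courant))     # assumed reconstruction
a_p1 = 0.5 * (2.0 - omega * (1.0 + courant))
a_0 = omega - 1.0

eigs = np.linalg.eigvals(periodic_companion(N, a_m1, a_p1, a_0))
pred = predicted_spectrum(N, a_m1, a_p1, a_0)
mismatch = max(np.min(np.abs(eigs - p)) for p in pred)
print("max deviation between computed and predicted eigenvalues:", mismatch)
```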
It will be interesting to consider the special case = 2, where the round bean-like shapes described by (<ref>) visible on <Ref> degenerate to become bent segments. We obtain ()|_ = 2 = { -i sin( ϑ) ±√( -^2sin^2( ϑ) + 1) for ϑ∈ [-π, π] }. We follow the proof of <cit.>. Analogously to (<ref>), we face (2 - ) = (^2 - A^∘_ - B^∘_), where the matrix ^2 - A^∘_ - B^∘_ is a circulant banded Toeplitz matrix as sum of matrices with such property. Using a well-known result on the determinant of a circulant matrix, we obtain the characteristic equation (2 - ) = ∏_k = 0^ - 1 ( ^2 - (_-1e^2π i k + _1e^2π i ( - 1) k ) - _0 ) = 0. Solving each second-order equation within (<ref>), we deduce the spectrum given by (<ref>). The asymptotic spectrum as → +∞ can be found replacing e^2π i k/ with e^-i ϑ and e^2π i ( - 1) k/ with e^i ϑ, for ϑ∈ [-π, π], thus giving (<ref>). As far as is concerned, we are only able to characterize the asymptotic spectrum with the following result. Recall that the asymptotic spectrum of is also the part of that of which does not depend on boundary conditions. Under the stability conditions given by <Ref>, the asymptotic spectrum of is as follows. * If _-1 = 0, thus for = 21-∈ [1, 2] (and ∈[-1, 0]), then () = {±√(1+1-) = ±√( - 1)∈}, thus the limit set is made up of isolated points on the real axis. * Similarly, When _1 = 0, thus for = 21+∈ [1, 2] (and ∈[0, 1]), then () = {±√(1+1-) = ±√( - 1)∈}, * Otherwise () = {1/2 ( ( _-1√((, ))e^-i ϑ + _1√((, ))e^i ϑ ) ±√( ( _-1√((, ))e^-i ϑ + _1√((, ))e^i ϑ )^2 + 4_0) ) for ϑ∈ [0, π] }, where √(·) indicates the principal square root. In the case where the asymptotic spectrum is not reduced to two points, it is in general difficult to describe. An exception occurs when = 2, where ()|_ = 2 = { - icos( ϑ) ±√( -^2 cos^2( ϑ) + 1) for ϑ∈ [0, π] }, where the right-hand side is simply another parametrization of the limit profile in (<ref>). This entails that—from the spectral standpoint, in the limit →+∞, and when = 2—the periodic case behaves like the Toeplitz case . Coming back to generic , we can also make Taylor expansions of (<ref>) in the vicinity of ϑ = π2, yielding ±√( - 1) + _-1√((, )) (π2 - ϑ)+ (π2 - ϑ)^2, thus we see that the local shape of this limit set highly depends on the sign of - 1 and (, ) = _-1/_1. We follow <cit.>. This part of the spectrum is associated with distinct _a, _b∈ of same modulus |_a| = |_b|, both fulfilling the bulk characteristic equation (<ref>). Therefore, there exist a phase shift ϑ∈ [0, π] and ∈ such that || = |_a| = |_b|, so that _a = e^i ϑ and _b = e^-i ϑ. Inserting _a and _b into (<ref>) provides () = _-1^-1e^-i ϑ +_1e^i ϑ, () = _-1^-1e^i ϑ +_1e^-i ϑ, where we stress the fact that the same eigenvalue corresponds to different _a and _b. Solving and simplifying various factors yields _1 ^2 = _-1. One can easily see that = ±√((, )), if _1≠ 0, undefined if _1 = 0, where in the first case, we have ∈ whenever 0<<11+|| and ∈ i for 2/1+|| < < 2, see <Ref>. * When _-1 = 0, hence = 0, the asymptotic spectrum is given by ^2 + (1-) = ^2 - 1+/1- = 0, which gives (<ref>). * When _1 = 0, hence is not defined. We obtain the spectrum given in (<ref>). * Otherwise. We consider—without lack of generality—the principal square root with a plus sign in front, hence = √((, )). Hence, solving a quadratic equation, the spectrum satisfies (<ref>). We now go to study the isolated points due to the outflow boundary condition, thus excluding the eigenvalue = 0 and those unbound with boundary conditions. 
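At finite N, such boundary-induced isolated eigenvalues can be spotted by assembling the companion matrix E = [[A, B], [I, 0]] with Toeplitz interior rows and boundary rows overwritten, and computing its spectrum directly. In the sketch below, the boundary row is a simplified illustrative extrapolation rather than one of the exact schemes derived earlier, and the parameter values are assumptions.

```python
import numpy as np

def companion_matrix(N, a_m1, a_p1, a_0, b_row, c_row):
    """E = [[A, B], [I, 0]] acting on the stacked vector (u^n, u^{n-1}),
    with interior rows of A equal to (a_m1, 0, a_p1) and of B equal to a_0.
    b_row, c_row (length N) are the outflow boundary rows; the inflow row
    is left at zero, the Dirichlet datum being treated as a source term."""
    A = np.zeros((N, N))
    B = np.zeros((N, N))
    for j in range(1, N - 1):
        A[j, j - 1] = a_m1
        A[j, j + 1] = a_p1
        B[j, j] = a_0
    A[0, :] = b_row
    B[0, :] = c_row
    return np.block([[A, B], [np.eye(N), np.zeros((N, N))]])

omega, courant, N = 1.98, -0.5, 30               # hypothetical parameters
a_m1 = 0.5 * (2.0 - omega * (1.0 - courant))     # assumed reconstruction
a_p1 = 0.5 * (2.0 - omega * (1.0 + courant))
a_0 = omega - 1.0

b_row = np.zeros(N); b_row[0], b_row[1] = 2.0, -1.0   # illustrative extrapolation row
c_row = np.zeros(N)

E = companion_matrix(N, a_m1, a_p1, a_0, b_row, c_row)
eigs = np.linalg.eigvals(E)
print("spectral radius:", np.abs(eigs).max())    # > 1 hints at instability at this finite N
```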
Under the stability conditions given by <Ref>. We have the following. * For = 1, the asymptotic spectrum of created by (<ref>) is given by () ∖ (() ∪{0} ) = ∅, if = 2, {-1}, if ∈ (0, 2)  and <0, {1}, if ∈ (0, 2)  and >0. * For = 2, the asymptotic spectrum of created by (<ref>) is given by () ∖ (() ∪{0} ) = ∅, if = 2, {-1}, if ∈ (0, 2)  and <0, {1, 1-}, if ∈ (0, 2)  and >0. * For ≥ 3, the asymptotic spectrum of created by (<ref>) includes () ∖ (() ∪{0} ) ⊃∅, if = 2, {-1}, if ∈ (0, 2)  and <0, {1, 1-}, if ∈ (0, 2)  and >0. This is coherent with <Ref>. We follow the procedure by <cit.>, which—as observed by these authors—is quite close to the GKS analysis that we have already performed. * Consider (<ref>). According to <Ref>, the candidate eigenvalues are = 1, - 1. We have, by virtue of <Ref>: = 1, = 1 (boundary and bulk), = (, ) (bulk), = - 1, = -(, ) (boundary and bulk), = -1 (bulk). In order to find the boundary dependent asymptotic spectrum, according to <cit.>, we have to compare—for the same eigenvalue —the boundary and exclusively bulk 's, so that the former be strictly smaller than the latter in modulus. * = 1, we want |1| = 1 < |-(, )| = |(, )|. By virtue of <Ref>, when < 0, the inequality cannot be fulfilled, whereas if > 0, then the inequality is met for ∈ (0, 2). * = - 1, we want |-(, )| = |(, )| < |-1| = 1. Again by virtue of <Ref>, when < 0, the inequality is met for ∈ (0, 2), whereas if > 0, then the inequality cannot be fulfilled. * Consider (<ref>) for ≥ 2. According to <Ref>, the candidate eigenvalues are = 1,±(1-). For ≥ 3, we cannot guarantee that these are the only candidates, but it is quite likely that this is indeed the case. We are just left with the last one. = 1-, = 1 (boundary and bulk), = (, ) (bulk). Proceeding as in the previous case yields the claim. §.§.§ A link between the reflection coefficient and the transition to →+∞ Let us finish with a result that connects the reflection coefficient introduced by Trefethen <cit.>—which stands in the realm of GKS theory—with the number of eigenvalues of relative to the outflow boundary condition that tend to a single point in the asymptotic spectrum. The reflection coefficient stems from inserting the general solution of the resolvent equation (<ref>) into the transformed boundary condition (<ref>), being defined by the ratio between the amplitude () of the mode radiating from the boundary towards the inner domain, and the amplitude () of the one radiating from the bulk towards the boundary. This gives ()()/() = - ∑_ = 0^ - 1_()^ - ^-1∑_ = 0^ - 1_()^/ - ∑_ = 0^ - 1_()^ - ^-1∑_ = 0^ - 1_()^. The reflection coefficient will be interesting in particular in the limit →, where is an isolated point in the asymptotic spectrum prescribed by <Ref>. Let ∈∖{ 0} be an isolated point of the asymptotic spectrum of prescribed by <Ref>. Assume that the stability conditions given by <Ref> are satisfied. Take ϵ > 0 small. Then, for large enough, the number of eigenvalues of , inside ϵ, is given as follows. * When _-1 = 0, thus for = 21-∈ [1, 2] (and ∈[-1, 0]), the number of eigenvalues of , counted with their multiplicity, inside ϵ, is the order of the zero of the function ↦^2 - _0 - _0/( - i√(-1))( + i√(-1)) at = . 
* Otherwise, the number of eigenvalues of , counted with their multiplicity, inside ϵ is the order of the zero of the function ↦ -∑_ = 0^ - 1_()^ - ^-1∑_ = 0^- 1_()^, at = , where () is a solution of the characteristic equation (<ref>) being * if _1 = 0, thus for = 21+∈ [1, 2] (and ∈[0, 1]), the only root () = _-1()^-1 = (2-)()^-1; * otherwise, the one such that it exists ∈∂ϵ (and we know that it certainly exist) such that lim_→+∞1/√(_-1_1)-1(()/2√(_-1_1))/(()/2√(_-1_1)) = ()/_-1, with being the Chebyshev polynomial of second kind of degree . The claim of <Ref> may look overly technical and maybe off-putting. However, what it heralds is rather straightforward: if close to , the root () in (<ref>) is indeed (), and the numerator in the reflection coefficient (<ref>) does not vanish, then the order of the pole of the reflection coefficient () at readily provides the number of eigenvalues of tending towards as →+∞. This defines a sort of multiplicity of the asymptotic eigenvalue , and easily predicts the growth rate of the solution when || = 1. The proof of <Ref> being quite long and technical, we provide the main steps and ideas beforehand. * Since the eigenvalues of a matrix are the roots of the characteristic polynomial, we use the Cauchy's argument principle from complex analysis—applied to the characteristic polynomial of —to count the number of eigenvalues of this matrix within the ball. This commands the study of the trace of the resolvent of . * The clustering and separation of the eigenvalues of when increases allows to cancel several contributions, relative to boundary independent eigenvalues and eigenvalues relative to the inflow. * Since boundary conditions enter into the resolvent of as rank-one perturbations of , (<ref>), we apply the Sherman-Morrison formula to separate different contributions. * Using the formula for the inverse of a block matrix made up of four blocks, the resolvent of can be expressed in terms of the inverse of a tridiagonal Toeplitz matrix. * The entries of the inverse of a tridiagonal Toeplitz matrix can be expressed in terms of Chebyshev polynomials of second kind, of which we can compute limits as increases. Let ∈∖{ 0} be an isolated point of the asymptotic spectrum of and ϵ > 0. The number of eigenvalues of inside ϵ, counted with their multiplicity, see <cit.>, is given by 1/2π i∮_∂ϵ_(2 - )/(2 - ) = 1/2π i∮_∂ϵ((2 - ))/(2 - ) = 1/2π i∮_∂ϵ((2 - )^-1). Using the Sherman-Morrison formula <cit.> ∮_∂ϵ((2 - )^-1) = ∮_∂ϵ((2 - - )^-1) + ∮_∂ϵ((2 - - )^-21)/1-(2 - - )^-11 =∮_∂ϵ((2 - - )^-21)/1-(2 - - )^-11 , where the transition from the second to the last row of (<ref>) comes from assuming that the considered 's are large enough for the chosen ϵ so that no eigenvalue linked with and the inflow is in ϵ. In the remaining integral of (<ref>), the only potential source of poles that would increment the count is in the denominator, which reads—using the Sherman-Morrison formula once again 1-(2 - - )^-11 = 1-(2 - )^-11 -(2 - )^-1(2 - )^-11/1-(2 - )^-1. We anticipate that—in (<ref>)—the last term on the right-hand side is a coupling term between eigenvalues relative to opposite boundaries, and thus shall eventually vanish whenever →+∞. Recalling that = ( - 1) and using the formula for the inverse of a block matrix made up of four blocks, we gain (2 - )^-1 = [ [ (() - )^-1 (() - )^-1(-1)^-1; ^-1(() - )^-1 ^-1 + ^-1 (() - )^-1 (-1)^-1 ] ]. The advantage of (<ref>) is that () - is a tridiagonal Toeplitz matrix, for which several results have already been demonstrated. 
Let ∈1. Equation (<ref>) entails that (2 - )^-1 = ∑_ = 1^_ - 1 (() - )^-1 - _1 2 (() - )^-1 +^-1 ( ∑_ = 1^_ - 1 (() - )^-1 - _0 1 (() - )^-1 ), and (2 - )^-1 = - _-1-1 (() - )^-1 - ^-1_0 (() - )^-1. When either _-1 or _1 equals zero[They are not simultaneously zero except in the trivial context where = 0 and = 2.], the matrix () - is indeed lower or upper triangular, as well as its inverse. Exact computations provide the following results. * If _-1 = 0, whence _1 = 2-∈ [0, 1], we gain (() - )^-1 = _1^ - ()^ - - 1χ_≥, and thus deduce that (2 - )^-11 = 0 and 1 - (2 - )^-1 = 1 + ^-1_0()^-1 = ()^-1. The first equality above means that—even at finite —the inflow does not impact the number of isolate eigenvalues linked with the outflow. Since we are integrating away from the origin, (<ref>) becomes 1-(2 - - )^-11 = 1-(2 - )^-11 = ^-1()^-1(^2 - _0 - _0) =^2 - _0 - _0/( - i√(-1))( + i√(-1)). Back into (<ref>), we finally obtain: ∮_∂ϵ((2 - )^-1) =∮_∂ϵ( - i√(-1))( + i√(-1))((2 - - )^-21)/^2 - _0 - _0. * If _1 = 0, we gain (() - )^-1 = _-1^-()^ - - 1χ_≥, thus (2 - )^-1 = 0 and 1 - (2 - )^-1 = 1 + ^-1_0()^-1 = ()^-1. Eventually (<ref>) becomes 1-(2 - - )^-11 = 1-()^-1 ( ∑_ = 0^ - 1_ (_-1()^-1 )^ + ^-1∑_ = 0^- 1_ (_-1()^-1 )^ - ^-1_0 ). Recalling that the bulk characteristic equation in () reads () = _-1()^-1, we compute 1-(2 - - )^-11 = ()/_-1 ( -∑_ = 0^ - 1_()^ - ^-1∑_ = 0^- 1_()^ ) + 1 + ()/_-1 (^-1_0 - )_=0, and finally obtain ∮_∂ϵ((2 - )^-1) = ∮_∂ϵ_-1((2 - - )^-21) /() ( -∑_ = 0^ - 1_()^ - ^-1∑_ = 0^- 1_()^ ) . We can forget about () since it never vanishes close to . Let us now consider the case where both _-1, _1≠ 0. In this case, generalizing <cit.> to the case the sign of _-1_1 can both be positive or negative, we obtain (() - )^-1 = _1^-/(√(_-1_1))^- + 1-1(() /2√(_-1_1)) -(() /2√(_-1_1))/(() /2√(_-1_1)), ≤, _-1^-/(√(_-1_1))^ - + 1-1(() /2√(_-1_1)) -(() /2√(_-1_1))/(() /2√(_-1_1)), > , for () ≠ 0. Here is the Chebyshev polynomial of second kind of degree . Since we have () = ( + √(^2 - 1))^ + 1 - ( - √(^2 - 1))^ + 1/2√(^2 - 1), and we can easily see that {∈ :  |±√(^2 - 1)|= 1 } = [-1, 1], {∈ :  |±√(^2 - 1)|>1 } = {∈ : ∓<0}∪{∈ i : ∓<0}∖[-1, 1], {∈ :  |±√(^2 - 1)|<1 } = {∈ : ∓>0}∪{∈ i : ∓>0}∖[-1, 1], we deduce that lim_→+∞ |()| = +∞, ∈∖ (-1, 1), does not exist, ∈ (-1, 1). We would like to investigate the fact that we are allowed to consider the previous limit in the setting that we are currently in—namely that of a contour integral on ∂ϵ. In particular, we would like to know if we are in the latter situation in (<ref>) only on a non-negligible set. Let ∈ (ϵ e^i) /2√(_-1_1) = ϵ + 1-/ϵ/2√(_-1_1)cos() + iϵ + - 1/ϵ/2√(_-1_1)sin(). * _-1_1 > 0, hence we would like to avoid 0 < ϵ^2 = 1-, which never happen if ≥ 1. When < 1, we can always avoid this situation by taking ϵ small enough. * _-1_1 < 0, essentially the same way of reasoning yields the same conclusions. We can therefore discard the case where we would integrate along (-1, 1), where the zeros of any Chebyshev polynomial dwell, on a non-negligible set and thus safely consider limits of Chebyshev polynomials of second kind when their degree tends to infinity. Let us consider the terms in (<ref>). 
(2 - )^-1 = ( _1/√(_-1_1) )^1/(() /2√(_-1_1)) [ ∑_ = 1^_ - 1(√(_-1_1))^-1/_1^-1(() /2√(_-1_1)) - √(_-1_1)/_11(() /2√(_-1_1)) + ^-1 ( ∑_ = 1^_ - 1(√(_-1_1))^-1/_1^-1(() /2√(_-1_1)) - _0/_1 ) ], where the terms enclosed within square brackets do not depend on , and (2 - )^-11 =- ( _-1/√(_-1_1) )^1/(() /2√(_-1_1)) ( √(_-1_1)/_-1 + ^-1_0/_-1 ), and finally (2 - )^-1 =-_-1_1/(√(_-1_1))^2-2(() /2√(_-1_1))/(() /2√(_-1_1)) - ^-1_0/√(_-1_1)-1(() /2√(_-1_1))/(() /2√(_-1_1)). Putting all together, we obtain lim_→+∞-(2 - )^-1(2 - )^-11/1-(2 - )^-1 = lim_→+∞ ( _-1_1/√(_-1_1) )^1/(() /2√(_-1_1))^2 [⋯]/1+_-1_1/(√(_-1_1))^2-2(() /2√(_-1_1))/(() /2√(_-1_1)) + ^-1_0/√(_-1_1)-1(() /2√(_-1_1))/(() /2√(_-1_1)), where the quantity [⋯] does not depend on . The numerator in the overall fraction tends to zero, for the Chebyshev polynomial tends to infinity with its degree →+∞ and we have |_-1_1/√(_-1_1) | = (|_-1 |_≤ 1|_1|_≤ 1)^1/2≤ 1, under the stability conditions by <Ref>. We are left to check the denominator in (<ref>). One can show that, for ≥ 1 finite, we have lim_→+∞ - ()/() = ( + √(^2 - 1))^, when ∈{∈ : <0}∪{∈ i : <0}∖(-1, 1), ( - √(^2 - 1))^, when ∈{∈ : >0}∪{∈ i : >0}∖(-1, 1), does not exist, when ∈ (-1, 1). This expression would make us thinking that the limit function is not continuous at = 0. This is not what happens and the way of separating different formulæ is just a matter of notational convenience, for we indeed have lim_→+∞ - ()/() = ()^ for ∈∂ϵ∖ (-1, 1) where () is the root of ()^2 - 2() + 1 = 0 such that it exists ∈∂ϵ∖ (-1, 1) characterized by the fact that lim_→+∞ - ()/() = ()^. This procedure is made possible since the roots of the polynomial equation depend continuously on , see for example <cit.>. Therefore, the denominator in (<ref>) reads lim_→+∞ 1+_-1_1/(√(_-1_1))^2-2(() /2√(_-1_1))/(() /2√(_-1_1)) + ^-1_0/√(_-1_1)-1(() /2√(_-1_1))/(() /2√(_-1_1)) =1+_1/_-1 ( () /2_1±1/2√( (() /_1 )^2 - 4_-1/_1) )^2 + ^-1_0/_-1 ( () /2_1±1/2√( (() /_1 )^2 - 4_-1/_1) ) =1+ ()/_-1 (^-1_0 + _1()) = _-1/(), where () is the solution of (<ref>) such that it exist (and by all the previous arguments, it certainly exist) ∈∂ϵ so that lim_→+∞1/√(_-1_1)-1(()/2√(_-1_1))/(()/2√(_-1_1)) = ()/_-1. This gives lim_→+∞-(2 - )^-1(2 - )^-11/1-(2 - )^-1 = 0, meaning that the coupling between left and right boundaries progressively fades away. Eventually, we are left with 1-(2 - )^-11 = 1 - ∑_ = 1^_ - 1_-1^ - 1/(√(_-1_1))^-(() /2√(_-1_1))/(() /2√(_-1_1)) + _-1_1/(√(_-1_1))^2-2(() /2√(_-1_1))/(() /2√(_-1_1)) -^-1 ( ∑_ = 1^_ - 1_-1^ - 1/(√(_-1_1))^-(() /2√(_-1_1))/(() /2√(_-1_1)) - _0 1/√(_-1_1)-1(() /2√(_-1_1))/(() /2√(_-1_1)) ), thus lim_→+∞1-(2 - )^-11 = 1 - ∑_ = 1^_ - 1()^/_-1 + _1()^2/_-1 -^-1 ( ∑_ = 1^_ - 1()^/_-1 - _0 ()/_-1 ) = ()/_-1 ( - ∑_ = 0^ - 1_()^ - ^-1∑_ = 0^ - 1_()^ ) + 1 + ()/_-1 (_1() + ^-1_0 - )_=0 = ()/_-1 ( - ∑_ = 0^ - 1_()^ - ^-1∑_ = 0^ - 1_()^ ). This finally yields the desired result: lim_→+∞∮_∂ϵ((2 - )^-1) =∮_∂ϵ_-1lim_→+∞((2 - - )^-21)/() ( - ∑_ = 0^ - 1_()^ - ^-1∑_ = 0^ - 1_()^ ). §.§.§ Plots of spectra and pseudo-spectra As the structure of the asymptotic spectrum of , closely linked with the GKS theory, has been described, it is time to understand the surprising results that we have gathered in <Ref>. These atypical outcomes can be indeed explained by finite dimensional effect with respect to , and understood through the presence of bulges in the pseudo-spectra and eigenvalues outside or on the unit circle. 
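A crude way to produce such pseudo-spectral pictures is to sample the smallest singular value of zI - E on a grid of the complex plane: level sets of this quantity delimit the ε-pseudo-spectrum. The sketch below performs this sampling for an arbitrary square matrix (a random matrix stands in for the scheme matrix so that the snippet runs on its own) and is meant as a plotting aid only.

```python
import numpy as np

def resolvent_levels(E, re, im):
    """Smallest singular value of z*I - E on the grid re x im.
    The level set {sigma_min = eps} bounds the eps-pseudo-spectrum."""
    n = E.shape[0]
    sig = np.empty((im.size, re.size))
    for k, y in enumerate(im):
        for l, x in enumerate(re):
            z = x + 1j * y
            sig[k, l] = np.linalg.svd(z * np.eye(n) - E, compute_uv=False)[-1]
    return sig

# Example on a small random matrix standing in for the scheme matrix E.
rng = np.random.default_rng(0)
E = rng.standard_normal((20, 20)) / np.sqrt(20.0)
re = np.linspace(-1.5, 1.5, 61)
im = np.linspace(-1.5, 1.5, 61)
sig = resolvent_levels(E, re, im)
print("smallest resolvent distance sampled:", sig.min())
# A contour plot of np.log10(sig) against the unit circle reveals the bulges.
```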
Generally, problems arise on the real axis, as spectra are symmetric with respect to it, so we look at perturbations of a target quasi-eigenvalue ∋∉() to be chosen according to the situation at hand. We would like to estimate ϵ so that = + ϵ∈(). Notice that ϵ must a priori be complex to ensure that the previous problem has a solution. Nevertheless, we will look for real approximations of it and their sign. Into the characteristic equation associated with , assuming |ϵ|≪ 1, and using a Taylor expansion, all this yields 0= (( + ϵ)2 - ) = (2 - ) + ((2 - )) ϵ + ϵ^2, thus, at leading order ϵ≈ - (2 - )/ ((2 - )) = - ( ((2 - )^-1))^-1 = - ( ∑_λ∈(2 - )1/λ )^-1 =- ( ∑_λ∈(2 - )1/λ )^-1. The approximation of ϵ in (<ref>) is linked to the harmonic mean of the real part of the eigenvalues of the “reminder” matrix 2 -, and is indeed the first iterate of a Newton's method for the eigenvalue problem, with initial guess is . For will be chosen in the asymptotic spectra, we expect that lim_→ +∞ϵ = 0. Still, the fact that ϵ converges to zero from above or below changes the situation and, when || = 1, the problem is very close to the one of checking <Ref>. The problem we face is linked with the one of distance to singularity, see <cit.> and <cit.>, though we are considering specific perturbations of the form ϵ2. Since we know that what happens is a shift of the eigenvalues, namely ((+ϵ)2 - ) = ϵ + (2 - ), and we would like that ((+ϵ)2 - ) include zero, making +ϵ an eigenvalue of , it is natural to set the equation 0 ≈ϵ + _λ∈(2 - ) |λ|, and hence estimate ϵ≈ - (_λ∈(2 - ) |λ| ). The estimations (<ref>) and (<ref>) generally yield different results. The former is the one giving analytical results in some cases, <Ref>. However, they are compatible in the limit of small |ϵ|, in the sense that in this regime, we expect 2 - be ill-conditioned, resulting in a strong separation between the modulii of its eigenvalues. Thus, in the sum defining (<ref>), we can neglect all the eigenvalues except the one with minimal modulus, and get (<ref>). Observe that both formulæ are in general affected by an odd-even decoupling in , for they target real perturbations ϵ, whereas real eigenvalues close to the could alternatively appear and disappear according to the parity of . In the case = 2, computations are often easier and much more can be said on spectra for small 's, in particular for = 1. This is the aim of the following result, proved in <Ref>. Let = 2 and consider (<ref>). * For = -1 (which is interesting whenever < 0), the estimation from (<ref>) reads ϵ≈ + 2/( + 3) + + 1, for  even, ^2/(^2 + + 2) + ^2 - - 2, for  odd. * For = 1 (which is interesting whenever > 0), the estimation from (<ref>) reads ϵ≈ -/(-1) + + 1. Let us now put all the previous analyses together to reinterpret the results from <Ref>. * < 0. We focus on the case = -1/2, = 1.98, and = 30, where unexpected instability has been observed. Hinting at <Ref>, we remark that the presence (resp. absence) of bulges in pseudo-spectra, protruding outside the unit circle close to = -√( - 1)≈ - 1 (this expression may lack justification at this stage), is coherent—as pointed out in <cit.>, with the GKS instability (resp. stability) of the boundary conditions at hand when = 2. In the stable contexts (<ref>) and (<ref>), the eigenvalues tending towards the left asymptotic cluster do so from inside the unit disk. 
Quite the opposite, in the unstable cases (<ref>) for ≥ 2, these eigenvalues converge to the limit object, located inside the unit disk, from outside, yielding the instability at fixed . More generally, when 21-≤≤ 2, issues arise because non-isolated eigenvalues with negative real part can be outside the unit circle for small 's. Observing that the intersection (with negative real part) of the asymptotic spectrum independent of the boundary condition—given by <Ref>—with the real axis, which is indeed the last place entering the disk, is given by -√(-1), we consider = -√(-1). Thus, if ϵ, which can practically be estimated using either (<ref>) or (<ref>), is such that ϵ > 0, we can infer that we are in the stable situation. On the other hand, if ϵ < 0, we can face instability if the amplitude of ϵ is too large. In particular, we look for ϵ < √( - 1) - 1 = ϵ_lim≤ 0. Of course, this is just a rough estimation by one iteration of the Newton's method from an educated asymptotically-inspired guess . Still, the results from <Ref> corroborate empirical observations, showing that while we expect (<ref>) and (<ref>) be stable essentially for every , the conditions (<ref>) for ≥ 2 become stable starting from a certain ∼ 100. To summarize, we see that the GKS-unexpected instability comes from the boundary condition but in an “indirect” way, namely not through the isolated asymptotic eigenvalue but via an eventually boundary-independent cluster, which still feels the effect of the boundary condition as long as remains small. * > 0. We focus on the case = 1/2, = 1.6, and = 30, where—unexpectedly—stable behaviors have been observed. In <Ref>, all plots feature bulges in the pseudo-spectra, protruding outside the unit circle close to = 1. This indicates that all boundary conditions are GKS-unstable. For the empirically-stable cases (<ref>) and (<ref>), it could seem that = 1 be a simple eigenvalue of . This is almost true, for we obtain 0≠(60-60) ∼ -10^-21 for the former and 0≠(60-60) ∼ 10^-21 for the latter boundary condition. Both these determinants being extremely tiny, one must be aware when evaluating things in floating point arithmetic. From <Ref>, we see that we obtain positive ϵ∼ 10^-14 for the former and negative ϵ∼ -10^-14 for the latter using (<ref>). Though these predictions need not be extremely accurate, they suggest that (<ref>) is theoretically unstable, whereas (<ref>) is stable. Practically, this isolated eigenvalue dwells so close to = 1 that we face stable simulations. Moreover, we see that ϵ goes to zero exponentially with . For (<ref>), <Ref> features two isolated complex conjugate eigenvalues close to the left cluster, which are nothing but (almost) the two complex conjugate roots of (<ref>). The situation is radically different for (<ref>) with ≥ 2. Here, instabilities show up with growth ∝^ - 1. Looking at <Ref>, we see isolated eigenvalues close to the target = 1, some of which are outside the unit disk. They should theoretically generate exponential growth; however, they are so close to = 1 that their effective role is the one of a multiple eigenvalue on the unit circle with multiplicity , explaining the growth ∝^ - 1. This could have been predicted without plotting spectra, taking advantage of <Ref>. Formal computations show that for > 0 and ∈ (0, 2), we have the reflection coefficients () ∼ ( - 1)^- + ( - 1)^-+1, for (<ref>), () ∼ ( - 1)^-1 + 1, for (<ref>), close to = 1, where coefficients have been neglected for compactness. 
A second explanation comes from observing the pseudo-spectra bulges around = 1. In fact, the larger the number of almost-coinciding eigenvalues close to , the “more singular” the resolvent (2 - )^-1 shall be in this neighborhood, due to the almost-superimposed peaks associated to each eigenvalue. Thus, for a given level-set, the bulge around will be larger. This can be also intuitively understood by looking at <cit.>. What this way of reasoning is saying is that when is small, there is no clear decoupling between inner and boundary scheme in terms of spectrum, and the choice of boundary condition has a huge impact on the rest of the spectrum. The larger the extrapolation stencil, the later we converge to the asymptotic spectrum. If we converge lately to this spectrum featuring bulges, we can expect instability as a low-dimensional effect in . § CONCLUSIONS AND PERSPECTIVES We have introduced boundary conditions for a two-velocities monodimensional scheme. Rewriting things in terms of schemes on the variable of interest, we have seen that GKS-stable boundary conditions decrease the order of second-order accurate bulk schemes when initializing at equilibrium, whereas unstable boundary conditions do not. However, suitable boundary source terms bring back GKS-stable boundary conditions to second-order overall accuracy. GKS analyses are not meaningful on coarse meshes, where we have replaced them with the inspection of matrix spectra and pseudo-spectra. The former approach does not work in this framework because matrix spectra do not feature a clear separation between bulk and boundary (isolated) eigenvalues. Finally, we have linked—to the best of our knowledge for the first time—the order of poles of the reflection coefficient with the number of eigenvalues in the scheme matrix tending to isolated points in the limit spectrum. Two possible perspectives are envisioned at this stage. The first one is to extend the work to more involved schemes, such as three-velocities schemes and schemes with an arbitrary number of discrete velocities having a two-relaxation-times (TRT) link-structure with “magic parameters” equal to 1/4. We hope being able to turn these schemes into ones. The second and more ambitious path is to develop a GKS theory directly on schemes without having to transform them. To this end, the modal ansatz must carefully identify unstable modes linked solely with non-conserved moments, which are not interesting, and filter them while considering the conserved moment. § ACKNOWLEDGEMENTS The author thanks Victor Michel-Dansac for the useful advice on the manuscript. This work of the Interdisciplinary Thematic Institute IRMIA++, as part of the ITI 2021-2028 program of the University of Strasbourg, CNRS and Inserm, was supported by IdEx Unistra (ANR-10-IDEX-0002), and by SFRI-STRAT’US project (ANR-20-SFRI-0012) under the framework of the French Investments for the Future Program. alpha § PROOFS §.§ Proof of <Ref> The bulk schemes in (<ref>) and (<ref>) can be found as described in <cit.>, except for the one in (<ref>) for = 1, - 2. In this case, the scheme might be different due to the effect of the boundary condition. This will be verified in the proof of <Ref>. We now check = - 2. The boundary scheme on the distribution functions reads ^+, + 1_ - 1 = ^+, _ - 2 = 12_ - 2^ + 1-2_ - 2^ + 2(_ - 2^), _ - 1^-, + 1 = - _ - 2^+, + ( + 1) = -12_ - 2^ - 1-2_ - 2^ - 2(_ - 2^) + ( + 1). 
Taking sum (<ref>) + (<ref>) and difference (<ref>) - (<ref>) of these equations yields _ - 1^ + 1 = ( + 1), _ - 1^ + 1 = _ - 2^ + (1-) _ - 2^ + (_ - 2^) - ( + 1). The bulk scheme written on the moments reads, for ∈1 - 2 _^ + 1 = 12 (_-1^ + _+1^) + 1-2 (_-1^ - _+1^) + 2 ((_-1^) - (_+1^)), _^ + 1 = 2 (_-1^ - _+1^) + 1-2 (_-1^ + _+1^) + 2 ((_-1^) + (_+1^)). Writing (<ref>) at = - 2, we have to eliminate the term in _ - 3^ - _ - 1^. Using (<ref>) and (<ref>) gives _ - 3^ - _ - 1^ = 2 (_-4^ - 1 - 3_-2^ - 1) + 1-2 (_-4^ - 1 - _-2^ - 1) + 2 ((_-4^ - 1) - (_-2^ - 1)) + (). There is still the unknown on the right-hand side. However, this term can be obtained using (<ref>) written for = - 3, giving 1-2 (_ - 4^ - 1 - _ - 2^ - 1) = _ - 3^ - 2 (_ - 4^ - 1 + _ - 2^ - 1) - 2 ((_ - 4^ - 1) - (_ - 2^ - 1)), which thus gives _ - 3^ - _ - 1^ = _ - 3^ -2 _-2^ - 1+ () = _ - 3^ -2 _-2^ - 1+ _-1^, using (<ref>) in the last inequality. Back into (<ref>) at = - 2, this gives (<ref>) as claimed. §.§ Proof of <Ref> For the initial scheme, hence the modified equations at time = 0, we replace the discrete solution in (<ref>) with a smooth function and, expanding around (, ) = (0, 0), we obtain (omitting the argument) 12 ( 1 - ∑_ = 0^ - 1_ )_=0 - 12 ( ∑_ = 0^ - 1_ -1 )_=0() + ∂_ - 2 ( ∑_ = 0^ - 1_ + 1 ) ∂_ - 2 ( ∑_ = 0^ - 1_ - 1 )∂_() = ^2. Therefore, we obtain ∂_ - 2 ( ∑_ = 0^ - 1_ + 1 ) ∂_ - 12 ( ∑_ = 0^ - 1_ - 1 )∂_() =, where one can easily show—by induction—that ∑_ = 0^ - 1_ = -χ_≥ 2, thus giving the expected result. We now find the modified equations at (, 0) for > 0. The case = 1 is handled using (<ref>) and we do not detail the computation. For the case ≥ 2, we use (<ref>). Putting everything on the left-hand side, we obtain (I)× + (II)×() + ×(III)×∂_ + ×(IV)×∂_ + ×(V)×∂_() = ^2, where we now deal with each of the terms. For the first one: (I) = 12 ( ∑_ = 0 ≠ 1^ - 1 (_ + (1-)_)+ (_1 + 1 + (1-)_1) ) + - 12 ((_0 + _0 ) + (_1 + _1 - 1) + ∑_ = 3^ (_ - 1 + _ - 1) )- 1 = 12 ( ∑_ = 0 ^ - 1_ + (1-)∑_ = 0 ^ - 1_ + 1 ) + - 12 ( ∑_ = 0 ^ - 1_ + ∑_ = 0 ^ - 1_ - 1 )- 1 =2 ( ∑_ = 0 ^ - 1_ - 1 ) = 0. For the second one: (II) = 2 ( ∑_ = 0 ^ - 1_ - 1 ) = 0. For the third one: (III) = 1-2 ((_0 + _0 ) + (_1 + _1 - 1) + ∑_ = 3^ (_ - 1 + _ - 1) ) - 1 =1-2 ( ∑_ = 0 ^ - 1_ + ∑_ = 0 ^ - 1__=2 - 1 ) - 1 = - For the fourth one: (IV) = 12 ( ∑_ = 0^ - 1(_ + (1-)_) + 1 ) + - 12 ((_0 + _0 ) +2 (_1 + _1 - 1) + ∑_ = 3^(_ - 1 + _ - 1) ) = 12 ( ∑_ = 0^ - 1__=-1 + (1-) ∑_ = 0^ - 1_ + 1 ) + - 12 (∑_ = 0^ - 1 (+1) __=0 + ∑_ = 0^ - 1 ( +1)_ - 2 ). Using the fact that ∑_ = 0^ - 1_ = 2χ_≤ 2 and ∑_ = 0^ - 1_ = 2 gives (IV) = 0. Finally (V) = 2 ( ∑_ = 0^ - 1_ - 1 ) = -. Putting all these facts together into (<ref>) and dividing by - yields the desired result. §.§ Proof of <Ref> Let us consider that = + 1, where = (2 - 11). Using the Sherman-Morrison formula under the assumption that 2 - be invertible turns (<ref>) into ϵ≈ -(2 - )/((2 - ))+((2 - )1)· ((2 - ))/(2 - ) - · ((2 - )1). Let us prove the case of = -1. The other is done analogously. Computations start from (<ref>). First of all, we compute (-2 - ) = ^ - 2, if  is even, 0, if  is odd. Let us start with even, because this ensures -2 - be invertible, which simplifies the following way of reasoning. Moreover, one can see that ( (-2 - )) = -( + 2)^-2. We observe that (-2 - )1 = (-^-2, 0, …, -^-2, 0^ elements, ^-2, 0, …, ^-2, 0), 1(-2 - ) = (-^-2, 0, …, 0), 2(-2 - ) = (0, 0, -^-3, 0, …, -^-3, 0, -^-3, -^-2_ elements, 0, 0, ^-3, 0, …, ^-3, 0). 
It is time to employ the particular boundary condition at hand, for which = ( + 1, -, 0, …, 0), so that the previous computations are enough to solve the problem. We obtain (-2 - ) = (-(+1)^ - 2, 0, ^-2, 0, …, ^-2, 0, ^-2, ^-1_ elements, 0, 0, -^-2, 0, …, -^-2, 0). Hence we obtain ((-2 - )1)· ((-2 - )) = (-+3)^2 - 4, and · ((-2 - )1) = - (+1)^ - 2. All together into (<ref>) gives ϵ≈ -^ - 2/-( + 2)^-2 + (-+3)^2 - 4/^ - 2 + (+1)^ - 2, giving the desired result. For odd, we have to repeat similar computations, simply changing ↦ - 12 and posing = ( + 1, - + 1, 0, …). § EXPRESSION OF SCHEME MATRIX FOR THE ORIGINAL SCHEME The blocks in are made up by ++ = 12(2-+) [ _0 _1 ⋯ ⋯; 1 0 ; ⋱ ⋱ ; 1 0 ], +- = 2(1-) [ _0 _1 ⋯ ⋯; 1 0 ; ⋱ ⋱ ; 1 0 ], -+ = 2(1+) [ 0 1 ; ⋱ ⋱ ; 0 1; -1 0 ], – = 12(2--) [ 0 1 ; ⋱ ⋱ ; 0 1; -1 0 ], in the case where we use (<ref>).
http://arxiv.org/abs/2407.02089v1
20240702092558
GPTCast: a weather language model for precipitation nowcasting
[ "Gabriele Franch", "Elena Tomasi", "Rishabh Wanjari", "Virginia Poli", "Chiara Cardinali", "Pier Paolo Alberoni", "Marco Cristoforetti" ]
cs.LG
[ "cs.LG", "physics.ao-ph" ]
GPTCast: a weather language model for precipitation nowcasting
Gabriele Franch, Elena Tomasi, Rishabh Wanjari, Virginia Poli, Chiara Cardinali, Pier Paolo Alberoni, Marco Cristoforetti
=======================================================================================
§ ABSTRACT
This work introduces GPTCast, a generative deep-learning method for ensemble nowcasting of radar-based precipitation, inspired by advancements in large language models (LLMs). We employ a GPT model as a forecaster to learn spatiotemporal precipitation dynamics using tokenized radar images. The tokenizer is based on a Quantized Variational Autoencoder featuring a novel reconstruction loss tailored to the skewed distribution of precipitation, which promotes faithful reconstruction of high rainfall rates. The approach produces realistic ensemble forecasts and provides probabilistic outputs with accurate uncertainty estimation. The model is trained without resorting to randomness; all variability is learned solely from the data and exposed by the model at inference for ensemble generation. We train and test GPTCast using a 6-year radar dataset over the Emilia-Romagna region in Northern Italy, showing superior results compared to state-of-the-art ensemble extrapolation methods.
TLDR for the Computer Scientist / AI researcher
We repurpose the VQGAN + GPT<cit.> combination into a precipitation nowcasting model with novel contributions:
* We introduce a new reconstruction loss (Magnitude Weighted Absolute Error) in the tokenizer that focuses on higher rain rates and improves reconstruction, convergence and stability of the training.
* The GPT model is trained on token sequences representing a spatiotemporal context (H x W x Time). We test different context sizes to analyze the difference in performance for the nowcasting task.
TLDR for the Atmospheric Scientist
* Following the recent trend of applying advanced AI methods to weather forecasting, we adapt a large language model (LLM) architecture to the task of radar precipitation nowcasting.
* We create a discrete representation (tokens) of radar precipitation maps and train an LLM on sequences of such tokens. We show that this setup can produce reliable and realistic ensemble forecasts compared to state-of-the-art ensemble Lagrangian extrapolation models (pySTEPS LINDA).
§ INTRODUCTION AND PRIOR WORK Nowcasting, i.e., short-term forecasting of precipitation up to 6 hours ahead, is a crucial tool for mitigating water-related hazards<cit.>. Sudden precipitation can result in landslides and floods, frequently compounded by strong winds, lightning, and hailstorms, which can seriously jeopardize human safety and damage infrastructure. The foundation of very short-term (up to two hours) precipitation nowcasting systems is the application of extrapolation techniques to weather radar reflectivity sequences<cit.>, which ingest the current and n previous observations T_-n,…,T_-1,T_0 with the aim of extrapolating m future time steps T_1,T_2,…,T_m. These short-term precipitation forecasts are essential for emergency response when released in a timely manner and properly communicated via early warning systems<cit.>. The main contenders to extrapolation techniques are numerical weather prediction (NWP) models, which can be used to forecast the probability and estimate the intensity of precipitation across large regions, but their accuracy is limited at smaller geographical and temporal scales<cit.>.
Convective precipitation, which produces high rainfall rates and small cells, is especially difficult to forecast correctly for NWP models<cit.>. For these reasons, operational weather agencies recognize the great value offered by short-term extrapolation forecasts and make heavy use of statistical and, more recently, data-driven models that utilize the most recent weather radar observations for nowcasting<cit.>. Lagrangian extrapolation is the most well-known method for nowcasting precipitation<cit.>. It generates motion vectors to forecast the future direction of precipitation systems by applying optical-flow algorithms to a series of radar-derived rain fields. However, this approach becomes less accurate for increasing lead time, particularly in convective situations where precipitation could increase or decrease quickly. Several alternative techniques have been studied to overcome these constraints, like the seamless integration between nowcasting and NWP forecasts<cit.> and the integration of orography data<cit.>. Other, more sophisticated nowcasting methods improve the Lagrangian approach by generating ensemble nowcasts and preserving the precipitation field's structure. These sets of multiple forecasts aid in the assessment of forecast uncertainty by presenting multiple future scenarios. The most widespread example of this approach is the Short-Term Ensemble Prediction System (STEPS)<cit.>. The most recent advancements in nowcasting precipitation have seen the application of data-driven methods and, more prominently, of Deep Neural Networks (DNNs) and Generative AI techniques to enhance forecast accuracy and realism. Deterministic DNNs have been instrumental in predicting the dynamics of precipitation, including its development and dissipation, overcoming one of the major shortcomings of extrapolation methods<cit.>. However, deterministic models tend to produce less precise forecasts over time due to increasing uncertainty that manifests as a forecast field that smooths progressively with the lead time. Similarly to Lagrangian extrapolation, to overcome this limitation, ensemble deep learning methods have been introduced. Generative methods have significantly improved the generation of realistic precipitation fields beyond deterministic average predictions. The forefront of this technology is embodied in models that employ techniques such as Generative Adversarial Networks (GANs)<cit.>, that enable more accurate and detailed precipitation forecasts by learning to mimic real weather patterns closely, and more recently by Latent Diffusion models<cit.>, that can not only generate realistic rainfall forecasts but also produce reliable ensembles that can provide accurate uncertainty quantification of future scenarios. Many of these techniques were originally born in the field of computer vision and subsequently adapted to the weather forecasting domain with resounding success<cit.>. In this study, we take inspiration from the successful trend of applying Large Language Models (LLMs) architectures<cit.> born in the field of Natural Language Processing (NLP) to other disciplines<cit.>, including medium range weather forecasting domain<cit.>, intending to transfer this knowledge to the nowcasting domain. To do so, in our work, we follow a strategy that mimics the setup of natural language processing: a tokenization step, where an input tokenizer splits and maps the input to a finite vocabulary, and an autoregressive model trained on the tokens produced by the tokenizer. 
We show that such an approach produces realistic and reliable ensemble forecasts. Given the different characteristics of our input data compared to LLMs (i.e., spatiotemporal precipitation fields vs. texts or images), our adaptation introduces several novel contributions instrumental to our task. § MODEL ARCHITECTURE There are two main components of our approach, which we call : * Spatial tokenizer: An image compression and discretization model that learns to map patches of the radar image from/to a finite number of possible representations (tokens). The learned codebook of tokens can be used to express a compact representation of any precipitation field. The tokenizer thus has a dual role: learning how to compress and decompress the information in the input image and how to discretize the compressed information (i.e., learn an optimal codebook). * Spatiotemporal forecaster: A model trained on token sequences to causally learn the evolutionary dynamics of precipitation over space and time. Given a tokenized spatiotemporal context (a compressed precipitation sequence), the model outputs probabilities over the codebook for the next expected token for the context. The output probabilities can be leveraged for ensemble generation. The two components of the model are trained independently in cascade, starting with the tokenizer. The choice of this dual-stage architecture unlocks a number of desirable properties that are instrumental in meeting many requirements of operational meteorological services when adopting a nowcasting system. The two most important characteristics are realistic ensemble generation and accurate uncertainty estimation. Our architecture provides both realistic ensemble generation capabilities and probabilistic output at the spatiotemporal (token) level. Another notable feature of is its fully deterministic architecture, eliminating the need for random inputs during training or inference. This ensures that all model variability is derived solely from the training data distribution. By learning a discretized representation in the tokenizer, the forecaster can output a categorical distribution over vocabulary, modeling a conditional distribution over possible data values. This approach, unlike continuous variable regression, inherently enables probabilistic outputs. In contrast, all other generative deep learning models<cit.> require random input during training and inference to promote output variability and generate ensemble members. The baseline architecture of is an adaptation of the work of , which we repurposed from the task of image generation to the task of precipitation nowcasting by introducing two key modifications: * In the spatial tokenizer () model, we replace the standard reconstruction loss (MAE) with a specific loss that helps improve the reconstruction of precipitation patterns (Magnitude Weighted Absolute Error, MWAE). Moreover, the new loss also shows a promotion of the token utilization rate, where we achieve 100% codebook utilization. * The token sequences used to train the GPT model represent a fixed three-dimensional context of time x height x width of precipitation patterns. This allows the model to learn spatiotemporal dynamics of the evolution of radar sequences. We describe the details of the model setup and novel contributions in the following subsections. 
§.§ Spatial tokenizer: The spatial tokenizer is a Variational Quantized Autoencoder featuring an adversarial loss ()<cit.> and a novel reconstruction loss specifically tailored to improve the reconstruction of precipitation. We carefully tune the architecture of the to obtain a model that provides the highest possible compression, while maintaining a good reconstruction performance and computational complexity. The architecture of the tokenizer is visually summarized in Figure <ref>. The encoder (E) and decoder (G) of the autoencoder are symmetric in design and formed mainly by convolutional blocks, with α = 4 steps of downsampling and upsampling, respectively. With this setup, each latent vector at the bottleneck summarises a patch of 2^α=2^4=16x16 pixels of the input image. Following recent studies<cit.>, we find useful to set a number of channels at the bottleneck (i.e., the length of the latent vector) of 8 to obtain efficient utilization of the codebook, good training stability and the effective capture of essential features in a reduced-dimensional space. The latent vectors at the bottleneck are discretized using a quantization layer that maps them to a finite codebook (Z) by finding the closest vector in the codebook. We define a codebook size of 1024 tokens in the quantization layer. The codebook vectors are initialized randomly and then learned during training. As an example, with an input precipitation map of 192x192 pixels with a dynamic range of 601 possible values for each pixel (from 0 to 60dBZ with a 0.1dBZ step, as described later in Table <ref>), the resulting feature vector at the bottleneck will have a dimensionality of 12H x 12W x 8 channels. Each 8-channel vector is then mapped to one of the possible 1024 vectors in the codebook, resulting in a compressed and discretized representation of 12H x 12W with a dynamic range of 1024 values. The resulting total compression ratio of the spatial tokenizer is 192 · 192 · 601/12 · 12 · 1024≈ 150 times. To support such a high compression ratio while maintaining good reconstruction ability, especially for the extreme values, we developed a novel reconstruction loss that we use in place of the commonly used reconstruction losses (l_1 or l_2, a.k.a. Mean Absolute Error or Mean Squared Error), defined with the following equation (<ref>): MWAE(𝐱, 𝐲) = ∑_i=1^n| σ(x_i) - σ(y_i) | ·σ(x_i) where σ is the sigmoid function σ(z) = 1/1 + e^-z and x and y are the input and output vectors of the autoencoder, respectively. We call this loss Magnitude Weighted Absolute Error (MWAE). By giving more weight to pixels with higher rain rates (magnitude), the loss simultaneously serves two purposes: the first is to nudge the tokenizer towards reserving more learning capacity for the reconstruction of extremes, and the other is to try to rebalance the notoriously skewed distribution of precipitation data, that by nature leans towards low rain rates. We tried different formulations of the loss and found that the introduction of the σ function over the inputs improved model convergence, training stability, and codebook usage. The interactions between loss terms during training follow the original implementation<cit.>. The total size of the model is 90M trainable parameters. §.§ Spatiotemporal forecaster: GPT Similarly to the second-stage model is a causal transformer, for our use case we choose a vanilla GPT-2 architecture with 304M parameters. 
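Before turning to the forecaster configurations, the MWAE reconstruction loss introduced above for the spatial tokenizer is compact enough to state directly in code. The following is a minimal PyTorch-style sketch; the plain sum over pixels (as written in the equation) and the absence of any additional normalisation are choices made here for illustration rather than details taken from the text.

```python
import torch

def mwae_loss(x, y):
    """Magnitude Weighted Absolute Error between input x and reconstruction y.

    Both tensors are assumed to share the same shape (e.g. B x 1 x H x W
    reflectivity maps). The sigmoid squashes the values, and the extra
    sigma(x) factor up-weights pixels with higher rain rates, as in the
    equation above.
    """
    sx, sy = torch.sigmoid(x), torch.sigmoid(y)
    return (torch.abs(sx - sy) * sx).sum()
```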
We train two configurations, one with a spatiotemporal context size of 8 timesteps (40 minutes) x 256 x 256 pixels and a second configuration with 8 timesteps x 128 x 128 pixels. At the token level the two configurations amount to a context length of 2048 (8 x 16 x 16 tokens) and 512 (8 x 8 x 8 tokens) respectively. We refer to the two models as and respectively. In a GPT-like Transformer model, the context size (or sequence length) does not affect the number of parameters, instead, it influences the computational complexity and memory requirements of the model during training and (more crucially) inference. For these reasons, careful considerations in balancing computational complexity and model performance should be made, since timely forecasts are crucial for nowcasting. A summary of the two GPT models' settings is reported in Table <ref>. The training process of the forecaster is schematized in Figure <ref>: contiguous spatiotemporal sequences of radar data are retrieved from the training dataset, and encoded into codebook indices through the frozen encoder and passed to the GPT model as training samples. The indices are ordered starting with the oldest image using a row-first format. The ordering is instrumental to the nowcasting task: in inference, we can provide the model a context that is pre-filled with the past 7 time steps to generate the tokens for the 8th time step. We can generate forecasts for domains with arbitrary sizes by applying a sliding window approach, where we slide the context size across our forecasting domain to predict a target token in the larger domain (starting with the token at the top left position). At inference time the two models are combined in a sandwich-like configuration, with the encoding of the context input images through the encoder, the autoregressive generation of the indices of multiple forecasts steps via the transformer model, and the final decoding of the tokens back to pixel space using the decoder (see Figure <ref>). To obtain multiple ensemble members, the autoregressive generation of the indices can be repeated multiple times while applying a multinomial draw over the output probabilities to pick different tokens. § DATASET The dataset we propose for the study is the radar reflectivity composite produced by the HydroMeteorological Service of the Regional Agency for the Environment and Energy of Emilia-Romagna Region in Northern Italy (Arpae Emilia-Romagna). The agency operates two Dual-polarization C-Band radars in the area of the Po Valley, located respectively in Gattatico (44°47'27"N, 10°29'54"E) and San Pietro Capofiume (44°39'19"N, 11°37'23"E). The scanning strategy allows coverage of the entire Region every 5 minutes. The area is characterized by a complex morphology and it spans from the flat basin of the Po valley in the north to the upper Apennines in the south, and from the Ligurian coast in the west to the Adriatic Sea in the east. For the purpose of this work, scans with a radius of 125 km were chosen with a total coverage of 71172 square km, summarized in Figure <ref>. Arpae fully manages both the radar acquisition strategy and the data processing pipeline. They include several stages of data quality control and error correction developed to reduce the effect of topographical beam blockage, ground clutter, and anomalous propagations <cit.>. Specific corrections are applied over the vertical reflectivity profile to improve precipitation estimates at the ground level<cit.>. 
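Returning briefly to the inference procedure described above (encode the context, autoregressively sample codebook indices with a multinomial draw, then decode), the loop below is a simplified sketch of how ensemble members could be generated. The encoder, gpt and decoder interfaces, the handling of the rolling context, and the omission of the sliding-window logic for large domains are all assumptions made for illustration and do not reflect the authors' implementation.

```python
import torch

@torch.no_grad()
def nowcast_ensemble(encoder, gpt, decoder, past_frames,
                     n_steps=24, n_members=20, temperature=1.0):
    # Encode the most recent observed frames into codebook indices.
    tokens = encoder.encode_indices(past_frames)          # (n_past, h, w), long
    n_past, h, w = tokens.shape
    members = []
    for _ in range(n_members):
        state = tokens.clone()
        for _ in range(n_steps):
            new_frame = []
            for _ in range(h * w):                        # row-first generation
                seq = torch.cat([state[-n_past:].reshape(-1),
                                 torch.as_tensor(new_frame, dtype=torch.long)])
                logits = gpt(seq.unsqueeze(0))[0, -1] / temperature
                probs = torch.softmax(logits, dim=-1)
                # multinomial draw over the codebook gives ensemble diversity
                new_frame.append(torch.multinomial(probs, 1).item())
            state = torch.cat(
                [state, torch.as_tensor(new_frame).view(1, h, w)], dim=0)
        members.append(decoder.decode_indices(state[n_past:]))
    return torch.stack(members)                           # (members, steps, H, W)
```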
The resulting product used for this study is a 2D reflectivity composite map on a 290 x 373 km grid at 1km resolution per pixel, with a time step of 5 minutes. Reflectivity values range from -20dBZ to 60 dBZ. When converting reflectivity values to rain-rate (mm/h) the standard Marshall-Palmer Z-R relationship with a = 200 and b = 1.6 is applied <cit.>. §.§ Data selection, preprocessing and augmentation For the purpose of our study, we extract all contiguous precipitating sequences in the 6 years between 2015 and 2020. Non-precipitating sequences are discarded, resulting in the selection of 179,264 timesteps out of 630,720 (71,5% of the data is discarded). The precipitating sequences are divided between training, validation, and test sets. We prepare two test sets, one for the testing of the spatial tokenizer and one for the testing of the forecaster. To test the spatial tokenizer we isolate all time steps belonging to the days in the years 2019 and 2020 where extreme events happened by analyzing historical weather reports, resulting in a total of 21,871 radar images (time steps). We call this the Tokenizer Test Set (TTS). To test the forecaster we follow the same validation approach of , and we extract out of the Tokenizer Test Set 10 sequences of 12 hours each representative of the most relevant events. This 120 hours subset, namely the Forecaster Test Set (FTS), is used for the testing of the forecaster. The remaining sequences are randomly divided between training and validation, with the following final result: 149,524 steps for training, 7,869 for validation, 21,871 for the TTS that includes 1450 steps (12 hours * 10 events) of the FTS. To further increase the training dataset size and promote generalization we apply random cropping, random 90-degree rotation and flipping to the training dataset during the training phase. The data values are preprocessed by clipping the reflectivity range between 0 and 60 dBZ to minimize the contribution of spurious echoes and drizzle, and by rounding the values to the first decimal digit, resulting in an effective dynamic range of 601 values (from 0 to 60 with a 0.1 step) per pixel. Table <ref> summarizes the resulting dataset characteristics. § RESULTS We analyze the performances of our model at two stages: first, we analyze the amount of information loss introduced by the data compression in the tokenizer, and then we analyze the performance of as a whole for the nowcasting of precipitation up to two hours in the future. §.§ Spatial tokenizer reconstruction performances Given the high compression ratio that we introduce in the it is crucial to understand how much and what type of information is lost during the compression and discretization step operated by the tokenizer. Depending on the nature of the information loss, certain phenomena may be completely lost and this can compromise the ability of the transformer to learn and forecast some precipitation dynamics (e.g. extreme events). The new MWAE loss introduced in Section <ref> is specifically built to improve the reconstruction performances of the tokenizer and reach a good level of data reconstruction while maintaining a high compression factor. Table <ref> shows the performances in reconstruction ability on the TTS between a trained using as reconstruction loss a standard Mean Absolute Error (MAE) and using our proposed MWAE loss. 
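For completeness, the dataset conventions described above (the Marshall-Palmer Z-R conversion with a = 200 and b = 1.6, clipping to the 0-60 dBZ range, and rounding to one decimal) translate directly into code; the function names below are illustrative.

```python
import numpy as np

def dbz_to_rain_rate(dbz, a=200.0, b=1.6):
    """Convert reflectivity (dBZ) to rain rate (mm/h) via Marshall-Palmer Z = a R^b."""
    z = 10.0 ** (dbz / 10.0)            # dBZ -> linear reflectivity Z
    return (z / a) ** (1.0 / b)

def preprocess(dbz):
    """Clip to 0-60 dBZ and round to one decimal, as described above."""
    return np.round(np.clip(dbz, 0.0, 60.0), 1)
```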
We consider both global regression scores like Mean Absolute Error (MAE), Mean Squared Error (MSE) and the Structural Similarity Index Measure (SSIM) along with categorical scores computed by thresholding the precipitation at multiple rain rates (1, 10 and 50 mm/h), like the Critical Success Index (CSI) and the frequency bias (BIAS). The autoencoder trained with MWAE shows significant improvements over all the considered metrics, but it is crucial to notice that the improvements are more pronounced for higher rain rates, whose frequency is almost precisely reconstructed by the autoencoder. This is clearly visible in the improvements in BIAS at 50mm/h, which is defined as the fraction between the number of pixels in the input image over 50 mm/h and the number of pixels that surpass the same threshold in the reconstruction, where we obtain a jump in performance from 0.22 to 0.92 (where 0 is total underestimation, 1 is the perfect score, and greater than 1 is overestimation). The recovery in frequency is also confirmed by analyzing the radially averaged power spectral density (i.e., the amount of energy) of the input and reconstruction: as shown in Figure <ref>, the average power spectra of the MWAE autoencoder closely resembles the input (albeit with an overestimation at the smallest wavelengths), while the standard autoencoder distribution is constantly shifted and underestimated at all wavelengths. Improvement in CSI score is also significant (at 50 mm/h, more than three times higher), albeit not as thorough as the frequency recovery. This implies that the remaining source of error is that the reconstructed precipitation fields have either a different structure or a different location when compared to the input (i.e., the amounts of the reconstructed precipitation are correct but misplaced at the spatial level). To better characterize this remaining source of error, we compute the SAL measure <cit.>, which evaluates three key aspects of the precipitation field within a specified domain: structure (S), amplitude (A), and location (L). The amplitude component (A) measures the relative deviation of the domain-averaged reconstructed precipitation amount from the input. Positive values indicate an overestimation of total precipitation, while negative values indicate an underestimation. The structure component (S) assesses the shape and size of predicted precipitation areas. Positive values occur when these areas are too large or too flat, while negative values indicate that they are too small or too peaked. The location component (L) evaluates the accuracy of the predicted location of precipitation. It combines information about the displacement of the reconstructed precipitation field’s center of mass compared to the input and the error in the weighted average distance of the precipitation objects from the center of the total field. Perfect forecasts result in zero values for all three components, indicating no deviation between input and reconstructed precipitation patterns. The SAL analysis plot for both autoencoders is shown in Figure <ref>. The MWAE autoencoder improves over the baseline autoencoder on all scores, with a median value that is close to zero for all three components. A residual source of absolute error remains in the Structure component, while both Amplitude and Location errors are negligible. 
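The categorical scores used above (CSI and frequency bias at a given rain-rate threshold) follow from a standard contingency table, as sketched below; the bias is written in the usual convention of predicted over observed exceedances, which is an interpretation of the definition given in the text rather than a quotation of it.

```python
import numpy as np

def categorical_scores(obs, recon, thr):
    """CSI and frequency bias for exceedances of a rain-rate threshold `thr`."""
    o, p = obs >= thr, recon >= thr
    hits = np.sum(o & p)
    misses = np.sum(o & ~p)
    false_alarms = np.sum(~o & p)
    csi = hits / (hits + misses + false_alarms + 1e-12)
    bias = (hits + false_alarms) / (hits + misses + 1e-12)
    return csi, bias
```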
In summary, divergences in the size and shape of the reconstructed precipitation patterns account for the majority of the error for our new autoencoder, while the locations, frequencies, and energy contents of the precipitation patches are mostly accurate. Overall, this is a good compromise for the nowcasting task since we can tolerate higher compromises for errors in structure, whereas systematic errors in amplitude, frequency, or location can seriously impair the forecaster's ability to accurately predict the evolutionary dynamics of precipitation. Some qualitative examples of the input and reconstruction from both autoencoders are presented in Figure <ref>. §.§ Nowcasting performances We examine and compare forecasting performance with that of the Lagrangian INtegro-Difference equation model with Autoregression (LINDA)<cit.>, the state-of-the-art ensemble nowcasting model included in the pySTEPS package<cit.>. LINDA is a nowcasting technique intended to provide superior forecast skill in situations with intense localized rainfall compared to other extrapolation methods (S-PROG or STEPS). Extrapolation, S-PROG<cit.>, STEPS<cit.>, ANVIL<cit.>, integro-difference equation (IDE), and cell tracking techniques<cit.> are all combined in this model. For the comparison, we use the FTS. Out of the 10 events in FTS, 7 are convective events occurring in spring or summer, and three are winter precipitation events. For each event, we produce a forecast every 30 minutes, and each forecast is a 20-member ensemble forecast with 5-minute time steps and a maximum lead time of 2 hours (i.e., 24 forecasting steps) for both LINDA and . This results in a total of 200 forecasts (20 forecasts per event) generated per model. For we test both the two model configurations, and . For verification assessment, we rely on the Continuous Ranked Probability Score (CRPS) and the rank histogram, which are essential tools for verifying ensemble forecasts. By showing the frequency of observed values among the forecast ranks, the rank histogram evaluates the dispersion and reliability of ensemble forecasts and highlights biases such as under- or over-dispersion. By comparing the prediction's cumulative distribution function to the actual value, CRPS calculates a numerical score for forecast skill that indicates how accurate a probabilistic forecast is. The two scores complement each other, with the CRPS providing a measure of forecast accuracy as a whole and the rank histogram emphasizing the ensemble spread and reliability. The CRPS score for each of the three models—LINDA, , and —is displayed in Figure <ref>: both variants of outperform LINDA across all lead times, with outperforming all other models. This result clearly shows that the model can learn a more thorough dynamic of the evolution of precipitation patterns when the context size is more spatially extended. It is important to notice that this improvement comes with a non-negligible increase in terms of computational time at inference, which in our experiments was close to an order of magnitude ( computes a timestep in 2 seconds compared to 17 seconds for the larger model on an NVIDIA RTX 4090). Figure <ref> analyzes the rank histogram at different lead times for all three models, including information on the Kullback–Leibler divergence (KL) from the uniform distribution. Both versions of provide a better overall score over LINDA that tends to be under-dispersed, with being the best model (Figure <ref>). 
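A sample estimator of the CRPS used in the verification above can be written from the energy form CRPS = E|X - y| - (1/2) E|X - X'|, applied per pixel and lead time and then averaged. The sketch below uses the plain 1/m^2 estimator for the second term; this choice of estimator is made here for simplicity and is not a detail taken from the text.

```python
import numpy as np

def ensemble_crps(members, obs):
    """Sample CRPS for a single observation given a 1-D array of ensemble members."""
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2
```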
Moreover, shows a rank distribution close to optimal up to the first hour, with a KL divergence from the uniform distribution of 0.006 at 60 minutes lead time (12 steps). displays an overall better rank histogram than LINDA up to the first 60 minutes with a tendency to underestimation that compounds over time: we attribute this behavior to the increased ability of the to capture the training distribution, that has a higher ratio of dissipating precipitation events than the FTS (which is filtered to contain only extreme events). Figure <ref> shows an example of nowcast for a convective case in the FTS, with two ensemble members and the ensemble mean for both LINDA and . generates two realistic and diverse forecasts, with an ensemble mean that features a better location accuracy than LINDA compared to the observations. § DISCUSSION AND FUTURE WORK introduces a novel approach to ensemble nowcasting of radar-based precipitation, leveraging a GPT model and a specialized spatial tokenizer to produce realistic and accurate ensemble forecasts. We show that this approach can provide reliable forecasts, outperforming the state-of-the-art extrapolation method in both accuracy and uncertainty estimation. 's deterministic architecture enhances interpretability and reliability by generating realistic ensemble forecasts without random noise inputs. The model can be declined in different sizes, both in context length and in terms of parameters (which we postponed to future analyses) allowing to balance the trade-off between accuracy and computational demands and providing flexibility for different operational settings. We believe that our method, by adopting an architecture influenced by large language models (LLMs), paves the way for future promising research in precipitation nowcasting that can incorporate all the improvements and developments from the quickly developing field of LLM research. This includes more efficient architectures, improved training techniques, and better interpretability tools. Such integration can potentially enhance 's performance, scalability, and usability, ensuring that it remains a state-of-the-art nowcasting tool. Despite its strengths, the approach poses specific challenges that must be considered for the operational usage of the model. The approach requires training two models in cascade, each with its own set of challenges. In our experiments, it was hard to find a stable configuration to train the spatial tokenizer that has to balance multiple competing losses. The MWAE reconstruction loss we introduced helped substantially in terms of both convergence and stability, although at the cost of slower training induced by the smoothing effect of the sigmoid (σ) terms in the loss. On the other hand, we found the forecaster to be very stable in training (as expected by transformers) but computationally intensive in inference, especially for the long context configuration (), making its use in a real-time application like nowcasting challenging without significant resources. The ability of the model to effectively capture the training distribution is both its main strength and point of attention. From an operational perspective, our hypothesis is that, due to the distinct distribution of stratiform and convective precipitation, training separate models for stratiform (winter) and convective (summer) precipitation may result in better forecasts. This implies that a larger and better-quality dataset may be needed than the one used in this work to avoid model overfitting. 
Future work could explore optimizing context size and computational complexity to balance performance and resource demands, as well as integrating the vast literature about more efficient transformer architectures (e.g., flash attention, speculative decoding, etc...). We also plan to explore the interpretability of the model to control and condition the model for different tasks. The peculiar characteristics of open the possibility of guiding the generative process of the model by combining the probabilistic output of the forecaster with the interpretability of the learned codebook in terms of physical quantities. A possibility that we envision is to leverage for tasks like seamless forecasting (a.k.a. blending), generation of what-if scenarios, forecast conditioning, weather generation, and observation correction capabilities. unsrtnat
http://arxiv.org/abs/2407.02880v1
20240703075408
Knowledge Composition using Task Vectors with Learned Anisotropic Scaling
[ "Frederic Z. Zhang", "Paul Albert", "Cristian Rodriguez-Opazo", "Anton van den Hengel", "Ehsan Abbasnejad" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.CV" ]
Bispectrum from inflation/bouncing Universe in VCDM Ryo Namba =================================================== § ABSTRACT Pre-trained models produce strong generic representations that can be adapted via fine-tuning on specialised datasets. The learned weight difference relative to the pre-trained model, known as a task vector, characterises the direction and stride of fine-tuning that enables the model to capture these specialised representations. The significance of task vectors is such that simple arithmetic operations on them can be used to combine diverse representations from different domains. This paper builds on these properties of task vectors and aims to answer (1) whether components of task vectors, particularly parameter blocks, exhibit similar characteristics, and (2) how such blocks can be used to enhance knowledge composition and transfer. To this end, we introduce , an algorithm that linearly combines parameter blocks with different learned coefficients, resulting in anisotropic scaling at the task vector level. We show that such linear combinations explicitly exploit the low intrinsic dimensionality of pre-trained models, with only a few coefficients being the learnable parameters. Furthermore, composition of parameter blocks enables modular learning that effectively leverages the already learned representations, thereby reducing the dependency on large amounts of data. We demonstrate the effectiveness of our method in task arithmetic, few-shot recognition and test-time adaptation, with supervised or unsupervised objectives. In particular, we show that (1) learned anisotropic scaling allows task vectors to be more disentangled, causing less interference in composition; (2) task vector composition excels with scarce or no labelled data and is less prone to domain shift, thus leading to better generalisability; (3) mixing the most informative parameter blocks across different task vectors prior to training can reduce the memory footprint and improve the flexibility of knowledge transfer. Moreover, we show the potential of as a parameter-efficient fine-tuning method, particularly with less data, and demonstrate that it can be easily scaled up for higher performance. § INTRODUCTION One practical advantage of neural networks is the fact that knowledge learned from a previous problem, in the form of network weights, can be transferred to solve other related problems. Commonly referred to as transfer learning <cit.>, this technique is often applied when a model trained on a general-purpose dataset—ImageNet <cit.> for many years—is fine-tuned on other datasets to improve performance on downstream problems. In the past, classification models <cit.> have been used as the medium for such knowledge transfer, which played a crucial part in the success of detection and segmentation <cit.>. In recent years, foundation models <cit.> trained on broad data, CLIP <cit.> particularly, have demonstrated strong performance on a multitude of tasks, even when applied in a zero-shot manner. Besides the conventional way of exploiting the knowledge in these models via fine-tuning, recent works <cit.> have presented more direct measures to manipulate the network weights. In particular, Ilharco  <cit.> showed that, a task vector, defined as the weight difference between a pre-trained and a fine-tuned model, can be used as a carrier of the task-specific knowledge learned via fine-tuning. 
As such, multiple task vectors, when combined with simple arithmetic, can form a multi-task model that largely retains its performance across all fine-tuning tasks. Linearisation techniques <cit.>, in addition, have been shown to further enhance this compositionality. Intrigued by this phenomenon, we investigate the potential of task vectors being knowledge carriers in this paper, by learning linear combinations of them (Figure <ref>) for various problems. In particular, parameter blocks, weights and biases, tend to encode different learned representations in different layers. We thus learn an independent scaling coefficient per block for more precise adjustments tailored to the unique roles of each parameter block. This results in anisotropic scaling of task vectors (Figure <ref>), and allows us to exploit their modularity in knowledge composition, granting higher controllability when steering the behaviours of a model for task arithmetic <cit.>. The potential applications of task vector composition extend beyond model editing. With the coefficients being the only learnable parameters, our method exploits the rich knowledge encapsulated in the task vectors by searching in a low-dimensional coefficient space. As a result, it is a competitive parameter-efficient fine-tuning (PEFT) method, and is particularly effective in cases where labelled data is scarce. This offers new opportunities for few-shot learning <cit.> and test-time adaptation <cit.>. Furthermore, for multi-purpose models such as CLIP <cit.>, variants of the model trained with different data sources or fine-tuned on different downstream tasks are often available <cit.>. These resources constitute a significant knowledge bank, with task vectors being the knowledge carrier. Many learning problems may be simplified to learning a combination of task vectors. Our primary contribution is a learning algorithm named , wherein otherwise complex learning problems can be framed as learning linear combinations of task vectors. The algorithm is broadly applicable to optimising supervised and unsupervised objectives. Its effectiveness is demonstrated in task arithmetic, few-shot recognition, test-time adaptation and parameter-efficient fine-tuning, where we show that (1) learning linear combinations of task vectors directly exploits the low intrinsic dimensionality of pre-trained models <cit.>, resulting in a small number of learnable parameters; (2) standard task vectors, otherwise inferior to linearised variants <cit.> in task arithmetic, can produce stronger multi-task models with learned anisotropic scaling; (3) is effective in low-data regimes, and improves the accuracy of CLIP by 6.5 absolute points averaged over 22 datasets with unlabelled data; (4) is complementary to previous few-shot adaptation methods, in that one third of the examples it improves upon are unique; (5) as a few-shot learning method is less prone to domain shift, and achieves better generalisation on out-of-domain datasets; (6) the most informative parameter blocks from different task vectors can be mixed prior to training, allowing for flexible and efficient knowledge transfer under memory constraints; (7) is a strong PEFT method when data is limited, and existing PEFT methods such as low-rank adaptations (LoRA) <cit.> can be seamlessly integrated into to improve memory efficiency. 
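Before the formal setup of the next section, the basic objects manipulated throughout, task vectors and their isotropically scaled addition to a pre-trained model, can be sketched in a few lines; the learned anisotropic variant studied in this paper refines exactly this operation by giving every parameter block its own coefficient. The scaling factor below is an arbitrary illustrative value, not one prescribed by the text.

```python
import torch

def task_vector(pretrained_state, finetuned_state):
    """tau = theta_ft - theta_0, stored blockwise as a state dict."""
    return {k: finetuned_state[k] - pretrained_state[k]
            for k in pretrained_state
            if torch.is_floating_point(pretrained_state[k])}

def apply_task_vectors(pretrained_state, task_vectors, alpha=0.3):
    """Isotropic task arithmetic: theta_0 + alpha * sum_i tau_i (one alpha for all blocks)."""
    new_state = {k: v.clone() for k, v in pretrained_state.items()}
    for tau in task_vectors:
        for k, delta in tau.items():
            new_state[k] = new_state[k] + alpha * delta
    return new_state
```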
§ MODELS AND TASK VECTORS As Ilharco  <cit.> demonstrated, task vectors exhibit many intriguing properties across a wide range of models, such as CLIP <cit.>, GPT-2 <cit.> and T5-based models <cit.>. To facilitate more in-depth experimentation and analysis, we focus on the CLIP model in this paper, due to its wide availability and manageable size. In particular, we follow previous practice <cit.> and acquire task vectors by fine-tuning the image encoder, with the text representations frozen. This ensures that image encoders fine-tuned on different datasets produce features residing in the same representation space, through a common text encoder. The task vectors obtained from these fine-tuned encoders can thus be combined more effectively to form a unified multi-task model. Formally, denote the CLIP image encoder by f: ×Θ→, such that for input image ∈ and parameters ∈Θ, = f(; ) is the learned latent representation for the input image. Denote the weights of a pre-trained model by _0, and the weights of its fine-tuned variant by _i, i ∈^+, where i indexes a dataset _i. We follow Ilharco  <cit.> and define a task vector as _i = _i - _0. In addition, we investigate task vectors produced by linearised variants of the image encoder using the first-order Taylor expansion, g(; ) f(; _0) + ( - _0) ∇_ f(; _0). Ortiz-Jiménez  <cit.> showed that, task vectors obtained from fine-tuning the linearised variants have low disentanglement errors, and exhibit strong compositional properties. § LEARNING TASK VECTOR COMPOSITIONS Parameters in a neural network, depending on the depth of the layer, often have different significance. For instance, early layers in convolutional neural networks <cit.> are known for extracting generic, low-level features, such as edges, corners, ., while deeper layers produce features more specific to the task. We recognise the non-uniform impacts parameters at different layers can have, and do not perform isotropic scaling on task vectors. Instead, weights, biases and any other forms of parameterisation, which we collectively refer to as parameter blocks, will be scaled independently. §.§ Proposed method: Formally, denote a task vector with m parameter blocks by = (^(1), …, ^(m)), where each parameter block ^(j) is vectorised, and round brackets denote column vector concatenation. We learn a block diagonal matrix Λ, parameterised as Λ = [ λ^(1) I^(1) … 0; ⋮ ⋱ ⋮; 0 … λ^(m) I^(m); ], where λ^(j)∈ is a learnable coefficient; I^(j) denotes an identity matrix with its number of columns matching the dimension of ^(j); and the superscript j ∈^+ indexes a parameter block. This results in anisotropic scaling of a task vector, that is, Λ_i _i = (λ^(1)_i ^(1)_i, …, λ^(m)_i ^(m)_i ), where the subscript i ∈^+ indexes a task vector. As such, assuming a supervised objective, finding the optimal composition of task vectors can be defined as the following optimization problem _Λ_1, …, Λ_n (, ) ∈_t( f(; _0 + ∑_i=1^n Λ_i _i ), ), where is the loss function for a target task; n is the number of task vectors; is the labels corresponding to inputs ; _t denotes a target dataset. The number of learnable parameters, as a result, is precisely mn, Let us denote the solution to the aforementioned optimization problem by {Λ_i^⋆}_i=1^n. In inference, model f(, _0 + ∑_i=1^n Λ_i^⋆_i) will be deployed, which incurs no additional computational cost compared to models trained in the conventional way. In addition, we investigate the task vectors obtained from fine-tuning linearised variants of the model, g(x) in Eq. 
<ref>. Denote such task vectors by . The learning objective with linearised task vectors can be derived as follows _Λ_1, …, Λ_n (, ) ∈_t( f(; _0) + (∑_i=1^n Λ_i _i ) ∇_ f(; _0), ). §.§ Relation to intrinsic dimensionality A notable characteristic of is its parameter efficiency. To offer more intuitions, we refer to previous findings <cit.> that deep neural networks often produce solutions residing in a subspace with much lower intrinsic dimensionality. This is measured by finding a minimum number of d parameters, such that learning these parameters (θ̂∈^d) leads to approximately the same performance as optimising in the full parameter space (∈^D). This can be expressed as follows = _0 + P θ̂, where _0 ∈^D denotes the pre-trained weights and P ∈^D × d is a random projection matrix. We demonstrate that learning task vector compositions leads to the same formulation. For brevity of exposition, let us consider compositions at the block level. For the j-th parameter block, we have ^(j) = _0^(j) + ∑_i=1^nλ_i^(j)_i^(j) = _0^(j) + [ _1^(j), …, _n^(j)]_projection matrix[ λ_1^(j), …, λ_n^(j)] _learnable parameters. We draw a parallel between Eqs. <ref> and <ref> and note that explicitly exploits the low intrinsic dimensionality by learning a small set of coefficients. The number of task vectors, n, is much smaller than the dimension of weight vector _i^(j), and is analogous to the intrinsic dimensionality d. However, as opposed to using a random projection matrix P, constructs the projection matrix from task vectors, making use of the learned representations. To demonstrate its advantage, we use the same number of bases for task vectors[A fixed number of task vectors are selected based on the blockwise gradient. Details can be found in Section <ref> and Appendix <ref>.] and random bases[Each random basis of the projection is drawn from a Gaussian distribution with the mean and standard deviation to match those of the pre-trained weights in the corresponding parameter block, _0^(j).], and show that task vectors consistently achieve higher performance in Figure <ref>. These results solidify our understanding of task vectors being knowledge carriers. We thus set out to apply to various applications. § TASK ARITHMETIC Task arithmetic <cit.> is comprised of a few tasks aimed at editing pre-trained models using task vectors. Following previous practice <cit.>, we conduct experiments under the settings of task negation and task addition on eight image classification datasets (details included in Appendix <ref>). Previous works acquire the optimal isotropic scaling factor on task vectors via a hyper-parameter search on validation sets. As such, we learn anisotropic scaling matrices on the same validation sets, and visualise the learned coefficients to shed light on this mechanism. §.§ Task negation Task negation aims to reduce undesired biases, characterised by the performance, on a target task, while maintaining performance on a control dataset, ImageNet <cit.> in this case. Denote the validation sets for the target and control tasks by _t and _c, respectively. We perform a simultaneous gradient ascent on the target task and gradient descent on the control task, described as follows, _Λ_t(, ) ∈_t-( f(; _0 + Λ_t _t), ) + (, ) ∈_c( f(; _0 + Λ_t _t), ), where _t is the task vector for the target dataset, and cross-entropy loss is used. The learning objectives with linearised task vectors can be derived easily based on Eq. <ref>, and so are omitted. 
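In code, the learned composition described above amounts to keeping the backbone and the task vectors frozen and optimising one coefficient per (task vector, parameter block) pair. The sketch below is one possible PyTorch realisation using a stateless functional call; the class and method names are illustrative and do not correspond to the authors' released code, and the task vectors are assumed to share the same parameter keys as the pre-trained state.

```python
import torch
import torch.nn as nn
from torch.func import functional_call

class AnisotropicComposition(nn.Module):
    """Learn one coefficient per (task vector, parameter block) pair."""

    def __init__(self, model, base_state, task_vectors):
        super().__init__()
        self.model = model
        for p in self.model.parameters():          # backbone stays frozen
            p.requires_grad_(False)
        self.base_state = {k: v.detach() for k, v in base_state.items()}
        self.task_vectors = [{k: v.detach() for k, v in tv.items()}
                             for tv in task_vectors]
        n_tasks, n_blocks = len(task_vectors), len(base_state)
        # the only trainable parameters: lambda_i^(j)
        self.coeffs = nn.Parameter(torch.zeros(n_tasks, n_blocks))

    def composed_params(self):
        params = {}
        for j, (name, theta0) in enumerate(self.base_state.items()):
            delta = sum(self.coeffs[i, j] * tv[name]
                        for i, tv in enumerate(self.task_vectors))
            params[name] = theta0 + delta
        return params

    def forward(self, x):
        # Stateless call of the frozen backbone with the composed weights.
        return functional_call(self.model, self.composed_params(), (x,))
```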
We summarise the task negation results in Table <ref>, and show that our method yields a significant improvement with standard task vectors, while the improvement with linear task vectors is less prominent. In particular, we observe that weight matrices tend to have much larger negative coefficients, as shown in Figure <ref>. To investigate this, we instead learn coefficients only for the weight matrices, with zero coefficients on all other parameter blocks, effectively reducing the number of learnable parameters by two thirds. With ViT-B/32 as the backbone, we observe an average accuracy of 20.14 (vs. 18.76) on the target tasks and 61.23 (vs. 61.21) on the control task, which shows that the weight matrices carry the majority of the knowledge required for task negation. §.§ Task addition Task addition aims at producing a multi-task model using task vectors acquired from a range of datasets. We utilise task vectors from the eight image classification datasets, and learn the anisotropic scaling matrices with the objectives described in Eqs. <ref> and <ref> using the cross-entropy loss. The training data comprises the validation sets of all eight datasets, D_t = ⋃_i=1^8 D_i. A performance comparison against previous methods is shown in Table <ref>, where our method yields substantial improvements. Interestingly, we note that with previous methods <cit.>, linear task vectors outperform the standard ones in terms of absolute accuracy, while the converse is true with our method. To investigate this, we compute the pairwise disentanglement error ξ <cit.>, which measures the percentage of data with inconsistent predictions when two task vectors are combined (more details in Appendix <ref>). Results in Figure <ref> show that standard task vectors with learned anisotropic scaling achieve the lowest average error, indicating less interference in task vector composition. Along with their higher fine-tuning accuracy, previously referred to as the non-linear advantage <cit.>, standard task vectors therefore demonstrate stronger performance in task addition. Furthermore, we again observe that weight matrices have consistently larger coefficients in Figure <ref>, and learning coefficients on weight matrices alone results in an accuracy of 84.17 (vs. 84.98) using ViT-B/32. This suggests that weight matrices in transformers are the primary knowledge carriers, which enables knowledge composition and negation. Note that for better clarity in visualisation, we add L_1 regularisation on the learned coefficients during learning, which causes a marginal performance drop (84.23 vs. 84.98) but significantly improves interpretability. In addition, we observe substantially higher coefficients on deeper layers (Figure <ref>). This aligns with our understanding that early layers extract generic features that do not vary significantly across datasets <cit.>, while deeper layers produce task-specific features and require more careful adaptation. § KNOWLEDGE TRANSFER IN LOW-DATA REGIMES Beyond model editing for task arithmetic, we explore the idea of transferring existing knowledge in task vectors to previously unseen tasks. To this end, we use the CLIP <cit.> model and a total of 22 image classification datasets, each of which produces a task vector. We defer the details of the datasets and the process to acquire task vectors to Appendix <ref>. Denote the set of available task vectors by T = {τ_i}_i=1^n, and the dataset corresponding to task vector τ_i by D_i.
For each target dataset D_t, we learn task vector compositions using the subset T ∖ {τ_t}, excluding the task vector for the target dataset to avoid information leakage. We test our method in few-shot and test-time adaptation, to demonstrate its effectiveness in low-data regimes. Notably, we observe that task vectors complement existing few-shot methods; combining aTLAS with them thus leads to significant improvements. §.§ Few-shot adaptation Few-shot recognition requires learning new objects or concepts using a limited amount of labelled data, namely k images per class in the k-shot setting. Following previous practice <cit.>, we approach this problem by adapting a pre-trained CLIP model <cit.> to each target dataset D_t. We use the subset of task vectors T ∖ {τ_t} and k ∈ {1, 2, 4, 8, 16} images from dataset D_t. During training, we adopt the cross-entropy loss and minimise the objectives described in Eqs. <ref> and <ref> for standard and linear task vectors, respectively. We compare against Tip-Adapter <cit.> and LP++ <cit.> using CLIP with the ViT-B/32 backbone, across 22 datasets and over three random seeds, and summarise the results in Figure <ref>. We show that with k = 1, our approach, aTLAS, significantly outperforms previous methods, demonstrating the effectiveness of knowledge transfer with scarce labelled data. More importantly, we note that the idea of task vector composition is highly complementary to those presented in previous methods. As such, combining aTLAS with them results in significant improvements. This is also illustrated in Figure <ref> as a Venn diagram, where we show the percentage of examples in the validation set that are incorrectly classified by the pre-trained model but correctly classified by few-shot methods. Out of the examples that aTLAS improves upon, around half are unique compared against either Tip-Adapter or LP++, demonstrating its complementarity. We also found that standard task vectors generally perform better than their linearised counterparts, and so defer the results of linear task vectors to Appendix <ref>. In addition, due to the low number of learnable parameters, aTLAS exhibits strong generalisability. To demonstrate this, we learn task vector compositions on ImageNet <cit.>, and test them on the out-of-domain (OOD) datasets ImageNet-A <cit.>, ImageNet-R <cit.>, ImageNet-Sketch <cit.> and ImageNetV2 <cit.>. We summarise the results in Figure <ref>, which shows the performance difference against the pre-trained model. Notably, aTLAS is the only method that consistently improves upon the pre-trained model on OOD datasets, and combining aTLAS with other methods can improve their generalisability. We also test our method and its variants integrated with Tip-Adapter and LP++ using other backbones, including ViT-{B/16, L/14} and ResNet-{50, 101}, and find that the results are consistent with those for ViT-B/32. More details can be found in Appendix <ref>. §.§ Task vector budget and selection In practical applications, there may only be a limited number of task vectors available, or the number of task vectors used in training may be restricted due to memory constraints. To this end, we study the influence of the task vector budget b on few-shot recognition performance. We experiment with four selection strategies: (1) random selection; (2) feature-based selection; (3) gradient-based selection; and (4) blockwise gradient-based selection. To elaborate, feature-based selection computes the mean image feature representation of each dataset, and selects the b task vectors from the datasets most similar to the target dataset.
Gradient-based selection computes the gradient with respect to each of the learnable coefficients, and either selects entire task vectors with the highest L_1 gradient norm, or selects task vectors with the highest blockwise gradient for the corresponding parameter block, repeating the process for all parameter blocks. The blockwise selection therefore allows parameter blocks across different task vectors to be mixed prior to training. More details can be found in Appendix <ref>.
[Figure: Few-shot performance of aTLAS with various task vector budgets (x-axis: task vector budget b; y-axis: accuracy (%); legend: Random, Features, Grad. whole, Grad. blockwise). The accuracy is averaged across 22 datasets and over three random seeds, with one standard deviation overlaid as the error margin. Performance under the 16-shot setting is visualised, while additional detailed results are included in Table <ref>.]
For a task vector budget b ∈ {1, 2, 5, 10, 15, 21}, we summarise the few-shot recognition performance in Figure <ref>. First, we note that the accuracy of aTLAS does not plateau with the maximum number of task vectors available (21), indicating that more task vectors could be beneficial. Second, we find that selecting task vectors based on feature similarity is a simple yet effective approach with sufficient budgets (b > 5). Selecting whole task vectors by gradient is less effective, generally on par with random selection. Nevertheless, the blockwise variant achieves the best accuracy, particularly for very low budgets (b ∈ {1, 2}), as it is able to exploit knowledge from more task vectors than the budget dictates. We thus deduce that parameter blocks can function as knowledge carriers in isolation, independent of the task vectors to which they belong. In fact, a parameter block τ^(1) as part of the task vector τ = [τ^(1), …, τ^(m)] can be considered a task vector by itself, [τ^(1), 0, …, 0]. This modular nature underscores the potential of task vectors for flexible and efficient knowledge transfer. §.§ Test-time adaptation Test-time adaptation (TTA) <cit.> assumes no labelled data is available for the target task, requiring the model to adapt in an unsupervised fashion. We conduct experiments under the offline adaptation setting, which allows access to the target dataset. We consider three categories of self-supervised techniques for TTA: contrastive objectives, entropy objectives and pseudo-labelling. Contrastive objectives align representations of the same image under different data augmentations; for this category, we adopt SimCLR <cit.>, a simple yet effective method. Entropy objectives encourage the pre-trained model to produce confident predictions on unseen datasets by minimising the entropy of its predictions. While effective in simpler cases, entropy minimisation can lead to catastrophic collapse on complex tasks; we therefore utilise a state-of-the-art sharpness-aware entropy minimisation algorithm named SAR <cit.>. Last, we experiment with an unsupervised pseudo-labelling algorithm inspired by FixMatch <cit.>, which we refer to as unsupervised FixMatch (UFM). UFM selects an equal number of highly confident examples per class as the labelled set, and then employs FixMatch to produce pseudo-labels for the rest of the unlabelled examples. Details are available in Appendix <ref>. We summarise the results in Table <ref> and compare our method, learning task vector compositions, against the conventional approach of tuning the layer normalisation parameters <cit.>. We show that under all self-supervised objectives, aTLAS achieves higher accuracy than tuning the LayerNorm parameters. In particular, LayerNorm tuning involves 30k learnable parameters with ViT-B/32, while our method has only 3.5k learnable parameters. We note that with the UFM objective, aTLAS performs the best and improves the accuracy by an average of 6.5 absolute points over the zero-shot baseline.
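For the entropy-based objective, a minimal unsupervised sketch in the same style is given below. It shows plain entropy minimisation rather than the sharpness-aware SAR variant actually used in the experiments, reuses compose, coeffs and opt from the earlier snippets, and assumes an unlabelled_loader over the target data.

# Unsupervised adaptation: minimise prediction entropy on the target data,
# updating only the composition coefficients (about 3.5k parameters for ViT-B/32).
for batch in unlabelled_loader:
    x = batch[0] if isinstance(batch, (tuple, list)) else batch   # labels, if present, are ignored
    theta = compose(theta0, task_vectors, coeffs)
    probs = functional_call(model, theta, (x,)).softmax(dim=-1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1).mean()
    opt.zero_grad(); entropy.backward(); opt.step()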
§ RELATION TO PARAMETER-EFFICIENT FINE-TUNING One of the key advantages of aTLAS is its ability to adapt pre-trained models with few learnable parameters, making it suitable for parameter-efficient fine-tuning (PEFT). Similar to popular PEFT methods such as low-rank adaptation (LoRA) <cit.>, our approach does not introduce additional modules, thereby avoiding an increase in inference complexity. In addition, since only the weight matrices adapted by a LoRA have a non-zero weight difference, LoRAs are in fact sparse task vectors. They can thus be seamlessly integrated into our method, significantly reducing the memory cost. §.§ LoRAs as task vectors Due to their sparsity and rank deficiency, LoRAs used as task vectors may have limited representation capacity and carry less knowledge. They may therefore be inferior to standard task vectors for knowledge transfer. We investigate this by learning linear combinations of LoRAs[Details about the process to acquire LoRAs are included in Appendix <ref>.] using our method, under the few-shot recognition setting. Results are summarised in Table <ref>. We first shed light on the impact of sparsity, and compare two variants of our method that learn linear combinations of either all parameter blocks or just the weight matrices. Results show that sparsity leads to an accuracy decrease of around 0.5% on average, except in the one-shot setting. The rank deficiency, on the other hand, causes a more substantial accuracy drop. Nevertheless, this can be largely mitigated by increasing the rank: using a rank of 64 leads to performance similar to learning compositions of only the weight matrices in standard task vectors. In conclusion, while sparsity and rank deficiency introduce some performance drops, especially in low-shot settings, LoRAs are competitive alternatives to standard task vectors due to their low memory cost.
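To illustrate how a LoRA can be treated as a sparse task vector in this framework, the sketch below materialises the low-rank update for each adapted weight matrix and leaves every other parameter block at zero. The name lora_factors and the scaling argument alpha are illustrative assumptions, not the interface of any particular LoRA library.

def lora_to_task_vector(theta0, lora_factors, alpha=1.0):
    # lora_factors maps an adapted weight name to its low-rank factors (A, B),
    # so the weight difference is alpha * B @ A; every other block stays zero (sparsity).
    tau = {name: torch.zeros_like(p) for name, p in theta0.items()}
    for name, (A, B) in lora_factors.items():
        tau[name] = alpha * (B @ A)
    return tau

# A LoRA task vector then composes exactly like a standard one, e.g.
# theta = compose(theta0, [lora_to_task_vector(theta0, f) for f in lora_factor_sets], coeffs)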
[Figure: Scalability of aTLAS. We compare the accuracy of our method against LoRAs and vary the amount of training data (x-axis: percentage of training data (%); y-axis: accuracy (%); legend: aTLAS (2k), aTLAS ×5 (10k), aTLAS ×20 (40k), aTLAS ×80 (160k), aTLAS ×1200 (2.4M), LoRA (2.4M)). Results are averaged over 22 datasets. Detailed results are included in Table <ref>.]
§.§ Scalability of aTLAS Despite the parameter efficiency of aTLAS, its performance is not as competitive when sufficient training data is available. To address this, we devise a strategy to flexibly scale up the number of learnable parameters as needed.
Specifically, we randomly divide each parameter block into K partitions, and assign a learnable coefficient to each partition, naturally increasing the number of learnable parameters K-fold. We denote these variants by aTLAS × K. We conduct experiments with these variants using {1, 5, 10, 25, 35, 50, 100}% of the total available training data across the 22 datasets used in Section <ref>. The results are summarised in Figure <ref>, showing that our method consistently improves as K increases. Compared to LoRAs, particularly with limited training data, our method achieves higher performance with fewer learnable parameters. With sufficient training data, the variant aTLAS × 1200 leads to higher performance with a similar number of learnable parameters, as it is able to exploit the knowledge contained in the task vectors, which may otherwise be unobtainable from the target dataset. § RELATED WORK Task vectors and model compositions. Recent studies have demonstrated the possibility of manipulating the behaviours of neural networks directly in the weight space <cit.>. In particular, task vectors <cit.>, as carriers of the domain-specific knowledge learned through fine-tuning, exhibit strong compositional properties. Such compositionality can be enhanced via linearisation using a first-order Taylor expansion <cit.>, and enables model editing with simple arithmetic, e.g. addition and negation. Low-rank adaptations <cit.>, as special forms of task vectors, were shown to also support such arithmetic operations. A recent study <cit.> also investigated the idea of learning combinations of LoRAs for few-shot recognition. Model-based transfer learning. One interpretation of transfer learning <cit.> is to exploit the knowledge encapsulated in a pre-trained model for a target domain. Amongst the various sub-modules of a pre-trained model, transferring the feature extractor is the most extensively studied. This ranges from early convolutional neural networks <cit.> to modern transformers <cit.>, and from vision backbones <cit.> to language models <cit.>. For vision applications, classification models trained on ImageNet <cit.> have been used as the medium for knowledge transfer. In recent years, contrastively pre-trained multi-modal models such as CLIP <cit.> have emerged as a prevalent choice. Such models are trained on large volumes of data by aligning image and language representations, leading to strong baselines well suited for transfer learning. CLIP representations have since been used for medical imaging <cit.>, semantic segmentation <cit.>, satellite imaging <cit.>, and more. Model adaptation in low-data regimes. The performance of pre-trained models is often constrained when applied to specific tasks with limited labelled data. To address this limitation, extensive research has been conducted on few-shot adaptation of CLIP <cit.>. These studies focus on various techniques, including prompt engineering <cit.>, feature adaptation <cit.>, and more recently classifier adaptation <cit.>. In addition to few-shot adaptation, test-time adaptation represents an even more challenging scenario where no annotated data is available. This typically requires leveraging self-supervised objectives to adapt the model, employing methods such as entropy minimisation <cit.>, contrastive learning <cit.>, pseudo-labelling <cit.> and image rotation prediction <cit.>.
§ CONCLUSION In this paper, we introduced aTLAS, a learning algorithm that leverages the rich knowledge encapsulated in task vectors through learned linear combinations with anisotropic scaling. Unlike conventional methods that learn network parameters, our approach focuses on learning coefficients on task vectors, significantly reducing the number of learnable parameters. We conducted experiments across task arithmetic, few-shot recognition, test-time adaptation and parameter-efficient fine-tuning, demonstrating the effectiveness of our method with both supervised and unsupervised objectives. In particular, we highlighted several properties of aTLAS, including its low disentanglement error, robustness against domain shift, effectiveness in low-data regimes, and complementarity with existing few-shot methods. These properties pave the way for efficient knowledge composition and transfer. Limitations. As a task vector is defined with respect to a specific pre-trained model, knowledge composition and transfer are not yet feasible across different architectures. This may become possible with suitable projections and remains part of future work. In addition, combining large numbers of task vectors can consume a substantial amount of GPU memory when training larger models. This can be mitigated by selecting a subset of task vectors, using LoRAs as task vectors, or offloading the computation of task vector composition to the CPU, at the cost of a decrease in training speed. It is also possible to perform task vector composition at a bit-width lower than floating-point precision, e.g. 4-bit. Similar features are being tested in popular deep learning frameworks such as PyTorch, and we expect the memory requirement of larger models to become less of a constraint in the future. Acknowledgements. This research is funded by the Centre of Augmented Reasoning at the Australian Institute for Machine Learning, established by a grant from the Department of Education. We would like to thank Stephen Gould for his valuable feedback on the paper. § DATASETS AND TASK VECTORS We acquire task vectors by fine-tuning CLIP <cit.> on a variety of 22 image recognition datasets: (1) Stanford Cars <cit.>, (2) DTD <cit.>, (3) EuroSAT <cit.>, (4) GTSRB <cit.>, (5) MNIST <cit.>, (6) RESISC45 <cit.>, (7) SUN397 <cit.>, (8) SVHN <cit.>, (9) CIFAR10 <cit.>, (10) CIFAR100 <cit.>, (11) ImageNet <cit.>, (12) STL10 <cit.>, (13) Food101 <cit.>, (14) Caltech101 <cit.>, (15) Caltech256 <cit.>, (16) FGVCAircraft <cit.>, (17) Flowers102 <cit.>, (18) Oxford Pets <cit.>, (19) CUB200 <cit.>, (20) PascalVOC <cit.>, (21) Country211 <cit.>, and (22) UCF101 <cit.>. Fine-tuning was conducted using the AdamW optimiser <cit.>, with a learning rate of 10^-5, a batch size of 128 and a weight decay of 0.1. Details of the datasets, additional dataset-specific hyper-parameters, and the accuracy after fine-tuning for an assortment of backbones are shown in Table <ref>. We use the same hyper-parameters for the linearised variants of the model. To shed light on the semantic relationships amongst datasets, we extract the features of all images for each dataset, and visualise the distributions as ellipses (Figure <ref>). Specifically, for each dataset, the mean μ_t ∈ ℝ^d and covariance Σ_t ∈ ℝ^d × d of the image features are computed. Principal component analysis (PCA) is used to produce a projection matrix P ∈ ℝ^d × 2 from the mean features μ_t. Subsequently, the mean and covariance with reduced dimensionality can be expressed as P^⊤ μ_t and P^⊤ Σ_t P, respectively.
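A minimal sketch of this visualisation step is given below, assuming the per-dataset image features have already been extracted with the pre-trained encoder into arrays feats[name] of shape (N_i, d); it is an illustration of the described procedure, not the authors' plotting code.

import numpy as np

means = {name: f.mean(axis=0) for name, f in feats.items()}           # mu_t, shape (d,)
covs  = {name: np.cov(f, rowvar=False) for name, f in feats.items()}  # Sigma_t, shape (d, d)

# PCA on the stacked dataset means gives a shared 2-D projection P (d x 2).
M = np.stack(list(means.values()))
M_centred = M - M.mean(axis=0)
_, _, Vt = np.linalg.svd(M_centred, full_matrices=False)
P = Vt[:2].T

# Each dataset is then drawn as an ellipse with centre P^T mu_t and covariance P^T Sigma_t P.
ellipses = {name: (P.T @ means[name], P.T @ covs[name] @ P) for name in feats}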
§ TASK NEGATION The evaluation of task negation is conducted on eight classification datasets (1–8 in Table <ref>), following previous practice <cit.>. In particular, we learn anisotropic scaling using the validation set of each dataset. We also adjust the learning rates and training epochs on the same validation set. The details are shown in Table <ref>. We report detailed task negation results for each dataset in Table <ref>. In addition, for more evidence that weight matrices learn large negative coefficients, we show a detailed visualisation of the learned coefficients in Figure <ref> and distribution of the coefficients in Figure <ref>. § TASK ADDITION Task addition is also evaluated on datasets 1–8 shown in Table <ref>. The hyper-parameters are identical to fine-tuning, except the learning rate is modified to 10^-3. We show detailed performance on each dataset in Table <ref>, where we compare our method against hyper-parameter search used in previous works <cit.>, and another variant with learned isotropic scaling. We also visualise the learned coefficients with L_1 regularisation in Figure <ref>. It can be easily observed that weight matrices, particularly those in the deeper layers, have significantly higher learned coefficients, which conforms to our observations in Figures <ref> and <ref>. §.§ Comparison against full-parameter optimization
[Figure: Task addition accuracy averaged across eight datasets (1–8) versus different percentage of validation data used (x-axis: percentage of dataset used (%); y-axis: task addition accuracy (%); legend: aTLAS, Search, Fine-tune). Standard task vectors are used.]
Since our method involves learning the coefficients, unlike previous methods <cit.> that only require a hyper-parameter search, we also compare against the direct fine-tuning approach. We fine-tune the pre-trained model on the union of the eight datasets, assuming only the validation sets are available. The results are shown in Figure <ref>. Unsurprisingly, task vector compositions, whether the coefficients are searched or learned, are less susceptible to the lack of data, as the accuracy only starts to drop with less than 35% of the data. The performance of full-parameter fine-tuning, however, drops substantially as the amount of available data decreases. §.§ Disentanglement error In addition, we provide more technical details and intuitions on the pairwise disentanglement error <cit.>, which was visualised in Figure <ref>. Specifically, we make a few changes to the formulation proposed by Ortiz-Jiménez et al. <cit.>, and evaluate the disentanglement error only with the optimal coefficients. Given two datasets D_1, D_2 and the respective task vectors τ_1, τ_2, we overload the definition of the function f to denote the mapping from the data space to the label space, and define the disentanglement error as ξ(D_1, D_2) = 𝔼_x ∈ D_1 [ δ( f(x; θ_0 + Λ^⋆_1 τ_1), f(x; θ_0 + Λ^⋆_1 τ_1 + Λ^⋆_2 τ_2) ) ] and ξ(D_2, D_1) = 𝔼_x ∈ D_2 [ δ( f(x; θ_0 + Λ^⋆_2 τ_2), f(x; θ_0 + Λ^⋆_1 τ_1 + Λ^⋆_2 τ_2) ) ], where Λ^⋆_1, Λ^⋆_2 are the coefficients learned in task addition, and δ(x_1, x_2) equals 0 if x_1 = x_2 and 1 otherwise. The error metric ξ(D_1, D_2) measures the percentage of data in dataset D_1 for which, when a second task vector τ_2 is added to the model, the predicted labels differ from those obtained using task vector τ_1 alone. As task vector τ_1 is acquired from dataset D_1, a low disentanglement error indicates that most predictions made with τ_1, which are highly likely to be correct, will be retained, thus resulting in higher performance in task addition. § FEW-SHOT LEARNING §.§ Baselines: Tip-Adapter and LP++ Two variants of Tip-Adapter <cit.> were proposed for few-shot recognition, where the weights of the adaptor are either fixed based on the features of the few-shot examples or further fine-tuned. We only study the fine-tuned variant due to its higher performance. Tip-Adapter has two hyper-parameters, which in the original paper are optimised through a hyper-parameter search on a separate validation set. This practice may not align with the principles of few-shot learning, where access to extensive validation data is typically limited. In addition, Huang et al. <cit.> note that the performance of Tip-Adapter is very sensitive to these hyper-parameters. We thus opt to learn these two hyper-parameters together with the feature adaptor through gradient descent. The learning rates for the feature adaptor and the hyper-parameters are set to 10^-3 and 10^-1, respectively. For both Tip-Adapter and LP++ <cit.>, we conduct experiments using the publicly available codebase [<github.com/fereshteshakeri/fewshot-clip-strong-baseline>]. We train both LP++ and Tip-Adapter for 300 epochs on frozen zero-shot features. We apply a cosine annealing decay for Tip-Adapter and maintain fixed learning rates for LP++ as per the official implementation. §.§ Linearised task vectors We report the average few-shot accuracy over the 22 datasets in Table <ref>, which corresponds to the results in Figure <ref>. In particular, we show results with linearised task vectors, as proposed by Ortiz-Jiménez et al. <cit.>.
As highlighted in Section <ref>, learned anisotropic scaling allows standard task vectors to achieve stronger performance than the linear variants in task addition. For few-shot recognition, we again observe that standard task vectors result in superior performance in most cases. We note, however, the exception that linear task vectors, when combined with LP++, achieve higher performance in the 1-shot setting. Nevertheless, the margin over standard task vectors is not very significant, and using standard task vectors integrated with Tip-Adapter generally yields a stronger few-shot model. §.§ Integrating state-of-the-art methods into aTLAS We use the AdamW <cit.> optimiser with a learning rate of 10^-1 and a weight decay of 10^-1. Our method by itself is trained for 10 epochs with ViT backbones and 30 epochs with ResNet backbones. We show that state-of-the-art few-shot methods can be seamlessly integrated into our method, since both Tip-Adapter and LP++ focus on the classifier, while aTLAS improves the feature representations. We experiment with two strategies to combine aTLAS with previous methods, where we either (1) train our method first and use the frozen representations to train a previous method, or (2) train the parameters of both methods jointly. Results in Table <ref> show that the joint training strategy results in higher performance, particularly in low-shot settings. We therefore adopt the joint training strategy when combining our method with Tip-Adapter. During training, we adopt different learning rates for different parameter groups, that is, 10^-1 for the learnable coefficients in aTLAS and the hyper-parameters in Tip-Adapter, and 10^-3 for the adaptor. The joint training takes 20 epochs for ViT backbones and 60 epochs for ResNet backbones, twice the number of epochs used when training aTLAS alone. On the other hand, the joint training strategy with LP++ is non-trivial, due to LP++'s super-convergence strategy being designed around frozen feature representations, which would be updated at every iteration by aTLAS. We thus use the sequential strategy to combine aTLAS and LP++. We include detailed results for each dataset with ViT-B/32 in Table <ref> and additional results with different backbones in Table <ref>, where we show that our method scales well across different datasets and backbones. §.§ Out-of-domain generalisation We show detailed results for out-of-domain generalisation over k ∈ {4, 16} shots in Table <ref>. These results correspond to those presented in Figure <ref>. aTLAS is the only method that consistently improves test accuracy over the zero-shot model on out-of-domain images. When combined with LP++ or Tip-Adapter, aTLAS can be observed to improve the out-of-domain generalisation of these methods. §.§ Relative significance of individual task vectors In this section, we examine the informativeness of a task vector across different target datasets. To this end, we apply aTLAS to each of the 22 datasets using only one task vector. For each dataset, we compute the relative accuracy improvement, that is, the accuracy improvement of aTLAS normalised by that of fine-tuning in the full parameter space. Note that aTLAS is applied under the 16-shot setting, while standard fine-tuning uses all available training data. Results are shown in Figure <ref>. We first note that certain datasets, such as EuroSAT and MNIST, are more prone to accuracy improvement, as indicated by the high percentages across their entire rows. This is most likely due to the low intrinsic dimensionality of the task. In addition, we highlight the average improvement in the last row.
Notably, certain task vectors, e.g. the ImageNet task vector, are particularly informative, while others, such as those from Flowers102 and OxfordPets, are much less so. These results illustrate the varying contributions different task vectors can have depending on the target dataset, which also motivated our subsequent efforts on careful task vector selection. §.§ Task vector budget and selection In this section, we provide details for selecting a budget of b task vectors with the feature-based and gradient-based strategies, as introduced in Section <ref>. Feature-based selection. For each dataset D_i, we compute the average image representation of the dataset using the zero-shot model, μ_i = 𝔼_x ∈ D_i [ f(x; θ_0) ]. Given a target dataset D_t, we simply compute the cosine similarity between its feature representation μ_t and that of each other dataset D_i, i ≠ t. Subsequently, the b task vectors corresponding to the datasets with the highest similarity are selected. Gradient-based selection. Given a target dataset D_t, we may directly compute the gradient with respect to the m learnable coefficients for each of the n task vectors. However, as one important motivation behind task vector selection is to reduce memory consumption, using all n task vectors to compute the gradient defeats the purpose. Therefore, we instead only load a group of b task vectors (b < n) at a time, compute the gradient with respect to their learnable coefficients, and repeat for the other groups. With this sequential computation, the gradient across different groups is not calibrated. Nevertheless, we empirically found this strategy to work well. Denote the partial derivative of the loss on dataset D_t with respect to a learnable coefficient λ^(j)_i by λ̇^(j)_i, such that λ̇^(j)_i = 𝔼_(x, y) ∈ D_t [ ∂ℒ( f(x; θ_0 + ∑_i=1^b Λ_i τ_i ), y ) / ∂λ^(j)_i ]. For the i-th task vector, we may compute its L_1 gradient norm, ‖ λ̇^(1)_i, …, λ̇^(m)_i ‖_1, and select the task vectors with the largest gradient norm. Alternatively, we may select task vectors block by block. Specifically, for the j-th parameter block, we inspect the absolute values of the partial derivatives for the corresponding coefficients, | λ̇^(j)_i |, and select the task vectors with the highest absolute values. This process is repeated for each parameter block, thus allowing different parameter blocks to have different selections. Crucially, for low budgets, particularly b=1, this enables our method to effectively exploit more task vectors than the budget specifies. The impact of this can be observed in Table <ref> (corresponding to Figure <ref>): blockwise selection significantly outperforms the other methods when the budget is low. §.§ LoRAs as task vectors We fine-tune LoRAs for ViT-B/32 using the LoRA-Torch <cit.> library with ranks 4, 16 and 64. We stop at rank 64 as we do not observe improvements beyond it. We train LoRAs on the attention and MLP layers and use the same settings as for full fine-tuning but with a learning rate of 10^-3. Table <ref> shows additional results using LoRAs as task vectors. We study the effect of fine-tuning the LoRA task vectors on the attention layers only (as done in the original LoRA paper <cit.>) or on the MLPs. Although the original LoRA paper recommends training on the attention layers only <cit.>, we observe that training on the MLP layers is important to produce strong LoRA task vectors. §.§ Gradient-free optimisation An alternative way to save memory during training is to utilise gradient-free methods to learn the coefficients.
We follow previous work on the combination of LoRAs <cit.> and use the nevergrad <cit.> library. We observe a memory usage reduction of 60%, from 10GB to 4GB, calculated using a dedicated PyTorch function[<https://pytorch.org/docs/stable/generated/torch.cuda.memory_allocated.html>]. Results for few-shot recognition are summarised in Table <ref>. We show that although gradient-free optimisation improves upon the zero-shot model, the performance quickly plateaus as the amount of data increases. In addition, learning anisotropic scaling results in worse performance, most likely due to the relatively high number of parameters. § UNSUPERVISED FIXMATCH We provide more details on the Unsupervised FixMatch (UFM) approach in this section. FixMatch <cit.> utilises a labelled set to guide training, which is given as part of the semi-supervised learning protocol, whereas we produce a class-balanced “labelled” set from unlabelled images. Given a target dataset D_t consisting of N unlabelled images, we first rank the examples by the prediction scores from the zero-shot model across C classes. We then select the top min(N/C, 100) examples, that is, at most 100 examples per class, as a trusted set in the absence of a labelled set. The standard cross-entropy loss is applied to the trusted set. For the rest of the unlabelled images, we use a weakly augmented view of an image (Open-CLIP <cit.> validation augmentations) to produce pseudo-labels, and incur a loss on the strongly augmented view (Tip-Adapter <cit.> augmentations). Denote the predictions made by the network for a weakly augmented image and for its strongly augmented view by ŷ and ŷ', respectively. The unsupervised loss can then be expressed as ℓ_u(ŷ, ŷ') = - 1( max(σ(ŷ)) > ω ) σ(ŷ)^⊤ log(ŷ'), with σ(ŷ) = ŷ^0.5 / ‖ ŷ^0.5 ‖_1, where 1(·) denotes the indicator function, σ(·) performs re-normalisation with adjusted temperature scaling, and ω is a confidence threshold that is linearly adjusted from 0.9 to 1 during training. The trusted set is re-estimated at the beginning of each epoch to account for the improving accuracy of the model. In training, images in the trusted set are over-sampled to constitute one fourth of each batch, as this practice prevents the model from diverging due to confirmation bias <cit.>. § DETAILS OF aTLAS × K VARIANTS Dividing a parameter block into K random partitions allows us to introduce more learnable coefficients to each block, thus scaling up our method flexibly. One drawback of this approach, however, is that the masks for the partitions have to be stored in memory, resulting in a linear memory increase with respect to the size of the parameter block and the value of K. To reduce the memory consumption of the aTLAS × K variants, we only apply them to LoRA task vectors. Nevertheless, these memory requirements could most likely be reduced by exploiting sparse matrices or memory-efficient matrix indexing techniques, which we plan to investigate in the future.
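A minimal sketch of the random partitioning behind the aTLAS × K variants is given below, reusing the composition setup from the earlier snippets; the helper names are illustrative, and the integer masks created here correspond to the per-block memory overhead discussed above.

K = 5  # number of partitions per block (illustrative)

def make_partition_masks(theta0, K, seed=0):
    # Randomly assign every entry of each parameter block to one of K partitions.
    g = torch.Generator().manual_seed(seed)
    return {name: torch.randint(0, K, p.shape, generator=g) for name, p in theta0.items()}

def compose_k(theta0, task_vectors, coeffs, masks):
    # coeffs has shape (n, m, K): K coefficients per parameter block instead of one.
    theta = {}
    for j, name in enumerate(theta0):
        delta = torch.zeros_like(theta0[name])
        for i, tau in enumerate(task_vectors):
            scale = coeffs[i, j][masks[name]]   # per-partition coefficients, broadcast to the block shape
            delta = delta + scale * tau[name]
        theta[name] = theta0[name] + delta
    return theta

coeffs = torch.zeros(len(task_vectors), len(theta0), K, requires_grad=True)
masks = make_partition_masks(theta0, K)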
http://arxiv.org/abs/2407.02995v1
20240703104941
Closed geodesics and the first Betti number
[ "Gonzalo Contreras", "Marco Mazzucchelli" ]
math.DS
[ "math.DS", "math.DG", "math.SG", "58E10, 53C22" ]
§ ABSTRACT We prove that, on any closed manifold of dimension at least two with non-trivial first Betti number, a C^∞ generic Riemannian metric has infinitely many closed geodesics, and indeed closed geodesics of arbitrarily large length. We derive this existence result combining a theorem of Mañé together with the following new theorem of independent interest: the existence of minimal closed geodesics, in the sense of Aubry-Mather theory, implies the existence of a transverse homoclinic, and thus of a horseshoe, for the geodesic flow of a suitable C^∞-close Riemannian metric. § INTRODUCTION §.§ Background A long standing conjecture in Riemannian geometry asserts that any closed Riemannian manifold of dimension at least two has infinitely many closed geodesics. This conjecture holds for any simply connected closed Riemannian manifold whose rational cohomology ring is not generated by a single element, thanks to a combination of results of Gromoll and Meyer <cit.> and Vigué-Poirrier and Sullivan <cit.>. For non-simply connected Riemannian manifolds, the conjecture was confirmed by Bangert and Hingston <cit.> for closed manifolds whose fundamental group is infinite abelian (the most difficult case being ), and later generalized by Taimanov <cit.> to larger classes of closed manifolds, including those with infinite solvable fundamental group. The conjecture also holds for any Riemannian surface, and most notably for any Riemannian 2-sphere, thanks to a combination of results of Bangert <cit.> and Franks <cit.> or, alternatively, Hingston <cit.>. To the best of the authors' knowledge, these are the last results confirming the conjecture for any Riemannian metric on certain classes of manifolds. Among the remaining cases, the conjecture is still open for closed manifolds of dimension at least three having the rational cohomology of a compact rank-one symmetric space S^n, P^n, H P^n, or CaP^2. A result of Hingston <cit.>, later reproved by Rademacher <cit.> with a different argument, asserts that a C^4-generic Riemannian metric on any simply connected closed manifold with the rational cohomology of a compact rank-one symmetric space has infinitely many closed geodesics. When the fundamental group is infinite and non-abelian, C^4-generic existence results were proved only for specific classes of closed manifolds, see <cit.> and references therein. The general case of closed manifolds with infinite non-abelian fundamental group is still open. §.§ Main results In this paper, we prove a new existence result for closed geodesics and for homoclinics to closed geodesics, by means of Aubry-Mather theory <cit.>. We provide the statements after recalling some relevant definitions. We consider a closed Riemannian manifold (M,g) of dimension at least two with non-zero first Betti number. This latter condition is equivalent to the non-vanishing of the first de Rham cohomology group H^1(M;). For each closed 1-form σ on M, we can associate to each W^1,2 curve γ:[0,τ]→ M an action _σ(γ)=1/τ∫_0^τ(12γ̇(t)_g^2 - σ(γ̇(t)) ) dt. When γ is a loop, meaning that γ(0)=γ(τ), the value A_σ(γ) does not depend on the specific choice of σ, but only on the cohomology class [σ]∈ H^1(M;). From now on, in order to simplify the notation, we will omit the brackets and write σ for the cohomology class as well.
Throughout this paper, by a geodesic γ:→ M we will always mean a non-constant solution of ∇_tγ̇≡0, where ∇_t is the Levi-Civita covariant derivative of (M,g). A closed geodesic is a geodesic γ:→ M such that γ=γ(τ_γ+·) for some minimal period τ_γ>0. We associate to any such closed geodesic the Riemannian length (γ):=τ_γγ̇_g and the action _σ(γ|_[0,τ_γ]). A closed geodesic γ is called minimal (in the sense of Aubry-Mather theory <cit.>) when, for some non-zero σ∈ H^1(M;), we have _σ(γ|_[0,τ_γ]) = inf_ζ_σ(ζ), where the infimum ranges over all τ>0 and W^1,2 loops ζ:[0,τ]→ M, ζ(0)=ζ(τ). We will say that γ is σ-minimal or (g,σ)-minimal if we need to specify the co­homology class and the Riemannian metric. Any closed geodesic γ lifts to a periodic orbit γ̇ of the geodesic flow on the sphere tangent bundle of radius γ̇_g. When γ is hyperbolic, meaning that γ̇ is a hyperbolic periodic orbit of the geodesic flow, it may admit transverse homoclinics, that is, geodesics distinct from γ and whose lifts to the sphere tangent bundle lie on transverse intersection points of the stable and unstable manifolds of γ̇. By a classical result from hyperbolic dynamics <cit.>, the presence of a transverse homoclinic implies the existence of a horseshoe for the geodesic flow. This further implies that the geodesic flow has positive topological entropy and exponential growth of the periodic orbits, and in particular that there are infinitely many closed geodesics of arbitrarily large length. The following is the main result of this article. Let (M,g_0) be a closed Riemannian manifold of dimension at least two. If there exists a minimal closed geodesic γ, then there exists a Riemannian metric g arbitrarily C^∞-close to g_0 such that γ is a hyperbolic closed geodesic of g with a transverse homoclinic. The existence of a transverse homoclinic to a hyperbolic closed geodesic after a C^2-small perturbation of the Riemannian metric was established by the first author in <cit.> for those closed manifolds of dimension at least two on which a C^2-generic Riemannian metric has infinitely many closed geodesics[In the main theorems in <cit.>, the requirement that a C^2-generic Riemannian metric on the considered closed manifold must have infinitely many closed geodesics does not appear due to an omission.]. In particular, the theorem holds for simply connected closed manifolds (which do not admit minimal closed geodesics). Our Theorem <ref>, instead, employs in an essential way a minimal closed geodesic, and achieves the transverse homoclinic with a perturbation of the Riemannian metric in the finer C^∞ topology. In the general setting of Tonelli Hamiltonians, but under the stronger assumption that the first Betti number of the underlying closed manifold M is at least two, the analogous of Theorem <ref> was established by the first author and Paternain <cit.> (in the Tonelli setting, the Hamiltonian function is perturbed with a potential). In a similar spirit, results on homoclinics were also obtained by Bolotin <cit.> using different methods. The essential novelty of our Theorem <ref> is that it allows the first Betti number of M to be equal to one. In particular, the hardest case is when the fundamental group π_1(M) is isomorphic to , for which the quest for homoclinics requires a min-max scheme inspired by the above mentioned result of Bangert and Hingston <cit.>. 
If there are no σ-minimal closed geodesics for some non-zero cohomology class σ, a result of Mañé <cit.> asserts that, even without perturbing the Riemannian metric, there exist infinitely many closed geodesics of arbitrarily large length. This, combined with Theorem <ref>, implies the following corollary. We denote by ^k(M) the space of smooth Riemannian metrics on M, endowed with the C^k topology. Let M be a closed manifold of dimension at least two with non-trivial first Betti number. Then, for each 2≤ k≤∞, there exists an open and dense subset of ^k(M) such that every Riemannian metric therein admits infinitely many closed geodesics of arbitrarily large length. We will actually prove a slight generalization of Theorem <ref>, allowing the assumptions to be satisfied only by a finite cover of the closed manifold M (Theorem <ref>), and derive a stronger version of the latter corollary (Corollary <ref>). §.§ Organization of the paper In Section <ref>, after recalling the needed background from Aubry-Mather theory, we prove Theorem <ref> and Corollary <ref>. In the Appendix, we prove a perturbation result for closed geodesics that will be needed in the proof of Theorem <ref>. § AUBRY-MATHER THEORY §.§ Preliminaries The proof of Theorem <ref> requires some tools from Aubry-Mather theory <cit.>. Let (M,g) be a closed Riemannian manifold of dimension at least two. We consider the geodesic flow ϕ^t=ϕ_g^t:TM→ TM defined on the whole tangent bundle. Its orbits have the form ϕ^t(γ̇(0))=γ̇(t), where γ:→ M is a geodesic or a constant curve. We denote by the space of probability measures μ on TM that are closed, meaning that ∫_TM df dμ=0, ∀ f∈ C^1(M). Within , we have two important classes of measures: * All those probability measures μ on TM that are invariant under the geodesic flow, i.e. ϕ^t_*μ=μ for all t∈. * All those probability measures μ_γ uniformly distributed along a continuous and piecewise smooth loop γ:[0,τ]→ M, γ(0)=γ(τ), i.e. ∫_TM F dμ_γ := 1/τ∫_0^τ F(γ̇(t)) dt, ∀ F∈ C^0(TM). Any μ∈ has a rotation vector ρ(μ)∈ H_1(M;), which is defined via the duality with de Rham cohomology classes σ∈ H^1(M;) by ⟨σ,ρ(μ)⟩ = ∫_TMσ(v) dμ(v). Here, as well as later on, within the integral we chose an arbitrary closed 1-form representing σ, which we still denoted by σ with a slight abuse of notation. Since μ is a closed measure, the value of the integral is independent of the choice of such a closed 1-form. For each σ∈ H^1(M;), we consider the Lagrangian action functional _σ=_g,σ:→(-∞,∞], _σ(μ) = ∫_TM(12v_g^2 - σ(v) ) dμ(v). Notice that _σ(μ)=_0(μ)-⟨σ,ρ(μ)⟩. The notation for the action _σ is consistent with the one introduced in (<ref>): for each continuous and piecewise smooth loop γ:[0,τ]→ M, γ(0)=γ(τ), with associated probability measure μ_γ, we have _σ(γ)=_σ(μ_γ). The action functional _σ is bounded from below and achieves its minimum on . Any minimizer turns out to be invariant under the geodesic flow, and is called a σ-minimal measure (or a (g,σ)-minimal measure if we need to specify the Riemannian metric). Mather alpha function α=α_g:H^1(M;)→ is defined by α(σ):=-min__σ. Alternatively, instead of minimizing over the space of closed measures, Mather alpha function α:H^1(M;)→ is also characterized by α(σ)=-inf_γ_σ(γ), where the infimum ranges over all τ≥0 and W^1,2 loops γ:[0,τ]→ M, γ(0)=γ(τ). If H^1(M;) is non-trivial, the function α is non-negative, convex, superlinear, and satisfies α(0)=0. These properties hold more generally for the alpha function associated to any Tonelli Lagrangian. 
In the specific case of geodesic flows, we also have the following. The origin is a strict local minimum of Mather alpha function, i.e. α(σ)>0 for all non-zero σ∈ H^1(M;). Consider a non-zero cohomology class σ∈ H^1(M;), and fix any smooth loop γ:[0,τ]→ M, γ(0)=γ(τ) such that ∫_γσ<0. Since this latter integral and the length (γ) := ∫_0^τγ̇(t)_g dt are independent of the parametrization of γ, we can assume that the speed γ̇_g is constant and sufficiently small so that 1/2(γ)γ̇_g<| ∫_γσ|. This implies _σ(γ) = 1/τ( 1/2(γ)γ̇_g - ∫_γσ) < 0, and therefore α([σ])>0 according to (<ref>). Notice that, for the zero cohomology class σ=0, the infimum in (<ref>) is always a minimum, and it is achieved only by the constant curves. Instead, for each non-zero σ∈ H^1(M;), a W^1,2_loc periodic curve γ:→ M of minimal period τ_γ is a σ-minimal closed geodesic if and only if γ|_[0,τ_γ] achieves the minimum in (<ref>). In this case, in particular γ is smooth, and the measure μ_γ associated with γ|_[0,τ_γ] is σ-minimal. Let π:TM→ M be the base projection of the tangent bundle. Mather's graph theorem <cit.> asserts that, for any σ-minimizing measure μ, the restriction π|_(μ) is an injective bi-Lipschitz map onto its image. Moreover, by a theorem due to Carneiro <cit.>, (μ) is contained in the sphere tangent bundle S^rM={v∈ TM | v_g=r}, for r^2/2=α(σ). In particular, any σ-minimal closed geodesic γ has speed γ̇_g≡ r and is simple, i.e. the restriction γ|_[0,τ_γ) is an injective map, where τ_γ>0 is the minimal period of γ. The Riemannian metric g and a closed 1-form σ on M define a Tonelli Lagrangian L:TM→ and a dual Tonelli Hamiltonian H:H→ by L(v)=12 v_g^2 - σ(v), H(p)=12p+σ_g^2. These functions are related by the Fenchel inequality H(p)+L(v)≥ p(v). A theorem due to Fathi and Siconolfi <cit.>, asserts that there exists a C^1 function u:M→ satisfying the Hamilton-Jacobi inequality H∘ du≤α(σ). §.§ Proofs of the theorems Before carrying out the proof of Theorem <ref>, for the reader's convenience we first provide the short proof of a theorem due to Mañé <cit.> in the special case of geodesic flows, which we will need to derive Corollary <ref>. We say that a closed geodesic γ of (M,g) has non-zero real homology when [γ|_[0,τ_γ]]≠0 in H_1(M;). Let (M,g) be a closed Riemannian manifold. If there exists a non-zero σ∈ H^1(M;) not admitting any σ-minimal closed geodesic, then there exist infinitely many closed geodesics of arbitrarily large length and non-zero real homology. Let μ be a σ-minimal measure. By Poincaré recurrence theorem and Birkhoff ergodic theorem, there exists v∈(μ) that is recurrent for the geodesic flow ϕ^t and regular for the Birkhoff average. Namely, there exists a sequence of positive real numbers τ_n→∞ such that ϕ^τ_n(v)→ v, and lim_n→∞1/τ_n∫_0^τ_n F(ϕ^t(v)) dt = ∫_TM F dμ, ∀ F∈ L^1(TM,μ). We fix a quantity δ>0 such that δv_g is smaller than the injectivity radius (M,g), and consider the geodesic arc η_n:[0,τ_n-δ]→ M, η_n(t)=π(ϕ^t(v)). For all n large enough, there exists a unique geodesic arc ζ_n:[0,δ]→ M of length smaller than (M,g) joining η_n(τ_n-δ) and η_n(0)=π(v). Notice that the action _σ(ζ_n) is uniformly bounded from above for all n. The concatenation η_n*ζ_n:[0,τ_n]→ M is a loop with action _σ(η_n*ζ_n) = (τ_n-δ)_σ(η_n) +δ_σ(ζ_n)/τ_n_n→∞_σ(μ)=-α(σ)<0. Let Ω_n be the space of W^1,2 loops ζ:[0,τ_n]→ M, ζ(0)=ζ(τ_n), and γ_n∈Ω_n a loop that minimizes _σ|_Ω_n, i.e. _σ(γ_n)≤_σ(ζ), ∀ζ∈Ω_n. In particular, _σ(γ_n)≤_σ(η_n*ζ_n). 
Each γ_n is either a constant curve (with action _σ(γ_n)=0) or a closed geodesic (namely a geodesic loop such that γ̇_n(0)=γ̇_n(τ_n)≠0). Up to extracting a subsequence, the probability measure μ_γ_n converges in the weak-* topology to an invariant probability measure ν, and so do the corresponding actions _σ(γ_n)→_σ(ν). By (<ref>) and (<ref>), we infer _σ(ν)≤_σ(μ), and therefore _σ(ν)=_σ(μ). Namely, ν is a σ-minimal measure. Since σ≠0, Lemma <ref> implies that α(σ)>0. By ⟨σ,ρ(ν) ⟩ = _0(ν) - _σ(ν) = _0(ν) + α(σ) ≥α(σ) > 0, we infer that ρ(ν)≠0. Since by assumption there are no σ-minimal closed geodesics, we have the strict inequality _σ(γ_n)>_σ(ν). This, together with the convergence _σ(γ_n)→_σ(ν), implies that the family γ_n, for n≥0, contains infinitely many closed geodesics. Since [γ_n]→ρ(ν)≠0, the closed geodesics γ_n have non-zero real homology for all n large enough. Since the support of ν is contained in the sphere tangent bundle S^rM for r^2/2=α(σ), the weak-* convergence μ_γ_n→ν implies that γ̇_n_g→ r. Let τ_γ_n≤τ_n be the minimal period of the closed geodesic γ_n, which is the minimal positive number such that γ_n(0)=γ_n(τ_γ_n) and γ̇_n(0)=γ̇_n(τ_γ_n). The sequence τ_γ_n must diverge, for otherwise γ_n would converge to a σ-minimal closed geodesic. Therefore the lengths (γ_n)=τ_γ_nγ̇_n_g diverge. We will infer our main Theorem <ref> from the following statement, which under the same assumptions provide a (not necessarily transverse) homoclinic after an explicit conformal perturbation of the Riemannian metric. The transversality of the homoclinic will then be achieved by invoking a perturbation result of Petroll <cit.>. Let (M,g_0) be a closed Riemannian manifold of dimension at least two, with a minimal closed geodesic γ. Let ρ:M→[0,∞) be any smooth function such that ρ(x)=0 and d^2ρ(x)[v,v]>0 for all x∈γ and v∈ T_xM∖{0} orthogonal to γ, and ρ(y)>0 for all y∈ M∖γ. Then γ is a hyperbolic closed geodesic of e^ρ g_0 with a homoclinic. Let γ be a (g_0,σ)-minimal closed geodesic, and as usual we denote by τ_γ>0 its minimal period. In particular, γ is a simple closed geodesic without conjugate points. We set x_0:=γ(0)=γ(τ_γ). For each integer n≥1, we will see the loop γ|_[0,nτ_γ] as a representative of an element of the fundamental group π_1(M,x_0). By the already mentioned theorem of Fathi and Siconolfi <cit.>, there exists a C^1 function u:M→ that satisfies the Hamilton-Jacobi inequality H_0∘ du≤α_g_0(σ), where H_0:T^*M→ is the Tonelli Hamiltonian dual to the Tonelli Lagrangian L_0:TM→, L_0(v)=12 v_g_0^2 - σ(v). This, together with the Fenchel inequality H_0(p)+L_0(v)≥ p(v), implies F_0(v):=L_0(v) - du(π(v))v + α_g_0(σ)≥0, ∀ v∈ T_xM, where π:TM→ M is the base projection. Since ∫_0^τ_γ F_0(γ̇(t)) dt = τ_γ( _g_0,σ(γ) + α_g_0(σ) ) = 0, we have F_0∘γ̇≡0. Let ρ:M→[0,∞) be any smooth function such that ρ(x)=0 and d^2ρ(x)[v,v]>0 for all x∈γ and v∈ T_xM∖{0} orthogonal to γ, and ρ(y)>0 for all y∈ M∖γ. By Proposition <ref>, γ is a hyperbolic closed geodesic for the Riemannian metric g:=e^ρ g_0. Since _g,σ≥_g_0,σ and _g,σ(γ)= _g_0,σ(γ), we infer that γ is a (g,σ)-minimal closed geodesic, and therefore α:=α_g_0(σ)=α_g(σ)=-_g,σ(γ)>0. We introduce the non-negative continuous Tonelli Lagrangian F:TM→[0,∞), F(v):=12 v_g^2 - σ(v) - du(x)v + α. Notice that F is identically equal to α>0 along the zero section, and F(v)>0 for all v∈ TM such that π(v)∉γ. Moreover, for each t∈[0,τ_γ], we have F(rγ̇(t))=0 if and only if r=1. 
For each compact interval [τ_1,τ_2]⊂ and for each W^1,2 curve ζ:[τ_1,τ_2]→ M, we set a(ζ) := ∫_τ_1^τ_2 F(ζ̇(t)) dt≥0. Notice that a(γ|_[0,τ_γ])=0. The following lemma is crucial, and requires two distinct proofs for the cases π_1(M,x_0)≇ and π_1(M,x_0)≅. From now on, all the geodesics will be associated to the Riemannian metric g, unless we specify otherwise. Two geodesics are said to be geometrically distinct when their images into the Riemannian manifold are distinct. There exists a geodesic ζ:→ M geometrically distinct from γ such that ∫_-∞^∞ F(ζ̇(t)) dt < ∞. Postponing the proof of this lemma, let us first complete the proof of Theorem <ref>. We shall show that ζ is a homoclinic to the closed geodesic γ. For each ϵ>0, we denote by N_ϵ⊂ M the open tubular neighborhood of γ of radius ϵ>0, measured with respect to the Riemannian metric g. Since F is strictly positive outside TN_ϵ and coercive, in particular we have δ_ϵ:=min_T(M∖ N_ϵ) F > 0. Assume that, on some interval [t_1,t_2]⊂, the geodesic arc ζ|_[t_1,t_2] crosses the shell N_2ϵ∖ N_ϵ, so that it has length ζ̇_g(t_2-t_1)≥ϵ and action a(ζ|_[t_1,t_2]) ≥ (t_2-t_1)δ_ϵ≥ϵ δ_ϵ/ζ̇_g=:ρ_ϵ. Since F is continuous and non-negative, there exists s_0>0 such that a(ζ|_[-s_0,s_0]) > ∫_-∞^∞ F(ζ̇(t)) dt -ρ_ϵ, and s_1>s_0 such that F(ζ(s_1))<δ_ϵ. Therefore ζ(s_1)∈ N_ϵ. The inequalities (<ref>) and (<ref>) imply that ζ(t)∈ N_2ϵ for all t>s_1. Analogously, ζ(-t)∈ N_2ϵ for all t>0 large enough. Overall, by sending ϵ→0, this argument shows that the distance of ζ(t) to the closed geodesic γ tends to 0 as |t|→∞. Therefore, ζ̇ must have the α-limit and ω-limit ζ̇=r_αγ̇, ζ̇=r_ωγ̇, where |r_α|=|r_ω|=ζ̇_g/γ̇_g. Since F(rγ̇(t))>0 for all t∈[0,τ_γ] and r≠1, the finiteness of the integral (<ref>) implies r_α=r_ω=1. Therefore ζ̇=ζ̇=γ̇, that is, ζ is a homoclinic to γ. Let N be an open tubular neighborhood of the simple closed geodesic γ. We denote the inclusion by i:N↪ M. Since N is homotopy equivalent to a circle, it has fundamental group π_1(N,x_0)≅. Since π_1(M,x_0)≇ and H^1(M;)≠0, the homomorphism i_*:π_1(N,x_0)→π_1(M,x_0) is not surjective (indeed this latter condition would be enough to carry out the remaining of the proof). We set G:=i_*(π_1(N,x_0)), and fix a homotopy class h∈π_1(M,x_0)∖ G. For each T>0, consider the loop space Ω_T := {ζ:[0,τ]^W^1,2 M | ζ(0)=ζ(τ)=x_0, 0<τ≤ T, [ζ]∈ GhG }. Namely, Ω_T consists of those loops based at x_0, defined on an interval of length at most T, and representing a non-trivial element of the fundamental group π_1(M,x_0) of the form [γ]^jh[γ]^k for some j,k∈. The functional a|_Ω_T achieves its minimum at some geodesic loop ζ_T:[0,t_T]→ M, with 0<t_T≤ T, which is not necessarily unique. We choose one such minimizer with the highest possible period t_T, so that the function T↦ t_T is non-decreasing. We fix a constant c>0 large enough so that F(v)≥14v_g^2-c for all v∈ TM, and therefore a(ζ_T)≥ t_T(14ζ̇_T_g^2-c). Since the function T↦ a(ζ_T) is non-increasing, we have a_∞ := lim_T→∞ a(ζ_T)<∞, and for all T≥1 we have 1/4ζ̇_T_g^2 ≤a(ζ_T)/t_T+c≤a(ζ_1)/t_1+c. Since [ζ_T]∈ G h G and h∉G, we have that [ζ_T]∉G. Therefore, there exists s_T such that ζ_T(s_T)∉N. The uniform bound (<ref>) allows us to extract a diverging sequence T_n→∞ such that, if we set ζ_n:=ζ_T_n, s_n:=s_T_n, and t_n:=t_T_n, we have x_n:=ζ_n(s_n)→ x, v_n:=ζ̇_n(s_n)→ v. Let ζ:→ M be the geodesic such that ζ(0)=x and ζ̇(0)=v. We claim that lim_n→∞min{s_n,t_n-s_n}→∞. Assume by contradiction that s_n is uniformly bounded from above. 
Up to extracting a subsequence, we have s_n→ s>0. Since a(γ|_[0,τ_γ])=0, we have a(γ|_[0,τ_γ]*ζ_n|_[0,s_n])=a(ζ_n|_[0,s_n]), where * denotes the concatenation of paths. Notice that γ|_[0,τ_γ]*ζ_n|_[0,s_n] is not a geodesic, since it has a corner at γ(τ_γ)=ζ_n(0). For each ϵ>0, we introduce the space Υ_n,ϵ:={λ:[-ϵ,ϵ]^W^1,2M | λ(-ϵ)=γ(τ_γ-ϵ), λ(ϵ)=ζ_n(ϵ) }. Since the geodesic arcs ζ_n|_[0,s_n] converge to ζ(·-s)|_[0,s] in the C^∞ topology on every compact subinterval of [0,s), we can fix ϵ∈(0,τ_γ) small enough so that a|_Υ_n,ϵ has a unique minimizer λ_n, which is a geodesic arc contained in the tubular neighborhood N, and we have δ:=inf_n∈( a(γ|_[τ_γ-ϵ,τ_γ]*ζ_n|_[0,ϵ])-a(λ_n) ) >0. The concatenation κ_n:=γ|_[0,τ_γ-ϵ]*λ_n*ζ_n|_[ϵ,t_n]∈Ω_t_n+τ_γ represents the same element of the fundamental group as γ*ζ_n, and therefore [κ_n]= [γ][ζ_n]∈ GhG. However, if n is large enough so that |a(ζ_n)-a_∞|<δ, we have a(κ_n) ≤ a(ζ_n)-δ < a_∞, which contradicts the fact that min a|_Ω_t_n+τ_γ≥ a_∞. This proves that s_n→∞, and an analogous argument implies that t_n-s_n→∞. For each s>0, we have a(ζ|_[-s,s]) = lim_n→∞ a(ζ_n|_[s_n-s,s_n+s]) ≤lim_n→∞ a(ζ_n) =a_∞, and therefore ∫_-∞^∞ F(ζ̇(t)) dt ≤ a_∞. Since M has dimension at least two and fundamental group π_1(M,x_0)≅, there exists a minimal integer k≥1 such that the higher homotopy group π_k+1(M,x_0)≠0 is non-trivial. Indeed, otherwise any continuous map β:S^1→ M representing a generator of π_1(M,x_0) would be a homotopy equivalence, whereas a closed manifold of dimension at least two cannot be homotopy equivalent to a manifold of dimension one. In order to simplify the notation, we can assume without loss of generality that the simple closed geodesic γ has unit speed γ̇_g≡1 and minimal period τ_γ=1. Let τ be a positive integer that we will fix soon. For each integer n≥1, we set γ_n:=γ|_[0,nτ], and consider the based and free loop spaces Ω_n := {ζ:[0,nτ]^W^1,2 M | ζ(0)=ζ(nτ)=x_0}, Λ_n := {ζ:[0,nτ]^W^1,2 M | ζ(0)=ζ(nτ)}. The concatenation with γ_1 defines a homotopy equivalence i_n:Ω_n →Ω_n+1, ζ↦γ_1*ζ. We denote by j_n:Ω_n↪Λ_n the inclusion. A topological result of Bangert and Hingston <cit.> implies that, for a suitable value of the integer τ, there exist non-trivial homotopy classes h_n∈π_k(Ω_n,γ_n) such that h_n+1=i_n*h_n, and their images q_n:=j_n*h_n∈π_k(Λ_n,γ_n) are nontrivial as well. We fix a basepoint z_0 in the unit sphere S^k. The representatives of h_n are continuous maps of pointed spaces of the form Γ:(S^k,z_0)→(Ω_n,γ_n), and analogously the representatives of q_n are continuous maps of pointed spaces of the form Γ:(S^k,z_0)→(Λ_n,γ_n). We define the min-max values b_n := inf_[Γ]=h_nmax a∘Γ, a_n := inf_[Γ]=q_nmax a∘Γ. Since q_n=j_n*h_n, we have b_n≥ a_n. For each representative Γ of h_n, the composition i_n∘Γ is a representative of h_n+1, and since a(γ_1)=0, we have a(i_n∘Γ(z))=a(γ_1)+a(Γ(z))=a(Γ(z)), ∀ z∈ S^k. This implies b_n≥ b_n+1. For each ζ∈Ω_n, we denote by ζ∈Ω_n the same geometric curve parametrized proportionally to arc-length, so that ∫_0^tnτζ̇(s)_g ds = t ∫_0^nτζ̇(s)_g ds, ∀ t∈[0,1]. The map u_n:Λ_n→Λ_n, u_n(ζ)=ζ is continuous and homotopic to the identity (as it was proved by Anosov <cit.>). Moreover, u_n(Ω_n)⊂Ω_n, and we have a(ζ)≥ a(u_n(ζ)) for all ζ∈Λ_n. This shows that in the min-max expressions (<ref>) we can equivalently restrict the infima over maps that further satisfy Γ=u_n∘Γ, that is, such that each loop Γ(z) is parametrized proportionally to arc-length. We fix a constant c>0 large enough so that F(v)≥14v_g^2-c, ∀ v∈ TM. 
For each ζ∈Λ_n parametrized proportionally to arc-length, since a(ζ)≥ nτ(14ζ̇_g^2-c), we have the a priori bound 14ζ̇_g^2 ≤a(ζ)/nτ+c ≤ a(ζ)+c. Let N⊂ M be an open tubular neighborhood of γ. For each representative Γ of q_n, there exists a point z∈ S^k such that the loop Γ(z) is not entirely contained in N. Indeed, consider the free loop space Υ_n:={ζ:[0,nτ]^W^1,2 N | ζ(0)=ζ(nτ) }. Since N is homotopy equivalent to a circle, the evaluation map :Λ_n→ M, (ζ)=ζ(0) restricts to a homotopy equivalence |_:→ N, where is the connected component of Υ_n containing γ_n. In particular, induces an isomorphism _*:π_k(,γ_n)^≅π_k( N , x_0) ≅{[ , k=1,; 0, . ]. Since ∘ j_n≡ x_0 and q_n=j_n*h_n, we infer that _* q_n = (∘ j_n)_* h_n = 0. Since the homotopy class q_n is non-zero, no representative Γ of q_n cannot have its image contained in . We claim that inf_n a_n > 0. Indeed, let N_0⊂ M be another open tubular neighborhood of γ whose closure is contained in N, and let ρ>0 be the minimum distance from points of ∂ N_0 to points of ∂ N. In particular, any smooth curve that crosses the shell N∖ N_0 must have length at least ρ. Here, the distances and the lengths are measured with respect to the Riemannian metric g. Since F is strictly positive outside N_0 and is coercive, we have f:= min{F(v) | π(v)∈M∖ N_0} > 0. Let Γ=u_n∘Γ be a representative of q_n that is not too far from being optimal, meaning that max_z∈ S^k a(Γ(z))≤ a_n+1. We know that there exists z∈ S^k such that the loop ζ:=Γ(z) is not entirely contained in N. If ζ intersects the smaller tubular neighborhood N_0, then there exists an interval [t_0,t_1]⊂[0,n] such that ζ|_[t_0,t_1] has length ζ̇_g(t_1-t_0)≥ρ and is contained in M∖ N_0; the a priori bound (<ref>), together with (<ref>) and (<ref>), implies t_1-t_0 ≥ρ/ζ̇_g≥ρ/√(4(b_1+1+c)), and therefore a(ζ) ≥ a(ζ|_[t_0,t_1]) ≥ (t_1-t_0)f ≥ρ f/√(4(b_1+1+c)). If instead ζ does not intersect N_0, then a(ζ)≥ n τ f ≥ f. Standard variational methods imply that a_n is a critical value of a|_Λ_n. Therefore a_n=a(ζ_n) for some closed geodesic ζ_n∈Λ_n contained in the connected component of γ_n. In particular, ζ_n is a geodesic loop such that ζ̇_n(0)=ζ̇_n(nτ), and therefore from now on we will see it as an nτ-periodic geodesic ζ_n:→ M. Since a_n>0, we have that ζ_n is geometrically distinct from γ. We now consider the unit-sphere tangent bundle SM = {v∈ TM | v_g=1}. Since the geodesic flow on SM is expansive near the hyperbolic periodic orbit γ̇ (see, e.g., <cit.>), there exists a neighborhood U⊂ SM of γ̇ such that, for each n≥1, there exists t_n∈[0,nτ] such that ζ̇_n(t_n)/ζ̇_n(t_n)_g∉U. Let v_n:=ζ̇_n(t_n) be the corresponding tangent vector. The a priori bound (<ref>) implies that the sequence v_n_g is uniformly bounded from above. We claim that the sequence v_n_g is also uniformly bounded from below by a positive constant. Indeed, since the continuous Tonelli Lagrangian F is strictly positive along the zero section of TM, there exists r>0 small enough so that δ:= min_v_g≤ r F(v)>0. If we had v_n_g≤ r for some integer n> b_1/(δτ), then we would get the contradiction a_n=a(ζ_n)≥δ nτ> b_1 ≥ b_n ≥ a_n. Overall, we obtained a compact interval [r_1,r_2]⊂(0,∞) such that r_1≤v_n_g≤ r_2 for all integers n≥1. Therefore, up to extracting a subsequence, we have v_n→ v_∞ and a_n→ a_∞. If ζ:→ M is the geodesic such that ζ̇(0)=v_∞, then ζ_n(t_n+·)→ζ in the C^∞-topology on every compact set. 
Since F is non-negative, for each s>0 we have a(ζ|_[-s,s]) = lim_n→∞ a(ζ_n|_[t_n-s,t_n+s]) ≤lim_n→∞ a_n = a_∞, and therefore ∫_-∞^∞ F(ζ̇(t)) dt ≤ a_∞. By Theorem <ref>, there exists a Riemannian metric g_1 arbitrarily C^∞ close to g such that γ is a hyperbolic closed geodesic of (M,g_1) with a homoclinic. A theorem due to Petroll <cit.> implies that there exists a Riemannian metric g_2 that is arbitrarily C^∞ close to g_1 such that γ is a hyperbolic closed geodesic of (M,g_2) with a transverse homoclinic (if M is a surface, the analogous theorem for C^2 perturbations of the Riemannian metric was proved independently by Donnay <cit.>). We now provide a slight generalization of Theorem <ref> that essentially allows its assumptions to be verified by a finite cover of the considered closed Riemannian manifold. Let p:M→ M_0 be a finite covering map of a closed manifold of dimension at least two, and g_0 a Riemannian metric on M_0. If (M,p^*g_0) has a minimal closed geodesic γ, then there exists a Riemannian metric g arbitrarily C^∞-close to g_0 such that γ is a hyperbolic closed geodesic of p^*g, and p(γ) is a hyperbolic simple closed geodesic of g with a transverse homoclinic. The proof is almost identical to the one of Theorem <ref>, except for a few details. Being (p^*g_0,σ)-minimal, the closed geodesic γ:→ M is simple. Namely, γ=γ(τ_γ+·) for some minimal period τ_γ>0, and γ|_[0,τ_γ) is an injective map. We claim that, for each Deck transformation ψ:M→ M, the closed geodesic η:=ψ∘γ is either disjoint from γ or is of the form η=γ(τ+·) for some τ>0. Indeed, assume by contradiction that there exist distinct t_1,t_2∈[0,τ_γ) such that y:=η(t_1)=γ(t_2) but η̇(t_1)≠γ̇(t_2). Since both invariant measures μ_γ and μ_η are (p^*g_0,σ)-minimal, so is their average μ:=12(μ_γ+μ_η). But the tangent vectors η̇(t_1),γ̇(t_2)∈(μ) are based at the same point y, and therefore π|_(μ) is not injective, contradicting Mather's graph theorem <cit.>. This implies that γ_0:=p∘γ is also a simple closed geodesic for the Riemannian metric g_0 (although τ_γ may be a multiple of the minimal period of γ_0). We can now carry out word by word the proof of Theorem <ref>, with the only difference that here we apply Proposition <ref> to the simple closed geodesic γ_0 in the base manifold, and therefore we obtain the conformal factor ρ of the form ρ=ρ_0∘ p, where ρ_0:M_0→[0,∞) is a suitable function vanishing on γ_0 and strictly positive outside γ_0. We end up with a Riemannian metric g_1=e^ρ_0g_0 on M_0 arbitrarily C^∞-close to g_0 such that γ_0 is a hyperbolic simple closed geodesic for the Riemannian metric g_1, and therefore γ is a hyperbolic simple closed geodesic for the Riemannian metric p^*g_1. Instead of vanishing only on γ as in the proof of Theorem <ref>, the function ρ here vanishes on all the images of γ under Deck transformations. Therefore, at the end of the proof, instead of a homoclinic to γ, we obtain a heteroclinic ζ from γ to ψ∘γ, for some Deck transformation ψ:M→ M. Nevertheless, its base projection ζ_0:=p∘ζ is a homoclinic to γ_0. We then conclude the proof by applying Petroll's theorem <cit.> to γ_0, obtaining a Riemannian metric g_2 arbitrarily C^∞ close to g_1 with respect to which γ_0 is a hyperbolic closed geodesic with a transverse homoclinic. For a group G, we denote its derived series by G_n, for n≥0. These groups are defined inductively as G_0=G and G_n+1=[G_n,G_n], where this latter group is the commutator subgroup of G_n. We denote by |G:G_n| the index of the derived subgroup G_n. 
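Two examples, added here for illustration and not contained in the original, may clarify this definition. If M is a closed orientable surface of genus k≥1, the abelianization π_1(M)/π_1(M)_1 is isomorphic to Z^2k, so the index |π_1(M):π_1(M)_1| is already infinite, and correspondingly the first Betti number of M equals 2k>0. By contrast, the fundamental group of the Poincaré homology sphere is the binary icosahedral group, a perfect group of order 120; being perfect, it coincides with every term of its derived series, so |π_1(M):π_1(M)_n| is finite for all n≥1 and the assumption of the corollary below is not satisfied.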
As in Section <ref>, we denote by ^k(M) the space of smooth Riemannian metrics on a closed manifold M, endowed with the C^k topology. Let M be a closed manifold of dimension at least two such that the index |π_1(M):π_1(M)_n| is infinite for some n≥1. Then, for each 2≤ k≤∞, there exists an open and dense subset of ^k(M) such that every Riemannian metric therein admits infinitely many closed geodesics of arbitrarily large length. Since the fundamental group π_1(M) is finitely generated, if the index |π_1(M):π_1(M)_1| is infinite then the first Betti number (H_1(M;))= (π_1(M)/π_1(M)_1) is non-trivial. Therefore, Corollary <ref> directly follows from Corollary <ref>. Let G_n, n≥0, be the derived series of the fundamental group π_1(M). We have an associated sequence of normal covering spaces ...→ M_2→ M_1→ M_0=M with fundamental groups π_1(M_n)=G_n. The quotient G_n/G_n+1 is the group of Deck transformations of the covering M_n+1→ M_n. Let n≥0 be the minimal integer such that G_n/G_n+1 is infinite, which exists by the assumption of the corollary. Notice that p:M_n→ M is a finite covering. Therefore M_n is a closed manifold, and by our choice of n it has infinite homology group H_1(M_n;)≅ G_n/G_n+1. Since H_1(M_n;) is finitely generated, the first Betti number (H_1(M_n;)) is non-zero. We fix an integer k≥2, and denote by ℐ⊂^k(M) the subspace of those Riemannian metrics g on M having closed geodesics of arbitrarily large length. We need to show that ℐ contains an open and dense subset of ^k(M). We denote by ℋ⊂^k(M) the open subspace of those Riemannian metrics g on M having a hyperbolic closed geodesic with a transverse homoclinic. As we already mentioned in Section <ref>, classical results from hyperbolic dynamics imply that ℋ⊂ℐ. We denote by 𝒜⊂^k(M) the subspace of those Riemannian metrics g on M such that (M_n,p^*g) admits a minimal closed geodesic. Theorems <ref> and <ref> imply that 𝒜⊂ℋ, ^k(M)∖𝒜⊂ℐ. We define ℬ:=^k(M)∖𝒜. We have ℋ∪ℬ = ℋ∪ℬ⊇𝒜∪ (^k(M)∖𝒜) = ^k(M). We thus have an open and dense subset ℋ∪ℬ of ^k(M) that is contained in ℐ. § MAKING CLOSED GEODESICS HYPERBOLIC In this appendix we shall provide a proof of the following statement, which is employed in the proof of Theorem <ref>. We recall that a closed geodesic γ:→ M of minimal period τ_γ>0 is called simple when γ|_[0,τ_γ) is an injective map. Let γ be a simple closed geodesic without conjugate points in a closed Riemannian manifold (M,g) of dimension at least two. Then γ is a hyperbolic closed geodesic with respect to the conformal Riemannian metric e^ρ g, for any smooth function ρ:M→[0,∞) such that ρ(x)=0 and d^2ρ(x)[v,v]>0 for all x∈γ and v∈ T_xM∖{0} orthogonal to γ. Proposition <ref> guarantees that, given a simple closed geodesic γ without conjugate points, there exists an arbitrarily C^∞-small conformal perturbation of the Riemannian metric that makes γ hyperbolic. An analogous result for perturbations with potentials of Tonelli Hamiltonians was proved in <cit.>. §.§ Green spaces Let us recall some basic facts from geodesic dynamics (for the details, we refer the reader to, e.g., <cit.>). Let (M,g) be a closed Riemannian manifold of dimension at least two. We denote its sphere tangent bundle of radius r>0 by S^rM={v∈ TM | v_g=r}, and the geodesic flow by ϕ^t:SM→ SM. The orbits of ϕ^t are of the form ϕ^t(γ̇(0))=γ̇(t), where γ:→ M is a geodesic parametrized with speed γ̇_g≡ r. Without loss of generality, throughout this section we shall always assume that all geodesics are parametrized with speed r=1, and simply write SM=S^1M. 
Let γ:→ M be a geodesic, so that γ̇(t)=ϕ^t(v) for v=γ̇(0)∈ S_xM. We introduce the vector subspace Z:=dπ(v)^-1⟨ v⟩^⊂ T_v(SM), where π:SM→ M is the base projection, and ⟨ v⟩^⊂ T_xM is the orthogonal complement to v. As it is common, we will identify Z ≡⟨ v⟩^×⟨ v⟩^, J̇(0) ↦ (J(0),∇_t J|_t=0), where J is any Jacobi field orthogonal to γ, and ∇_t J is its covariant derivative with respect to the Levi-Civita connection. We denote by V:=(dπ) the vertical sub-bundle of T(SM). By the identification (<ref>), we have V_v≡{0}×⟨ v⟩^. We assume that γ is without conjugate points, which is equivalent to V_v∩ dϕ^-t(v)V_ϕ^t(v)={0}, ∀ t∈∖{0}. We define the vector subspaces G_t:= dϕ^-t(ϕ^t(v))V_ϕ^t(v)⊂ Z. Since γ is without conjugate points, for each t≠0 the vector subspace G_t is transverse to the vertical V_v. Via (<ref>), we shall always see G_t as a vector subspace of ⟨ v⟩^×⟨ v⟩^, and the transversality with V_v implies that G_t is a graph over the horizontal ⟨ v⟩^×{0}. More precisely, there exist linear symmetric endomorphisms A_t:⟨ v⟩^→⟨ v⟩^, depending smoothly on t∈∖{0}, such that G_t≡graph(A_t). The associated quadratic forms Q_t(w)=g(A_tw,w) are monotone increasing in t∈∖{0}, and we have Q_t≤ Q_-s for all s,t∈∖{0}. Therefore, the limits A_±:=lim_t→±∞ A_t, G_±:=lim_t→±∞ G_t, exist, and we have G_±≡graph(A_±). The associated quadratic forms Q_±(w)=g(A_± w,w) satisfy Q_+≤ Q_-. Pushing forward G_± with the linearized geodesic flow dϕ^t, we obtain the so-called Green bundles of γ, which are well defined even if γ is a closed geodesic. For our purposes, we will only need the vector spaces G_±, which we will call Green spaces. We will refer to the linear maps A_± as to Green endomorphisms. Let us now assume that γ is closed. We recall that γ is said to be hyperbolic when γ̇ is a hyperbolic periodic orbit of the geodesic flow ϕ^t. The proof of Proposition <ref> will require the following special case of a theorem due to Eberlein <cit.> (the general statement actually holds for arbitrary compact invariant subsets of ϕ^t without conjugate points). [Eberlein] A closed geodesic without conjugate points is hyperbolic if and only if its Green spaces satisfy G_-∩ G_+={0}. §.§ The index form Let be the space of orthogonal vector fields Y:→ TM along the geodesic γ, where orthogonal means g(γ̇,Y)≡0. We denote by ⊂ the subspace of orthogonal Jacobi fields, which are those J∈ that satisfy the Jacobi equation ∇_t^2 J+R(J,γ̇)γ̇=0; equivalently, they are those vector fields J:→ TM along γ such that J̇(t)=dϕ^t(v)w, where v=γ̇(0) and w∈ Z (with the notation of the previous subsection). For all τ>0, we consider the index form of the geodesic arc γ|_[0,τ], which is the quadratic form h_τ(Y) = ∫_0^τ(∇_t Y_g^2-g(R(Y,γ̇)γ̇,Y)) dt, ∀ Y∈. We shall need two elementary properties of the index form: (i) For each J∈, we have h_τ(J)=g(∇_t J|_t=τ,J(τ))-g(∇_t J|_t=0,J(0)). (ii) For each J∈ and Y∈ such that J(0)=Y(0) and J(τ)=Y(τ), we have h_τ(J)≤ h_τ(Y). We now employ Eberlein's theorem, together with the index form, to prove the perturbation result stated at the beginning of the section. We set g̃:=e^ρ g. The Riemannian metrics g and g̃ define associated Levi-Civita connections ∇ and ∇̃ and Riemann tensors R and R̃. Along γ, since ρ and dρ vanish identically, we have ∇=∇̃ and g(R̃(w,γ̇(t))γ̇(t),w)-g(R(w,γ̇(t))γ̇(t),w) = -12 d^2ρ(γ(t))[w,w], ∀ w∈⟨γ̇(t)⟩^. By our assumptions on ρ, there exists a constant δ>0 such that 12 d^2ρ(γ(t))[w,w]≥δ w_g^2, ∀ w∈⟨γ̇(t)⟩^. We set v:=γ̇(0), and first consider the Riemannian objects associated with g. 
For each w∈⟨ v⟩^ and τ≠0, we denote by J_τ,w the Jacobi field along γ such that J_τ,w(0)=w and J_τ,w(τ)=0. Notice that ∇_t J_τ,w|_t=0=A_τw, where A_τ is the symmetric endomorphism of ⟨ v⟩^ converging to the Green endomorphisms A_± as τ→±∞. By property (i) above, the index forms h_τ of γ with respect to g satisfy h_τ(J_τ,w) = {[ - g(A_τ w,w), τ>0,; g(A_τ w,w), τ<0. ]. We denote with a tilde the analogous Riemannian objects with respect to g̃, which satisfy analogous properties. By (<ref>), (<ref>), and property (ii) of the index form, for each τ≥1 we have g(A_τ w,w) = - h_τ(J_τ,w) = - h̃_τ(J_τ,w) + ∫_0^τ12 d^2ρ(γ)[J_τ,w,J_τ,w] dt ≥ - h̃_τ(J_τ,w) + δ∫_0^1 J_τ,w_g^2 dt = g(Ã_τ w,w) + δ∫_0^1 J_τ,w_g^2 dt. As τ→∞, we have J_τ,w→ J_w, where J_w is the Jacobi field such that J_w(0)=w and ∇_t J_w|_t=0=A_+w. We set ϵ:=δ^-1min_ww_g^-2∫_0^1 J_τ,w_g^2 dt >0, where the minimum ranges over all w∈⟨ v⟩^∖{0}. By taking the limit for τ→∞ in (<ref>), we infer g(A_+ w,w) ≥ g(Ã_+ w,w)+ ϵw_g^2. Analogously, for each τ<0, we have g(A_τ w,w) = h_τ(J_τ,w) ≤ h_τ(J̃_τ,w) ≤h̃_τ(J̃_τ,w) = g(Ã_τ w,w), and by taking the limit for τ→-∞ we infer g(A_- w,w)≤ g(Ã_- w,w). The inequalities (<ref>) and (<ref>), together with g(A_+ w,w)≤ g(A_- w,w) mentioned in the previous subsection, imply g(Ã_+ w,w)+ϵw_g^2 ≤ g(Ã_- w,w), ∀ w∈⟨ v⟩^. Therefore the Green spaces G̃_+≡graph(Ã_+) and G̃_-≡graph(Ã_-) have trivial intersection, and Theorem <ref> implies that γ is a hyperbolic closed geodesic for g̃. amsalpha
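The following elementary example, added as a sanity check and not contained in the original, illustrates both Eberlein's criterion and the proposition just proved. Consider a flat two-dimensional torus with coordinates (x,y), metric g=dx^2+dy^2, and the simple closed geodesic γ(t)=(0,t). Since the curvature vanishes, orthogonal Jacobi fields along γ are affine, J(t)=a+tb; the Jacobi field with J(0)=w and J(τ)=0 is J(t)=w(1-t/τ), so A_τ=-(1/τ) Id and the Green endomorphisms are A_±=0. The two Green spaces coincide with the horizontal subspace, hence G_-∩ G_+≠{0}, and indeed γ is not hyperbolic, in agreement with Eberlein's theorem. Now let ρ be a non-negative smooth function that agrees with ε x^2 near γ for some ε>0, so that ρ and dρ vanish along γ while d^2ρ[w,w]=2ε w_g^2>0 for every w orthogonal to γ. The Gaussian curvature of g̃=e^ρ g along γ equals -1/2 e^{-ρ}Δρ=-ε, so the Jacobi equation along γ becomes J''=ε J. Repeating the computation of the Green endomorphisms with the Jacobi field J(t)=w sinh(√(ε)(τ-t))/sinh(√(ε)τ) gives Ã_τ=-√(ε)coth(√(ε)τ) Id, hence Ã_+=-√(ε) Id and Ã_-=√(ε) Id. The perturbed Green spaces therefore satisfy G̃_-∩G̃_+={0}, and γ is hyperbolic for g̃, as guaranteed by the proposition; the strict gap between the quadratic forms of Ã_+ and Ã_- mirrors the inequality obtained at the end of the proof above.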
http://arxiv.org/abs/2407.03049v1
20240703121828
Enhancements for Real-Time Monte-Carlo Tree Search in General Video Game Playing
[ "Dennis J. N. J. Soemers", "Chiara F. Sironi", "Torsten Schuster", "Mark H. M. Winands" ]
cs.AI
[ "cs.AI" ]
Enhancements for Real-Time Monte-Carlo Tree Search in General Video Game Playing Dennis J. N. J. Soemers, Chiara F. Sironi, Torsten Schuster, and Mark H. M. Winands Department of Data Science and Knowledge Engineering, Maastricht University d.soemers@gmail.com, t.schuster@student.maastrichtuniversity.nl, {c.sironi,m.winands}@maastrichtuniversity.nl July 8, 2024 ================================================================================================================================================================================================================================================================================== § ABSTRACT General Video Game Playing (GVGP) is a field of Artificial Intelligence where agents play a variety of real-time video games that are unknown in advance. This limits the use of domain-specific heuristics. Monte-Carlo Tree Search (MCTS) is a search technique for game playing that does not rely on domain-specific knowledge. This paper discusses eight enhancements for MCTS in GVGP; Progressive History, N-Gram Selection Technique, Tree Reuse, Breadth-First Tree Initialization, Loss Avoidance, Novelty-Based Pruning, Knowledge-Based Evaluations, and Deterministic Game Detection. Some of these are known from existing literature, and are either extended or introduced in the context of GVGP, and some are novel enhancements for MCTS. Most enhancements are shown to provide statistically significant increases in win percentages when applied individually. When combined, they increase the average win percentage over sixty different games from 31.0% to 48.4% in comparison to a vanilla MCTS implementation, approaching a level that is competitive with the best agents of the GVG-AI competition in 2015. § INTRODUCTION General Video Game Playing (GVGP) <cit.> is a field of Artificial Intelligence in games where the goal is to develop agents that are able to play a variety of real-time video games that are unknown in advance. It is closely related to General Game Playing (GGP) <cit.>, which focuses on abstract games instead of video games. The wide variety of games in GGP and GVGP makes it difficult to use domain-specific knowledge, and promotes the use of generally applicable techniques. There are two main frameworks for GVGP. The first framework is the Arcade Learning Environment (ALE) <cit.> for developing agents that can play games of the Atari 2600 console. The second framework is GVG-AI <cit.>, which can run any real-time video game described in a Video Game Description Language <cit.>. This paper focuses on the GVG-AI framework. The GVG-AI framework is used in the GVG-AI Competition <cit.>. Past competitions only ran a Planning Track, where agents were ranked based on their performance in single-player games. In 2016, it is planned to extend this with a 2/N-Player Track, a Learning Track, and a Procedural Content Generation Track. This paper focuses on the Planning Track. Monte-Carlo Tree Search (MCTS) <cit.> is a popular technique in GGP <cit.> because it does not rely on domain-specific knowledge. MCTS has also performed well in GVGP in 2014 <cit.>, which was the first year of the GVG-AI competition, but was less dominant in 2015 <cit.>. This paper discusses and evaluates eight enhancements for MCTS to improve its performance in GVGP: Progressive History, N-Gram Selection Technique, Tree Reuse, Breadth-First Tree Initialization, Loss Avoidance, Novelty-Based Pruning, Knowledge-Based Evaluations and Deterministic Game Detection. 
The remainder of the paper is structured as follows. sec:GVGP provides background information on the GVG-AI framework and the GVG-AI competition. MCTS is discussed in sec:MCTS. In sec:Enhancements, the enhancements for MCTS in GVGP are explained. sec:Experiments describes the experiments to assess the enhancements. Finally, the paper is concluded in sec:Conclusion and ideas for future research are discussed. § GVG-AI FRAMEWORK AND COMPETITION In the GVG-AI competition <cit.>, agents play a variety of games that are unknown in advance. Agents are given 1 second of processing time at the start of every game, and 40 milliseconds of processing time per tick. A tick can be thought of as a turn in an abstract game. Every tick, the agent can choose an action to play, and at the end of the tick the chosen action is played and the game state progresses. Every game has a duration of at most 2000 ticks, after which the game is a loss. Other than that, different games have different termination conditions, which define when the agent wins or loses. Every game in GVG-AI contains at least an avatar object, which is the “character” controlled by the agent. Games can also contain many other types of objects. Games in GVG-AI are fully observable and can be nondeterministic. Agents can perform searches and attempt to learn which actions are good using the Forward Model, consisting of two important functions; advance and copy. Given a game state s_t, the advance(a) function can be used to generate a successor state s_t+1, which represents one of the possible states that can be reached by playing an action a. In deterministic games, there is only one such state s_t+1 for every action a, but in nondeterministic games there can be more than one. The copy(s_t) function creates a copy of s_t. This function is required when it is desirable to generate multiple possible successors of s_t, because every call to advance modifies the original state, and there is no undo function. Because the framework supports a wide variety of different games, it is not optimized as well as any framework dedicated to a specific game would be. This means that the advance and copy operations tend to be significantly slower than equivalent functions in individual game implementations. § MONTE-CARLO TREE SEARCH Monte-Carlo Tree Search (MCTS) <cit.> is a best-first search algorithm that gradually builds up a search tree and uses Monte-Carlo simulations to approximate the value of game states. To handle nondeterministic games with probabilistic models that are not exposed to the agent, an “open-loop” <cit.> implementation of MCTS is used. In an open-loop approach, the root node represents the current game state (s_0), every edge represents an action, and every other node n represents the set of game states that can be reached by playing the sequence of actions corresponding to the path from the root node to n, starting from s_0. See fig:MCTSOpenLoopExample for an example. MCTS is initialized with only the root node. Next, until some computational budget expires, the algorithm repeatedly executes simulations. Every simulation consists of the following four steps <cit.>, depicted in fig:MCTSSteps. In the Selection step, a selection policy is applied recursively, starting from the root node, until a node is reached that is not yet fully expanded (meaning that it currently has fewer successors than available actions). The selection policy determines which part of the tree built up so far is evaluated in more detail. 
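To make the role of the Forward Model concrete, the following Python sketch mimics the advance/copy semantics described above. It is an illustration only and not part of the original paper: the actual GVG-AI framework is written in Java, and the class, fields, and toy transition used here are hypothetical stand-ins; only the names advance and copy come from the text.

import random

class ToyForwardModel:
    """Hypothetical stand-in for a GVG-AI-style game state.

    advance(action) mutates the state in place and there is no undo,
    so branching over several candidate actions requires copy() first.
    """

    def __init__(self, tick=0, score=0.0, seed=0):
        self.tick = tick
        self.score = score
        self._rng = random.Random(seed)

    def copy(self):
        clone = ToyForwardModel(self.tick, self.score)
        clone._rng = random.Random(self._rng.random())  # fork the hidden randomness
        return clone

    def advance(self, action):
        # Toy nondeterministic transition: the score change depends on the chosen
        # action and on hidden randomness, mimicking a stochastic game.
        self.tick += 1
        self.score += action + self._rng.choice([0.0, 0.1])

    def is_game_over(self):
        return self.tick >= 2000  # maximum game duration used in GVG-AI

# Branching over all actions available in the current state: copy first, then advance.
current_state = ToyForwardModel()
for action in range(3):
    successor = current_state.copy()   # required, because advance() cannot be undone
    successor.advance(action)
    print(action, successor.score)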
It should provide a balance between exploitation of parts of the search tree that are estimated to have a high value so far, and exploration of parts of the tree that have not yet been visited frequently. The most commonly implemented selection policy is UCB1 <cit.>, which selects the successor S_i of the current node P that maximizes Eq:UCT. S_i and P are nodes, which can represent sets of states. UCB1(S_i) = Q(S_i) + C ×√(ln(n_P)/n_i) Q(S_i) ∈ [0, 1] denotes the normalized average score backpropagated through S_i so far (as described below), C is a parameter where higher values lead to more exploration, and n_P and n_i denote the visit counts of P and S_i, respectively. In the Play-out step, the simulation is continued, starting from the last state encountered in the selection step, using a (semi-)random play-out policy. The most straightforward implementation is to randomly draw actions to play from a uniform distribution until a terminal game state is reached. In GVGP, this is typically not feasible, and a maximum play-out depth is used to end play-outs early. In the Expansion step, the tree is expanded by adding one or more nodes. The most common implementation adds one node to the tree per simulation; the node corresponding to the first action played in the play-out step. In this paper, the tree is simply expanded by adding the whole play-out to the tree. The number of simulations per tick tends to be low enough in GVG-AI that there is no risk of running out of memory. Therefore, to keep all information gathered, all nodes are stored in memory. In the Backpropagation step, the outcome of the final state of the simulation is backpropagated through the tree. Let s_T be the final state of the simulation. Next, an evaluation X(s_T) of the state is added to a sum of scores stored in every node on the path from the root node to the final node of the simulation, and the visit counts of the same nodes are incremented. Because it is not feasible to let all simulations continue until terminal states are reached in GVG-AI, it is necessary to use some evaluation function for non-terminal states. A basic evaluation function that is also used by the sample MCTS controllers included in the GVG-AI framework is given by Eq:GVGEval. X(s_T) = 10^7 + score(s_T) s_T is a winning state -10^7 + score(s_T) s_T is a losing state score(s_T) s_T is a non-terminal state score(s_T) is the game score value of a state s_T in GVG-AI. In some games a high game score value can indicate that the agent is playing well, but this is not guaranteed in all games. Finally, the action leading to the node with the highest average score is played when the computational budget expires. § MCTS ENHANCEMENTS FOR GVGP There is a wide variety of existing enhancements for the MCTS algorithm, many of which are described in <cit.>. This section discusses a number of enhancements that have been evaluated in GVGP; Progressive History, N-Gram Selection Technique, Tree Reuse, Breadth-First Tree Initialization, Loss Avoidance, Novelty-Based Pruning, Knowledge-Based Evaluations, and Deterministic Game Detection. Some are known from existing research, and some are new. §.§ Progressive History and N-Gram Selection Technique Progressive History (PH) <cit.> and N-Gram Selection Technique (NST) <cit.> are two existing enhancements for the selection and play-out steps of MCTS, respectively. 
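The UCB1 selection policy of Eq:UCT translates directly into code. The sketch below is ours and not taken from the framework; the node layout and function names are hypothetical, while the exploration constant 0.6 is the value used by the baseline agent in the experiments reported later.

import math
from dataclasses import dataclass, field

@dataclass
class Node:
    visits: int = 0
    value_sum: float = 0.0                      # sum of normalized scores in [0, 1]
    children: list = field(default_factory=list)

def ucb1(parent_visits, child, c=0.6):
    """UCB1 value of a child node; unvisited children are selected first."""
    if child.visits == 0:
        return float('inf')
    q = child.value_sum / child.visits          # Q(S_i): average normalized score
    return q + c * math.sqrt(math.log(parent_visits) / child.visits)

def select_child(node, c=0.6):
    """Selection step: pick the successor maximizing UCB1."""
    return max(node.children, key=lambda ch: ucb1(node.visits, ch, c))

# Minimal usage example with two children of a root node.
root = Node(visits=10, children=[Node(4, 2.0), Node(6, 4.2)])
best = select_child(root)                       # favours the child with the higher UCB1 value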
The basic idea of PH and NST is to introduce a bias in the respective steps towards playing actions, or sequences of actions, that performed well in earlier simulations. Because the value of playing an action in GVG-AI typically depends greatly on the current position of the avatar, this position is also taken into account when storing data concerning the previous performance of actions. For a detailed description of these enhancements we refer to the original publications <cit.>. §.§ Tree Reuse Suppose that a search tree was built up by MCTS in a previous game tick t - 1 ≥ 0, and an action a_t - 1 was played. The entire subtree rooted in the node corresponding to that action can still be considered to be relevant for the new search process in the current tick t. Therefore, instead of initializing MCTS with only a root node, it can be initialized with a part of the tree built in the previous tick, as depicted in fig:TreeReuse. This was previously found to be useful in the real-time game of Ms Pac-Man <cit.>. This idea has also previously been suggested in the context of GVGP <cit.>, but, to the best of our knowledge, the effect of this enhancement on the performance of MCTS in GVGP has not yet been evaluated. In nondeterministic games, it is possible that the new root (which was previously a direct successor of the previous root) represented more than one possible game state. In the current tick, it is known exactly which of those possible states has been reached. Therefore, some of the old results in this tree are no longer relevant. For this reason, all the scores and visit counts in the tree are decayed by multiplying them by a decay factor γ∈ [0, 1] before starting the next MCTS procedure. Tree Reuse (TR) with γ=0 completely resets the accumulated scores and visit counts of nodes (but still retains the nodes, and therefore the structure of the generated tree), and TR with γ=1 does not decay old results. §.§ Breadth-First Tree Initialization and Safety Prepruning In some of the games supported by the GVG-AI framework, the number of MCTS simulations that can be executed in a single tick can be very small; sometimes smaller than the number of available actions. In such a situation, MCTS behaves nearly randomly, and is susceptible to playing actions that lead to a direct loss, even when there are actions available that do not directly lose the game. Theoretically this problem could be avoided by adjusting the limit of the play-out depth of MCTS to ensure that a sufficient number of simulations can be done. In practice, this can be problematic because it requires a low initial depth limit to ensure that it is not too high at the start of a game, and this can in turn be detrimental in games where it is feasible and beneficial to run a larger number of longer play-outs. We propose to handle this problem using Breadth-First Tree Initialization. The idea is straightforward; before starting MCTS, the direct successors of the root node are generated by a 1-ply Breadth-First Search. Every action available in the root state is executed up to a number M times to deal with nondeterminism, and the resulting states are evaluated. The average of these M evaluations is backpropagated for every successor with a weight equal to a single MCTS simulation. MCTS is only started after this process. When MCTS starts, every direct successor of the root node already has a prior evaluation that can be used to avoid playing randomly in cases with an extremely small number of simulations. 
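The initialization just described amounts to a single 1-ply breadth-first pass before the first MCTS simulation. The following sketch is ours and uses hypothetical node and state interfaces; the caching of the generated states and the safety prepruning step, both described in the next paragraphs, appear here only as comments and a final marking loop.

def breadth_first_init(root, root_state, actions, M, evaluate):
    """1-ply Breadth-First Tree Initialization (sketch, hypothetical interfaces).

    Every available action is applied M times to copies of the root state, the M
    evaluations are averaged, and the average is backpropagated to the new child
    with the weight of a single MCTS simulation."""
    for action in actions:
        total, losses, sampled_states = 0.0, 0, []
        for _ in range(M):
            state = root_state.copy()          # copy first: advance() has no undo
            state.advance(action)
            total += evaluate(state)
            losses += state.is_loss()          # used for safety prepruning (see text)
            sampled_states.append(state)       # cached for reuse in later MCTS simulations
        child = root.add_child(action)
        child.visits = 1                       # weight equal to one simulation
        child.value_sum = total / M
        child.losses = losses
        child.cached_states = sampled_states
    # Safety prepruning: keep only actions with the minimum observed number of losses.
    min_losses = min(ch.losses for ch in root.children)
    for ch in root.children:
        ch.pruned = ch.losses > min_losses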
The M states generated for every successor are cached in the corresponding nodes, so that they can be re-used in the subsequent MCTS process. This reduces the computational overhead of the enhancement. Safety prepruning, originally used in an algorithm called Iterated Width <cit.>, has been integrated in this process. The idea of safety prepruning is to count the number of immediate game losses among the M generated states for each action, and only keep the actions leading to nodes with the minimum observed number of losses. All other actions are pruned. §.§ Loss Avoidance In GVGP, many games have a high number of losing game states that are relatively easy to avoid. An example of such a game is Frogs, where the avatar is a frog that should cross a road and a river. The road contains trucks that cause a loss upon collision, but can easily be avoided because they move at a constant speed. The river contains logs that also move at a constant speed, which the frog should jump on in order to safely cross the river. An example of a search tree with many losing states is depicted in fig:HighLossDensity. In this example, the rightmost action in the root node is an action that brings the agent back to a similar state as in the root node. In the Frogs game, this could be an action where the frog stays close to the initial position, and does not move towards the road. The (semi-)random play used in the play-out step of MCTS is likely to frequently run into losing game states in situations like this. This leads to a negative evaluation of nodes that do in fact lead to a winning position. This is only corrected when sufficient simulations have been run such that the selection step of MCTS correctly biases the majority of the simulations towards a winning node. With a low simulation count in GVG-AI, MCTS is likely to repeatedly play the rightmost action in fig:HighLossDensity, which only delays the game until it is lost due to reaching the maximum game duration. This problem is similar to the problem of traps <cit.> or optimistic moves <cit.> in (two-player) adversarial games. In those cases, MCTS has an overly optimistic evaluation of some states, whereas in the cases discussed here it has an overly pessimistic evaluation of some states. In <cit.>, it was proposed to integrate shallow minimax searches inside some of the steps of MCTS to improve its performance in game trees with traps or optimistic moves. Using minimax searches to prove wins or losses is difficult in GVGP because games can be nondeterministic, but a similar idea can be used to get less pessimistic evaluations. In this paper, an idea named Loss Avoidance (LA) is proposed for GVGP. The idea of LA is to try to ignore losses by immediately searching for a better alternative whenever a loss is encountered the first time a node is visited. An example is depicted in fig:LossAvoidance. Whenever the play-out step of MCTS ends in a losing game state, that result is not backpropagated as would commonly be done in MCTS. Instead, one state is generated for every sibling of the last node, and only the evaluation of the node with the highest evaluation is backpropagated. All generated nodes are still added to the tree, and store their own evaluation in memory. LA causes MCTS to keep an optimistic initial view of the value of nodes. This tends to work well in the single-player games of GVG-AI, where it is often possible to reactively get out of dangerous situations. 
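The Loss Avoidance step can be sketched as follows. The code is ours and not part of the original paper: evaluate, is_loss, get_or_add_child and regenerate_state are hypothetical helpers, and in an open-loop tree regenerating the parent's state requires replaying the action sequence from the root, which is exactly the overhead discussed in the next paragraph.

def backpropagate(node, value):
    """Standard backpropagation of a (normalized) evaluation up to the root."""
    while node is not None:
        node.visits += 1
        node.value_sum += value
        node = node.parent

def loss_avoidance(last_node, last_state, actions, evaluate, regenerate_state):
    """Sketch of Loss Avoidance: if the play-out ends in a loss the first time
    last_node is visited, evaluate one freshly generated state per sibling action
    and backpropagate only the best of these evaluations instead of the loss."""
    if not last_state.is_loss():
        backpropagate(last_node, evaluate(last_state))
        return
    parent = last_node.parent
    best_value = evaluate(last_state)            # the loss itself is the fallback value
    for action in actions:
        if action == last_node.action_from_parent:
            continue                             # only siblings of the losing node
        state = regenerate_state(parent)         # open loop: replayed from the root
        state.advance(action)
        value = evaluate(state)
        parent.get_or_add_child(action, value)   # the sibling keeps its own evaluation
        best_value = max(best_value, value)
    backpropagate(parent, best_value)            # only the best alternative is backpropagated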
It is unlikely to work well in, for instance, adversarial games, where a high concentration of losses in a subtree typically indicates that an opposing player has more options to win and is likely in a stronger position. In an open-loop implementation of MCTS, LA can have a significant amount of computational overhead in game trees with many losses. For instance, in the Frogs game it roughly halves the average number of MCTS simulations per tick. This is because the node prior to the node with the losing game state does not store the corresponding game state in memory, which means that all states generated in the selection and play-out steps need to be re-generated by playing the same action sequence from the root node. In nondeterministic games this process can also lead to finding a terminal state before the full action sequence has been executed again. To prevent spending too much time in the same simulation, the LA process is not started again, but the outcome of that state is backpropagated. §.§ Novelty-Based Pruning The concept of novelty tests was first introduced in the Iterated Width algorithm (IW) <cit.>. In IW, novelty tests are used for pruning in Breadth-First Search (BrFS). Whenever a state s is generated in a BrFS, a novelty measure (described in more detail below) nov(s) is computed. This is a measure of the extent to which s is “new” with respect to all previously generated states. States with a lower measure are “more novel” than states with a higher measure <cit.>. The original IW algorithm consists of a sequence of calls to IW(0), IW(1), etc., where IW(i) is a BrFS that prunes a state s if nov(s) > i. In GVGP, it was found that it is only feasible to run a single IW(i) iteration <cit.>. The best results were obtained with IW(1), and a variant named IW(3/2) (see <cit.> for details). The definition of the novelty measure nov(s) of a state s requires s to be defined in terms of a set of boolean features. An example of a boolean feature that can be a part of a state is a predicate at(cell, type), which is true in s if and only if there is an object of the given type in the given cell in s. Then, nov(s) is defined as the size of the smallest tuple of features that are all true in s, and not all true in any other state generated previously in the same search process. If there is no such tuple, s must be an exact copy of some previously generated state, and nov(s) is defined as n + 1, where n is the number of features that are defined. For example, suppose that in s, at((x, y), i) = true, and in all previously generated states, at((x, y), i) = false. Then, nov(s) = 1, because there is a tuple of size 1 of features that were not all true in any previously generated state. IW(1) prunes any state s with nov(s) > 1. In this paper, Novelty-Based Pruning (NBP) is proposed as an idea to prune nodes based on novelty tests in MCTS. The goal is not to prune bad lines of play, but to prune redundant lines of play. MCTS often generates states deep in the tree before other states close to the root. For instance, the last state of the first play-out is much deeper in the tree than the first state of the second play-out. This is an important difference with the BrFS used by IW. It means that the novelty measure nov(s) of a state s should be redefined in such a way that it not necessarily uses all previously generated states, but only a specific set of states, referred to as the neighborhood N(s) of s. N(s) is the union of four sets of states. 
The first set consists of the siblings on the “left” side of s. The ordering of the states matters, but can be arbitrary (as in a BrFS). The second set contains only the parent p(s) of s. The third set consists of all siblings of p(s). The fourth set is the neighborhood of p(s). More formally, let s_i denote the i^th successor of a parent p(s_i). Then, N(s_i) is defined as N(s_i) = {s_1, s_2, …, s_i - 1}∪{p(s_i)}∪ Sib(p(s_i)) ∪ N(p(s_i)), where Sib(p(s_i)) denotes the set of siblings of p(s_i). For the root state r, N(r) = Sib(r) = ∅. An example is depicted in fig:NBP_MCTS. Using the above definition of N(s), nov(s, N(s)) is defined as the size of the smallest tuple of features that are all true in s, and not all true in any other state in the set N(s). The novelty tests are used in MCTS as follows. Let n be a node with a list of successors Succ(n). The first time that the selection step reaches n when it is fully expanded, all successors Succ(n) are novelty tested based on a single state generated per node, using a threshold of 1 for the novelty tests (as in IW(1)). The same boolean features are used to define states in GVG-AI as described in <cit.>. Nodes are marked as not being novel if they fail the novelty test. Whenever all successors of a node are marked as not novel, that node itself is also marked as not novel. There are a few exceptions where nodes are not marked. If a state has a higher game score than the parent, it is always considered to be novel. Additionally, states transitioned into by playing a movement action are always considered to be novel in games where either only horizontal, or only vertical movement is available (because these games often require moving back and forth which can get incorrectly pruned by NBP otherwise), and in games where the avatar has a movement speed ≤ 0.5 (because slow movement does not result in the avatar reaching a new cell every tick, and is therefore not detected by the cell-based boolean features). In the selection step of MCTS, when one of the successors Succ(n) of n should be selected, any successor n' ∈ Succ(n) is ignored if it is marked as not novel, unless the average normalized score Q(n) < 0.5. In such cases, the situation is considered to be dangerous and all alternatives should be considered to see if a better position can be found. For the final selection of the move to play in the real game, non-novel nodes are also only considered if the best novel alternative has a normalized average score < 0.5. When the successors Succ(n) have been novelty tested, every node n_i ∈ Succ(n) stores a set of tuples of features that were all true in the states generated for the purpose of novelty testing for the nodes {n}∪ Succ(n). This means that the tuples of features that are true in the neighborhood N(s) of a state s can be reconstructed relatively efficiently by traversing the path from s back to the root, and collecting the tuples in the stored sets. This is the main reason for defining N(s) as described above. Including more states (for instance, the black states in fig:NBP_MCTS) would require also traversing back down the tree to collect more sets of tuples. This could increase the number of nodes that NBP marks as not being novel, but would also be more expensive computationally. This is not a problem in the BrFS of IW, because it can simply store all tuples of features that are all true in any generated state in the same set for the entire search process. Novelty measures are assigned to nodes based on only one state per node. 
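A sketch of the 1-novelty test used by NBP, with the neighbourhood features collected as described above, is given below. The code is ours; the feature extraction and the exceptions for score gains and movement actions mentioned in the text are omitted or left as hypothetical helpers.

def is_novel_1(state_features, neighbourhood_features):
    """Novelty threshold 1 (as in IW(1)): a state is novel iff at least one of its
    boolean features is not true in any state of its neighbourhood N(s)."""
    return not state_features <= neighbourhood_features

def novelty_test_children(parent, neighbourhood_features, true_features):
    """Novelty test all successors of a fully expanded node, one sampled state each.

    neighbourhood_features is the union of the feature sets of the parent, the
    parent's siblings, and N(parent), reconstructed by walking towards the root;
    earlier siblings are added while sweeping from left to right."""
    seen = set(neighbourhood_features)
    for child in parent.children:                    # arbitrary but fixed ordering
        feats = true_features(child.sampled_state)   # e.g. {('at', cell, type), ...}
        child.novel = not feats <= seen              # novel iff it adds a new feature
        seen |= feats
        child.feature_tuples = feats                 # stored for tests deeper in the tree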
Therefore, given two identical open-loop game trees in nondeterministic games, it is possible that a node in one tree is pruned and the equivalent node in the other tree is not pruned. For this reason, when combining NBP with Tree Reuse, the results of novelty tests on nodes in the first ply below the new root node are reset when reusing the previous tree. This does not entirely remove the influence of nondeterminism on NBP, but close to the root that influence is at least reduced. §.§ Knowledge-Based Evaluations An important problem with MCTS in GVG-AI is that it is often infeasible to find any terminal states, or even states with a change in game score. This means that the evaluation function in Eq:GVGEval often returns the same value for all states generated in the same tick, and MCTS explores the search space and behaves randomly. In this paper, a heuristic evaluation function is proposed that uses knowledge collected during simulations, and distances to objects that could potentially be interesting, to distinguish between states that have identical evaluations according to Eq:GVGEval. The basic idea is not new; some agents in the competition of 2014 used distance-based evaluation functions <cit.>. A similar idea is also described in <cit.>, and extended in <cit.>. The idea discussed here is based on the same intuition, but a number of implementation details are different. Another related idea is described in <cit.>, where MCTS is used to learn which objects are interesting, and a pathfinding algorithm is used to move towards a selected goal. Let X(s_0) denote the evaluation of the current game state s_0, and let X(s_T) denote the evaluation of the final state s_T of a play-out. If X(s_T) = X(s_0), a heuristic evaluation Eval_KB(s_T) is computed and added to X(s_T). For every object type i observed in a game, let d_0(i) denote the distance from the avatar to the closest object of type i in s_0, and let d_T(i) denote the distance from the avatar to the closest object of type i in s_T. These distances are computed using the A* pathfinding algorithm <cit.>. The pathfinding algorithm takes objects of the wall type into account as obstacles. Many games can also contain other objects that block movement, or portals that can be used for teleportation. These objects are not taken into account, because the agent would first need to learn how these objects influence pathfinding. For every object type i, a weight w_i is used to reward or punish the agent for moving to objects of that type. This is done by computing Eval_KB(s_T) as given by Eq:EvalKB, normalizing it to lie in [0, 0.5], and adding it to X(s_T) if otherwise X(s_T) = X(s_0). Eval_KB(s_T) = ∑_i w_i × (d_0(i) - d_T(i)) Object types i with a small absolute weight (|w_i| < 10^-4) are ignored, to save the computational cost of pathfinding. The weights w_i are determined as follows. To motivate exploration, all weights are initialized with positive values (0.1 for NPCs, 0.25 for Movables, and 1 for Resources and Portals), and incremented by 10^-4 every game tick. States s_t generated during the selection or play-out steps of MCTS are used to adjust these weights. Let s_t - 1 denote the predecessor of s_t. Whenever such a state s_t is generated, it is used to update some of the weights w_i. The intuition is that, if X(s_t) ≠ X(s_t - 1), it is likely that some interesting collision event occurred in the transition from s_t - 1 to s_t that caused the change in score. 
The framework provides access to a set E(s_t) of collision events that occurred in that transition. Every event e ∈ E(s_t) is a collision event between two objects, where one object is either the avatar, or an object created by the avatar (for instance, a missile fired by the avatar), and the other object is of some type i. Let Δ = X(s_t) - X(s_t - 1) denote the observed change in score. For every object type i, a sum Δ_i is kept of all changes in scores observed in state transitions where collision events with objects of type i occurred. Additionally, a counter n_i of event occurrences is kept for every type i, such that the average change in score Δ_i = Δ_i/n_i for collisions with every type can be computed. Whenever an event with an object of type i is observed, w_i is updated as given by Formula <ref>. w_i w_i + (Δ_i - w_i) ×α_i α_i is a learning rate that is initialized to 0.8 for every type, and updated as given by Formula <ref> after updating w_i. α_i max(0.1, 0.75 ×α_i) This idea is similar to using gradient descent for minimizing |Δ_i - w_i|. The main reason for not simply using Δ_i directly is to avoid relying too much on the knowledge obtained from a low number of observed events. §.§ Deterministic Game Detection The idea of Deterministic Game Detection (DGD) is to detect when a game is likely to be deterministic, and treat deterministic games differently from nondeterministic games. At the start of every game, M random sequences of actions of length N are generated. Each of the M sequences is used to advance a copy of the initial game state s_0, with R repetitions per sequence. If any of the M action sequences did not result in equivalent states among the R repetitions for that sequence, the game is classified as nondeterministic. Additionally, any game in which NPCs are observed is immediately classified as nondeterministic. Any other game is classified as deterministic. In this paper, M = N = 5 and R = 3. Many participants in previous GVG-AI competitions <cit.> used a similar idea to switch to a different algorithm for deterministic games (for instance, using Breadth-First Search in deterministic games and MCTS in nondeterministic games). In this paper, DGD is only used to modify MCTS and the TR and NBP enhancements in deterministic games. The Q(S_i) term in Eq:UCT (or the equivalent term in the formula of PH) is replaced by 3/4×Q(S_i) + 1/4×Q̂_max(S_i), where Q̂_max(S_i) is the maximum score observed in the subtree rooted in S_i. This is referred to as mixmax <cit.>. Additionally, TR and NBP are modified to no longer decay or reset any old results. § EXPERIMENTS §.§ Setup The enhancements discussed in this paper have been experimentally evaluated using the following setup. Every experiment was run using six sets that are available in the framework, of ten games each, for a total of sixty different games per experiment. TableGameSets lists the names of the games for every set. Average results are presented for every set of games, and for the total of all sixty games combined. For every game, five different levels were used, with a minimum of fifteen repetitions per level per experiment (leading to a minimum of 750 runs per set). 95% confidence intervals are presented for all results. All games were played according to the GVG-AI competition rules[Revision 24b11aea75722ab02954c326357949b97efb7789 of the GVG-AI framework (https://github.com/EssexUniversityMCTS/gvgai) was used.], on a CentOS Linux server consisting of four AMD Twelve-Core OpteronT 6174 processors (2.2 GHz). 
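Before moving to the experiments, the weight bookkeeping behind the Knowledge-Based Evaluations described above can be summarized in a short sketch. The class below is ours and not part of the original paper: the A* distance computation, the mapping from object types to categories, and the normalization of the final evaluation to [0, 0.5] are simplified or omitted, while the two update formulas follow the text.

class KnowledgeBase:
    """Sketch of the bookkeeping behind Eval_KB (hypothetical names).

    Weights are kept per object type; initial values are chosen by the object's
    category, and path distances are assumed to be computed elsewhere and passed
    in as dictionaries keyed by object type."""

    INITIAL_WEIGHT = {'npc': 0.1, 'movable': 0.25, 'resource': 1.0, 'portal': 1.0}

    def __init__(self):
        self.w = {}            # weight w_i per object type i
        self.alpha = {}        # learning rate alpha_i per type, initialized to 0.8
        self.delta_sum = {}    # sum of observed score changes per type
        self.n_events = {}     # number of observed collision events per type

    def register_type(self, obj_type, category):
        self.w.setdefault(obj_type, self.INITIAL_WEIGHT.get(category, 0.1))

    def tick(self):
        # Exploration bonus: every game tick all weights are incremented slightly.
        for i in self.w:
            self.w[i] += 1e-4

    def observe_event(self, obj_type, score_change):
        """Update w_i after a collision event with an object of type obj_type."""
        self.delta_sum[obj_type] = self.delta_sum.get(obj_type, 0.0) + score_change
        self.n_events[obj_type] = self.n_events.get(obj_type, 0) + 1
        avg = self.delta_sum[obj_type] / self.n_events[obj_type]
        a = self.alpha.get(obj_type, 0.8)
        w = self.w.get(obj_type, 0.0)
        self.w[obj_type] = w + (avg - w) * a       # move w_i towards the average change
        self.alpha[obj_type] = max(0.1, 0.75 * a)  # decaying learning rate

    def evaluate(self, d_start, d_end):
        """Eval_KB(s_T): reward play-outs that reduce the distance to object types
        with positive weight (and punish approaching negatively weighted types)."""
        return sum(w * (d_start[i] - d_end[i])
                   for i, w in self.w.items()
                   if abs(w) >= 1e-4 and i in d_start and i in d_end)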
§.§ Results In the first experiment, the following benchmark agents are compared to each other; SOLMCTS, MCTS, IW(1), and YBCriber. SOLMCTS is the Sample Open Loop MCTS controller included in the framework. MCTS is our baseline implementation of MCTS, based on the MaastCTS <cit.> agent, which has a number of differences in comparison to SOLMCTS. MCTS expands all nodes for states generated in simulations (as opposed to one node per simulation), C is set to 0.6 in the UCB1 equation (as opposed to C = √(2)), it simulates up to ten actions after the selection step (as opposed to ten steps from the root node), it uses the 1 second of initialization time for running the algorithm (as opposed to not using that time), and it plays the action with the maximum average score (as opposed to the maximum visit count). IW(1) is the Iterated Width-based agent, as described in <cit.>. YBCriber is an IW-based agent with a number of other features, which won the GVG-AI competition at the IEEE CEEC 2015 conference. The results are given in TableBaselines. The experimental data reveals that the baseline MCTS agent outperforms SOLMCTS. IW(1) performs slightly better than MCTS overall, and YBCriber performs much better than the other benchmark agents. In TableBFTI, our MCTS implementation with Breadth-First Tree Initialization and Safety Prepruning (BFTI) is compared to the MCTS implementation without BFTI. The results for MCTS are based on 1000 runs per set, and the results for BFTI on 750 runs per set. BFTI appears to lower the win percentage slightly, but the 95% confidence intervals overlap. The two columns on the right-hand side show the percentage of lost games where the game was terminated before t = 2000 (where t = 2000 is the maximum duration of a game in GVG-AI). BFTI reduces this percentage significantly. Even though it may slightly decrease win percentages, the quality of play in lost games can be considered to be improved; the agent delays a significant number of losses. This may leave more time for other enhancements to find wins. Therefore, BFTI is included in the baseline MCTS agent for the following experiments that evaluate other enhancements individually. This is followed by an experiment with more enhancements combined. TableProgHistNST shows the win percentages obtained by adding Progressive History (PH), N-Gram Selection Technique (NST), or both to the BFTI agent. PH and NST appear to increase the average win percentage, but the confidence intervals overlap. The two combined result in a statistically significant increase. fig:WinPercentagesTreeReuse depicts 95% confidence intervals for the win percentage of the BFTI agent with Tree Reuse (TR), for six different values of the decay factor γ. The confidence interval for BFTI is shaded in grey. TR with γ∈{0.4, 0.6, 1.0} significantly improves the win percentage of BFTI. TableKBELANBP shows the win percentages of adding either Knowledge-Based Evaluations (KBE), Loss Avoidance (LA) or Novelty-Based Pruning (NBP) to the BFTI agent. All three individually show an increase in the average win percentage over BFTI, with KBE giving the largest increase. TableEnhancementsCombined shows the win percentages of a number of variants of MCTS with multiple enhancements combined. “No DGD” is an agent with all enhancements discussed in this paper, except for Deterministic Game Detection (DGD). “No BFTI” is an agent with all enhancements except for BFTI. 
This is added to test the assumption made earlier that the ability of BFTI to delay games may enable other enhancements to find more wins. The last agent contains all enhancements. In combination with all the other enhancements, DGD significantly improves the win percentage. DGD was found not to provide a significant increase in win percentage when applied to the BFTI, TR (γ = 0.6) or NBP agents without other enhancements (those results have been omitted to save space). Additionally, BFTI appears to increase the win percentage in combination with all other enhancements, whereas TableBFTI shows it appears to decrease the win percentage when other enhancements are absent, but these differences are not statistically significant. § CONCLUSION AND FUTURE WORK Eight enhancements for Monte-Carlo Tree Search (MCTS) in General Video Game Playing (GVGP) have been discussed and evaluated. Most of them have been shown to significantly (95% confidence) increase the average win percentage over sixty different games when added individually to MCTS. All the enhancements combined increase the win percentage of our basic MCTS implementation from 31.0 ± 1.2 to 48.4 ± 1.5. This final performance is relatively close to the win percentage of the winner of the IEEE CEEC 2015 conference; YBCriber, with a win percentage of 52.4 ± 1.3. Many of the discussed enhancements have parameters, which so far have only been tuned according to short, preliminary experiments. These parameters can likely be tuned better in future work to improve the performance. Loss Avoidance (LA) and Novelty-Based Pruning (NBP) as proposed in this paper have binary effects, in that LA backpropagates only one result from multiple generated siblings and NBP classifies nodes as either novel or not novel. Perhaps these can be improved by making them less binary. The overall performance of the agent can also likely be improved by incorporating more features that are commonly seen among the top entries in past competitions, such as the use of influence maps <cit.>. Finally, some of the new enhancements for MCTS, such as LA and NBP, can be evaluated in domains other than GVG-AI. § ACKNOWLEDGEMENT This work is partially funded by the Netherlands Organisation for Scientific Research (NWO) in the framework of the project GoGeneral, grant number 612.001.121. IEEEtran
http://arxiv.org/abs/2407.03023v1
20240703113445
Coulomb Hall drag induced by electron-electron skew scattering
[ "Yonatan Messica", "Dmitri B. Gutman" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall" ]
Department of Physics, Bar-Ilan University, Ramat Gan, 52900, Israel § ABSTRACT We study the influence of spin-orbit interaction on electron-electron scattering in the Coulomb drag setup. We consider a setup made of a time-reversal-symmetry broken Weyl-semimetal (WSM) layer and a normal metal layer. The interlayer drag force consists of two components. The first one is conventional and is parallel to the relative electronic boost velocity between the layers. This part of the drag tends to equilibrate the momentum distribution in the two layers, analogous to shear viscosity in hydrodynamics. In the WSM layer, the shift of the Fermi surface is not parallel to the electric field, due to skew scattering in the WSM. This induces a Hall current in the normal metal via the conventional component of the drag force. The second component of the drag force is perpendicular to the boost velocity in the Weyl semimetal and arises from interlayer e-e skew scattering, which results from two types of processes. The first process is an interference between electron-electron and electron-disorder scattering. The second process is due to the side jumps in electron-electron collisions in an external electric field. Both the parallel and perpendicular components of the drag are important for the anomalous Hall drag conductivity. On the other hand, for the Hall drag resistivity, the contribution from the parallel friction is partially cancelled in a broad temperature regime. This work provides insight into the microscopic mechanisms of Hall-like friction in electronic fluids. § INTRODUCTION The Coulomb drag experiment is an efficient tool for probing properties of two-dimensional conductors. It provides information which is not directly accessible from single-layer measurements <cit.>. In a drag experiment, two layers are placed parallel in close proximity but are electrically isolated from each other. An electric current is driven through one layer (“active” layer), dragging a current in the second layer (“passive” layer) through the interlayer electron-electron interaction. The Coulomb drag has been thoroughly studied in a broad variety of systems, both experimentally and theoretically. In particular, it was studied in the quantum Hall regime <cit.>, systems of bilayer excitons <cit.>, graphene-based materials <cit.> and in the hydrodynamic regime <cit.>. The Coulomb drag in normal metals is well understood and has been analyzed by several theoretical methods, including the Boltzmann equation <cit.>, memory-matrix formalism <cit.> and diagrammatics <cit.>. For normal metals, among other properties, it allows us to quantify the electron-electron (e-e) scattering rates <cit.>. In recent years, the topological properties of materials have introduced a fresh perspective on electronic transport <cit.>. Band topology emerges from the geometry of the Bloch wave functions in momentum space. The band geometry influences the evolution of an electron wave packet in time, consequently affecting transport <cit.>. One of the most recognizable manifestations of this is the Berry curvature, which induces anomalous processes that do not have a simple semiclassical interpretation, and changes the way the physical observables are expressed. 
For example, the velocity of electrons is no longer given by the derivative of the single-particle spectrum but acquires an additional term, known as the anomalous velocity <cit.>. Band geometry plays a role in various effects, such as the anomalous Hall effect (AHE) <cit.>, the spin Hall effect <cit.>, and the photogalvanic effect <cit.>. The interplay between band geometry and electron interactions is a rich and interesting problem. The Coulomb drag setup is a convenient platform for studying such effects. In this work, we propose a simple setup which exemplifies Coulomb drag in topological metals. This setup consists of a time reversal symmetry (TRS) broken Weyl semimetal (WSM) layer and a normal metal layer. Having only one layer with non-trivial band topology enables us to trace all anomalous processes to the WSM. Non-interacting WSMs exhibit anomalous processes known as side jumps and skew scattering, occurring in the scattering of electrons off static disorder <cit.>. If TRS is broken, these processes contribute to the AHE <cit.>. It is natural to expect that anomalous processes arise not only in electron-disorder scattering but in all other collision processes, such as electron-phonon and electron-electron scattering. Indeed, it was shown that in e-e scattering, an electron wavepacket acquires a coordinate shift, which can be interpreted as a side jump <cit.>. Furthermore, a skew-scattering contribution to the momentum-conserving e-e collision integral that arises from e-e scattering via an intermediate state was recently computed <cit.>. Despite recent progress, the exploration of band geometry effects in e-e scattering remains largely uncharted territory. Specifically, one may ask: How do these effects manifest in transport properties? The Coulomb drag setup provides a natural platform for addressing this question, as e-e scattering serves as the primary driver of Coulomb drag, rather than being a secondary process as is typically the case. One of our main findings is that e-e skew scattering gives rise to a Hall-like drag force, similar to a Hall viscous response in electron hydrodynamics. It is worth mentioning that e-e skew-scattering processes were anticipated and phenomenologically postulated in the context of spin Hall drag conductivity <cit.>. A related problem of anomalous Hall drag between two layers of 2D massive Dirac fermions was studied in Refs. <cit.>. These works studied a setup where both layers are made of topological materials, giving rise to numerous anomalous processes. Our work focuses on a simpler system, with only one layer being topological. This enables us to acquire a relatively transparent physical picture, and to interpret our results in terms of parallel and perpendicular friction forces. While our work focuses on the regime where the main relaxation mechanism in both layers is due to disorder, let us mention works on the hydrodynamic regime, where intralayer e-e scattering is dominant. In that regime, breaking the TRS by an external magnetic field has interesting consequences for the Coulomb drag and may change ρ_xx^D <cit.>. It is also worth mentioning that, experimentally, Hall drag resistivity has been observed in strongly correlated metals (interaction parameter r_s∼10), and it has been proposed that such systems are described by the hydrodynamic regime. 
We emphasize that while we describe the drag with the quantity η^D in this paper, it is to be understood as a discretization of viscosity. This paper is organized as follows. In Sec. <ref> we briefly review the Hall drag between two normal metals and contrast it with the WSM-normal metal system. We describe the phenomena on the level of a qualitative picture. In Sec. <ref> we define the model and outline the key steps of the microscopic calculation. We identify the parts of the drag response originating from parallel and perpendicular friction. In Sec. <ref> we compute the Hall drag conductivity and resistivity. In Sec. <ref> we summarize the results and give a brief outlook for future directions. The technical details are delegated to the Appendices. A technical appendix includes the details of the calculations using the Keldysh formalism. § HALL DRAG FROM THE KINETIC EQUATION: A QUALITATIVE DISCUSSION In this section, we present a qualitative picture of the Coulomb drag from the point of view of kinetic theory. We start with the standard setup of Coulomb drag in normal metals. §.§ Hall drag between two normal metals in a magnetic field Before considering the Hall drag from a WSM, we briefly review the Hall Coulomb drag of a normal metal-metal bilayer system in a magnetic field. We consider the simplest case of a parabolic dispersion ϵ_k=k^2/(2m) and constant relaxation times for both layers (for convenience, we set ħ=k_B=1). In this case, while the Hall drag conductivity σ_xy^D is non-zero, a cancellation leads to a vanishing Hall drag resistivity ρ_xy^D=0. To show this, we start with the Boltzmann equation for the electron distribution functions in constant and spatially uniform fields. In this case, the steady-state distribution function satisfies e(E^l+v_k^l×B^l)·∂ f^l/∂k=I^e-e (l,l̅)[f^l,f^l̅]+I^dis.(l)[f^l]. Here, the index l∈(a,p) denotes the active and passive layers, e is the electron charge, v_k^l=∂ϵ_k^l/∂k is the electron velocity, E^l,B^l are the electric and magnetic fields in each layer, and I^e-e (l,l̅),I^dis.(l) are the collision integrals corresponding to interlayer electron-electron and electron-disorder scattering, respectively. We assume disorder to be the dominant relaxation mechanism in both layers, so that intralayer electron-electron scattering can be neglected (the opposite case of dominant intralayer e-e scattering corresponds to the hydrodynamic regime, studied in the context of Coulomb drag in a magnetic field in Refs. <cit.>). In the relaxation time approximation, the disorder collision integral reads I^dis.(l)[δ f_k]=-δ f_k^l/τ^l, with δ f_k^l≡ f_k^l-f_0(ϵ_k^l) being the non-equilibrium part of the distribution (f_0(ϵ) being the Fermi-Dirac distribution) and τ^l describing the momentum relaxation time in the layer. The non-equilibrium part of the distribution functions can be parametrized as a boosted velocity distribution δ f_k^l=-∂ f_0(ϵ_k^l)/∂ϵ_k^lk·u^l, with u^l being the boost velocity. In the active layer, the boost velocity is related to the external fields by (assuming the disorder scattering rate to be much faster than the interlayer e-e scattering rate) u^a=(eτ^a/m)(E^a+(eτ^a/m)E^a×B^a)/(1+(eB^aτ^a/m)^2). The boosted velocity distribution [Eq. (<ref>)] corresponds to an electric current j^l=en^lu^l, where n^l is the carrier density of the layer. To analyze the Coulomb drag, it is useful to consider the force balance acting on the electrons in each layer. To do so, we multiply the Boltzmann equation [Eq. (<ref>)] by the momentum k and integrate over k <cit.>. 
This yields -en^lE^l-j^l×B^l=F^l,l̅-m/eτ^lj^l, where F^l,l̅≡(dk)kI_k^e-e (l,l̅) is the momentum transfer rate between the layers due to the interlayer collisions (the drag force). To linear order in the boost velocities, F^l,l̅ is given by F^l,l̅=η^D/d(u^l̅-u^l)=η^D/d1/e(j^l̅/n^l̅-j^l/n^l), where d is the interlayer distance and η^D is a scalar coefficient with dimensions of viscosity[We note that Ref. <cit.> defines a different drag viscosity constant ν_D, which quantifies an interlayer drag force response to the velocity gradients in a single layer. We express the conventional drag force with the coefficient η^D with dimensions of viscosity, conceptualizing drag as a response to the velocity difference along the axis perpendicular to the bilayer system (the direction normal to the layers).]. The drag force F^l,l̅ can be interpreted as a friction force arising from the relative boost velocity between the layers. For the drag resistivity ρ_αβ^D≡ E_α^p/j_β^a, one sets j^p=0 and computes E^p, finding eE^p=-F^p,a=-η^Dj^a/(en^ad). Thus, the resulting voltage in the passive layer is parallel to j^a, and the drag resistivity is purely longitudinal, i.e., ρ_xy^D=0. However, for the drag conductivity σ_αβ^D≡ j_α^p/E_β^a, one sets E^p=0 and computes j^p. In the absence of a magnetic field in the passive layer, j^p aligns with j^a [Eqs. (<ref>) and (<ref>)] and thus a transverse component in j^a creates a corresponding one in j^p, leading to a finite Hall drag conductivity σ_xy^D. The fact that j^p∥j^a implies that the ratio σ_xy^D/σ_xx^D is equal to the Hall ratio of the conductivities of the active layer, σ_xy^a/σ_xx^a. A non-zero magnetic field in the passive layer rotates j^p relative to j^a [Eq. (<ref>)], and the general result is <cit.> σ_xy^D=σ_xx^D∑_l=p,aσ_xy^l/σ_xx^l. In the case of energy-dependent relaxation times τ^l or non-parabolic dispersion, electrons at different energies are boosted with different velocities. Therefore, the momentum-relaxing force due to disorder scattering is no longer given by the rightmost term in Eq. (<ref>), and may exist even in the absence of a current <cit.>. Additionally, Eq. (<ref>) for the drag force is no longer valid. In that case, there is a weak ρ_xy^D∼ T^4 signal in the regime of low temperatures (T≪ v_F/d), which is usually considered. However, we note that for high temperatures (T≫ v_F/d), energy-dependent lifetimes or non-parabolic dispersion lead to ρ_xy^D∼ T, which is the same temperature dependence as of ρ_xx^D in this regime. §.§ Anomalous Hall drag between WSM and a normal metal We now proceed to the case which is the focus of our work, Coulomb drag between a TRS-broken WSM and a normal metal. Due to Onsager's symmetry relations <cit.>, the tensor of the kinetic coefficients is symmetric up to a reversal of the direction of the magnetic field. This implies that the drag conductivity and resistivity tensors are the same regardless of which layer is chosen as the active (passive) layer, up to a to a sign change in the off-diagonal (Hall) components, ρ_xy^D→-ρ_xy^D and σ_xy^D→-σ_xy^D. From now on, we focus on the case where the WSM is chosen as the active layer and the normal metal is the passive layer. First, we discuss the Coulomb drag in this system on a qualitative level. When an electric field E^a acts on the WSM layer, it induces Hall drag current in the passive layer through two mechanisms. 
The first is due to the transverse component of the Fermi surface shift in the active layer (the component of the boost velocity u^a perpendicular to E^a). It appears due to disorder skew scattering in the WSM, enabled by the broken TRS. The transverse part of u^a induces a corresponding transverse Fermi surface shift in the passive layer. This part of the drag is intuitively clear. The leading interlayer e-e scattering processes drive the two layers to equilibrate their momenta, aligning their boost velocities. We thus call this mechanism parallel friction. This mechanism contributes to the Hall drag conductivity, but not to the Hall drag resistivity, due to the same reasoning as given in the previous section (under the condition of low temperatures, since the WSM spectrum is non-parabolic). The second mechanism of Hall drag arises due to a skew-like term in the interlayer electron-electron scattering rate, originating from the spin-orbit coupling in the WSM. The interlayer skew scattering gives rise to a transverse momentum exchange between the two layers, which we refer to as Hall friction. An important difference between the drag between normal metals and the WSM-normal metal system comes from the anomalous current in the WSM layer. The electric current in the WSM consists of both a normal and an anomalous part, resulting in an angle between the current and the boost velocity in the WSM. Both Hall friction and the anomalous current contribute to a non-zero Hall drag resistivity ρ_xy^D. The summary of this qualitative picture is depicted in Fig. <ref>. In a more technical sense, we can express the points mentioned above as follows. Due to the WSM spinor structure, the interlayer e-e scattering rate acquires a skew-scattering part. Schematically, a representative component of the skew-scattering rate is given by W_k,k_1→k',k_1'^e-e [skew] (p,a)∼(k_1×k_1')·M, where M is a vector parametrizing the TRS-breaking in the WSM, k,k' are the momenta of the passive layer electrons, and k_1,k_1' are the momenta of the WSM electrons. This scattering process results in a drag force perpendicular to the boost velocity of the active layer, ∑_kkI^e-e [skew] (p,a)∝η_H^Du^a×M, with η_H^D being a drag coefficient in the transverse direction. This drag response from skew-scattering collisions resembles Hall viscosity <cit.>, with η_H^D being an anti-symmetric part of a response tensor[In more detail, the viscosity tensor η is a four-index tensor satisfying σ_αβ=η_αβγδ∂ v_δ/∂ x_γ, with σ_αβ being the stress tensor and v being the fluid velocity. The Hall viscosity is anti-symmetric with respect to the exchange of the pairs α,β↔γ,δ. The momentum transfer in Coulomb drag is analogous to a viscous stress response with α=γ=z, z being the axis perpendicular to the layers. The coefficient we denote as η_H^D is thus analogous to the anti-symmetric component (η_zxzy-η_zyzx)/2 of the viscosity tensor.]. We will thereby refer to this part of the friction as Hall friction. In addition, the current in the WSM consists of both a regular and an anomalous part, j^a=j_reg.^a+j_anomal.^a. The part j_reg.^a corresponds to the regular part of the velocity operator, j_reg.^a≡ e∑_n,kf_nk^av_nk^a=e∑_n,kf_nk^a∂ϵ_nk^a/∂k, where n denotes the band index. For a boosted velocity distribution, j_reg.^a is given by Eq. (<ref>). The anomalous part of the current can be attributed to the off-diagonal (in band space) elements of the velocity operator. 
In the semiclassical language, these can be taken into account as corrections to the velocity operator from the Berry curvature and side jumps (known as intrinsic and extrinsic velocities, respectively <cit.>), such that (see Appendix <ref> for more details) j_anomal.^a≡ e∑_n,kf_nk^a(v_nk^int.+v_nk^ext.). Importantly, the intrinsic part of the current (corresponding to v_nk^int.) is a thermodynamic contribution which remains finite even in the absence of a Fermi-surface, as in the case of a Chern insulator <cit.>. On the other hand, Coulomb drag arises from real transitions on the Fermi surface due to e-e scattering, and thus vanishes in the presence of a gap <cit.>. We now proceed to the specific model and microscopic calculations. § MODEL AND OUTLINE OF THE CALCULATION §.§ The model The non-interacting Hamiltonians of the layers are given by H^a =∑_ξ=±1v_F(ξσ+C_ξt̂)·(k-ξΔ_k/2)+V_imp^a, H^p =k^2/2m+V_imp^p. The Weyl Hamiltonian consists of two tilted Weyl nodes with opposite chiralities ξ=±1 separated in momentum space by Δ_k. The constants C_ξ describe the tilt in the nodes, which is essential to break the TRS of the Fermi surface of a single node for a finite Hall drag. We consider C_±=± C with |C|<1, for which the Weyl nodes are known as type-I <cit.>. For simplicity, we take t̂=Δ̂_k=ẑ with ẑ being the axis perpendicular to the layers, so that the AHE in the WSM is in the x-y plane. Electrons in both layers are coupled via the Coulomb interaction, H^e-e=∑ V_αβ,α'β'^e-e,ll'c_α,l^†c_β,l'^†c_β'l'c_α'l, where l,l' are layer indices and α,α',β,β' represent the electron states. The disorder potential V_imp^l in each layer is characterized by a scattering time τ^l. In our analysis, we make the following assumptions: * The interlayer distance and the momentum distance between the Weyl nodes satisfy d≫1/Δ_k. In this regime, interlayer scattering involving internode transitions is negligible, and the total drag is a sum of contributions from the two independent Weyl nodes. * The disorder potential in both layers is characterized by a Gaussian white-noise potential, with no correlation between the layers. * The interlayer e-e scattering time is much longer than the momentum relaxation time due to the disorder. * The interlayer distance is much smaller than the disorder mean free path of both layers, d≪ v_F^lτ^l with v_F^l being the Fermi velocity of layer l (the ballistic limit of the Coulomb drag). * The thickness of the two layers is much smaller than the interlayer distance. In this limit, the Coulomb interaction simplifies to the Coulomb interaction between 2D layers [Eq. (<ref>) in Appendix A]. * The thickness of the WSM is much larger than the interatomic distance, so that momentum sums in the z-axis can be approximated as integrals. * Both layers are weakly interacting Fermi gases at the low temperature limit, i.e., T≪ϵ_F^l (ϵ_F^l being the Fermi energy of layer l). We now proceed to outline the computation of the drag conductivity in the model. §.§ Outline of the microscopic calculation First, we find the distribution function of the active WSM layer in the presence of an electric field E^a, disregarding the interlayer e-e collisions. The solution of this problem (non-interacting AHE in the model of tilted WSM) is known <cit.>. Here we compute the matrix (in band space) distribution function <cit.> via the Keldysh formalism (see Appendix <ref>). The off-diagonal elements of the Keldysh distribution function of the WSM are small in the dimensionless parameter 1/(ϵ_F^aτ^a). 
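As a sanity check of the single-node model, the following NumPy sketch builds the 2×2 Bloch Hamiltonian of one tilted Weyl node and verifies that its spectrum is the tilted cone ϵ_±(q) = v_F C_ξ q_z ± v_F|q|, with q measured from the node; the parameter values are illustrative and ħ = 1 as in the text.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
s_vec = np.array([sx, sy, sz])

def weyl_node(q, v_f=1.0, xi=+1, c_tilt=0.3, t_hat=(0.0, 0.0, 1.0)):
    """2x2 Bloch Hamiltonian of a single tilted Weyl node,
    H_xi(q) = v_F (xi * sigma + C_xi * t_hat) . q, with q measured from the node."""
    q = np.asarray(q, dtype=float)
    t_hat = np.asarray(t_hat, dtype=float)
    h = v_f * xi * np.tensordot(q, s_vec, axes=1)      # xi * sigma . q
    h = h + v_f * c_tilt * np.dot(t_hat, q) * np.eye(2)  # tilt term, proportional to identity
    return h

# Check the tilted-cone dispersion eps_pm = v_F*C*q_z +/- v_F*|q| for random q.
rng = np.random.default_rng(0)
for _ in range(5):
    q = rng.normal(size=3)
    eps = np.linalg.eigvalsh(weyl_node(q))
    expected = np.sort([0.3 * q[2] - np.linalg.norm(q),
                        0.3 * q[2] + np.linalg.norm(q)])
    assert np.allclose(eps, expected)
```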
This enables us to express the off-diagonal matrix elements in terms of the diagonal ones, and consequently to write the e-e collision integral as a functional of only the semiclassical distribution function f_nk^a. Due to disorder skew scattering in the WSM, an electric field E^a shifts its Fermi surface in a direction rotated relative to the field. The correction to the distribution function, δ f_nk^a≡ f_nk^a-f_0(ϵ_nk^a), is given by δ f_nk^a=-∂ f_0/∂ϵ_nk^av_nk^a·(eE^a+τ_nk,∥^a/τ_nk,⊥^aeE^a×ẑ)τ_nk,∥^a, where τ_∥^a and τ_⊥^a are the momentum relaxation times in the directions parallel and perpendicular to the electric field (in our model τ_∥^a/τ_⊥^a≪1, see Appendix <ref> for details). We note that δ f_nk^a accounts for the entire non-equilibrium part of the distribution function, including the part known as the anomalous distribution arising from the side-jump correction to the disorder collision integral[Skew scattering from two adjacent impurities, known as the contribution from crossing diagrams, should also be included in δf^a <cit.>. We neglect the crossing diagrams in this work to simplify the derivation. The inclusion of these diagrams amounts to the renormalization of the skew-scattering rate τ_nk^a,⊥.] <cit.>. Having solved the distribution function in the active layer, we can now solve the Boltzmann equation for the passive layer [Eq. (<ref>)] by substituting the active distribution function in the interlayer collision integral. Setting E^p=0 in Eq. (<ref>) for the calculation of the drag conductivity, the Boltzmann equation for the passive layer reads 0=I^e-e (p,a)[f^p,f^a]+I^dis. (p)[f^p], where we remind that I^e-e (p,a) and I^dis. (p) are the collision integrals due to interlayer e-e scattering and disorder scattering, respectively. Within the relaxation time approximation, the disorder collision integral in the passive layer is given by I^dis. (p)[f^p]=-δ f_k^p/τ^p, where δ f_k^p≡ f_k^p-f_0(ϵ_k^p) is the non-equilibrium part of the distribution function in the passive layer. The interlayer e-e collision integral is given by I_k^e-e (p,a)[f^p,f^a] =-W∑_ξ=±1∑_n_1,n_1'_k',k_1,k_1'[w_k,n_1k_1→k',n_1'k_1'^e-ef_k^pf_n_1k_1^a(1-f_k'^p)(1-f_n_1'k_1'^a) -w_k',n_1'k_1'→k,n_1k_1^e-ef_k'^pf_n_1'k_1'^a(1-f_k^p)(1-f_n_1k_1^a)]. Here, we denote _k≡ dk/(2π)^d for the momentum integrations (d=2,3 for the metal and the WSM layers, respectively). We omit the Weyl node index ξ for objects in the integrand, recalling that we neglect internode scattering. Since disorder scattering is the dominant momentum relaxation mechanism in the passive layer, one may replace in the interlayer collision integral [Eq. (<ref>)], making it a functional of only the active distribution function, I_k^e-e (p,a)[f^a]≡ I_k^e-e (p,a)[f_k^p=f_0(ϵ_k^p),f^a]. The factor W (the WSM layer's thickness) in Eq. (<ref>) is due the quasi-2D nature of the interlayer scattering (see Appendix <ref>). Note that although the collision integral in Eq. (<ref>) looks like a standard e-e collision integral, this is not the case. The complexity is hidden in the interlayer scattering rate w_k,n_1k_1→k',n_1'k_1'^e-e, which is computed taking into account virtual transitions in the WSM (see Appendix <ref> for more details). Such processes are crucial for the Coulomb Hall drag. Among such processes is the interference between interlayer e-e scattering and intermediate disorder scattering, breaking the momentum conservation of the incoming and outgoing electrons. Therefore, one cannot assume k_1+k=k'+k_1' in the integrand of Eq. 
(<ref>) as is usually the case. Substituting Eq. (<ref>) into Eq. (<ref>), one finds δ f_k^p=τ^pI_k^e-e (p,a)[f^a]. Employing Eq. (<ref>), one finds electric current in the passive layer j^p=e_kv_k^pδ f_k^p. The drag conductivity is given by σ_αβ^D≡j_α^p/E_β^a. For a passive layer with a parabolic spectrum and within the relaxation time approximation, one can relate the drag current to the drag force (or, momentum transfer rate between the layers) F^p,a≡_kkI_k^e-e (p,a). Employing Eqs. (<ref>) and (<ref>), one finds j^p=eτ^p/mF^p,a[f^a]. Thinking of the Coulomb drag in terms of forces gives additional insight. In the experimentally prevalent regime of low temperatures (T≪ T_d, with T_d≡min(v_F^a,v_F^p)/d), the Hall drag conductivity can be divided into two parts[The meaning of the energy scale T_d is as follows <cit.>: the typical scale of momentum transfer in an interlayer collision is determined by the interlayer screening as q∼1/d [see Eq. (<ref>)] For this typical momentum, T_d is the maximum energy allowing a particle-hole excitation in both layers.]: * Parallel friction: Drag force F^p,a which is parallel to the Fermi surface shift in the active layer. Because an electric field E^a in the WSM layer creates a perpendicular component to the Fermi surface shift due to skew scattering in the WSM [second term in Eq. (<ref>)], parallel friction creates a corresponding component in the passive layer current which is perpendicular to E^a, i.e., Hall drag current. * Hall friction: Drag force F^p,a which is perpendicular to the Fermi surface shift in the active layer. This part of the drag arises due to the many-body skew-scattering part of the interlayer collision integral. In the opposite regime of high temperatures (T≫ T_d), the picture is complicated by the energy dependence of the Fermi-surface shift in the active layer [Eq. (<ref>)]. In this case, the drag force can be decomposed into three components: parallel and perpendicular to the Fermi-surface shift as in the previous case, as well as a component related to the energy dependence of the Fermi-surface shift. In this case, even on the level of a simple interlayer collision integral (disregarding the interlayer skew-scattering part), the drag force F^p,a is generally not parallel to the Fermi surface shift. Having outlined the main steps of the calculation, we now turn to the computation of the drag conductivity and resistivity. § RESULTS: DRAG CONDUCTIVITY AND RESISTIVITY First we present the interlayer e-e collision integral in more detail, introducing its skew-scattering part. §.§ Interlayer e-e collision integral The collision integral can be written in the general form of Eq. (<ref>), with the interlayer e-e scattering rate separated into contributions from three different processes, w_k,n_1k_1→k',n_1'k_1'^e-e=w_k,n_1k_1→k',n_1'k_1'^Born+w_k,n_1k_1→k',n_1'k_1'^s.j.+w_k,n_1k_1→k',n_1'k_1'^e-e-imp. The term w^Born refers to the part calculated on the level of the Born approximation within the RPA (random-phase approximation) approach, resulting in a rate proportional to the square of the screened Coulomb potential [Eq. (<ref>) in the Appendix]. It is an even function of the angle between the momenta of the scattering electrons in the WSM. Although it is the largest scattering amplitude, the two other scattering processes are of equal importance to Hall drag, since they give rise to interlayer skew scattering. The term w^s.j. 
corresponds to a correction due to side jumps of the WSM electrons; w^e-e-imp corresponds to interference between e-e scattering and e-impurity scattering in the WSM. These scattering rates are calculated using the Keldysh formalism, accounting for processes involving virtual transitions in the WSM layer. The virtual transitions correspond to interband elements of the matrix Green functions. The full expressions for these rates are presented in Appendix <ref> [Eq. (<ref>) for w^s.j. and Eq. (<ref>) for w^e-e-imp]. We now briefly describe the physical processes giving rise to the skew-scattering terms. The side-jump process modifies the interlayer e-e collision integral in an analogous way to how it modifies the electron-disorder collision integral <cit.>. In the context of interlayer e-e scattering, a WSM electron acquires a coordinate shift when it scatters from the incoming to the outgoing state (thus, “side jump”). In the presence of an external field, this coordinate shift changes the electric potential energy of the electron. Therefore, the energy conservation condition for the e-e scattering process is modified. Consequently, the side-jump scattering rate w^s.j. is proportional to the applied electric field E^a, and scales linearly with it in the low field limit. Therefore, on the level of the linear response, one replaces the distribution functions in Eq. (<ref>) with their equilibrium values, f_nk^a→ f_0(ϵ_nk^a). Even in this approximation of equilibrium distribution functions, the side-jump scattering rate results in a finite contribution to the collision integral. Next we discuss the last term in Eq. (<ref>), w_k,n_1k_1→k',n_1'k_1'^e-e-imp. It involves scattering through an intermediate state, and is proportional to the complex phase of the Bloch wavefunctions acquired during this scattering. Because this term involves both e-e and disorder scattering, it does not conserve the total electron momenta, unlike the other scattering processes discussed above, which are proportional to the delta function δ_k+k_1-k'-k_1'. We now move on to the calculation of the drag conductivities. §.§ Drag conductivity For the clarity of computation, we focus on two limiting cases: low temperatures (T≪ T_d) and high temperatures (T≫ T_d) (we remind the reader the definition T_d≡min(v_F^a,v_F^p)/d). §.§.§ Low temperatures In the regime of low temperatures (T≪ T_d), the distribution function δ f_nk^a [Eq. (<ref>)] can be approximated by a boosted velocity distribution [Eq. (<ref>)]. This is done by replacing v_nk^a≈(v_F^a)^2k/ϵ_F^a (this misses a term in the z-component of v_nk^a, but we are interested in the components in the x-y plane) and neglecting the energy dependence of the relaxation times. These approximations are justified since the particle-hole scattering is predominantly perpendicular in both layers [i.e., q⊥v_k where q is the momentum exchange in the collision and v_k is the electron velocity, making the collision quasi-elastic. For a detailed discussion, see the text following Eq. (<ref>) of the Appendix]. This accuracy is sufficient to account for the leading part of the drag conductivities in the small parameter T/T_d. We introduce the parametrization δ f_nk^a=-T∂ f_0(ϵ_nk^a)/∂ϵ_nk^ag_nk^a. After substituting Eq. (<ref>) in Eq. (<ref>), one finds g_nk^a=k·u^a/T, where u^a is the boost velocity in the active layer, given by u_α^a=(v_F^a)^2τ_∥^a/ϵ_F^a(δ_αβ+ϵ_αβτ_∥^a/τ_⊥^a)eE_β^a. Here, the momentum relaxation times τ_∥^a,τ_⊥^a are computed at the Fermi energy (see Appendix <ref>). 
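In the x-y plane, the boost velocity above is simply a 2×2 linear map acting on the applied field. A minimal sketch (natural units, illustrative parameter values) that also exposes the rotation of the Fermi-surface shift by an angle of order τ_∥^a/τ_⊥^a:

```python
import numpy as np

def boost_velocity(E, v_f, eps_f, tau_par, tau_perp, e=1.0):
    """Boost velocity u^a of the WSM layer at low T, in the x-y plane.

    E        : in-plane electric field (E_x, E_y)
    tau_par  : longitudinal momentum relaxation time tau_parallel
    tau_perp : skew-scattering time tau_perp (tau_par/tau_perp << 1)
    """
    eps2d = np.array([[0.0, 1.0], [-1.0, 0.0]])  # 2D Levi-Civita symbol
    response = (v_f**2 * tau_par / eps_f) * (np.eye(2) + (tau_par / tau_perp) * eps2d)
    return response @ (e * np.asarray(E, dtype=float))

# Illustrative numbers (natural units): the Fermi-surface shift is rotated
# relative to E by an angle of magnitude ~ tau_par/tau_perp.
u = boost_velocity(E=(1.0, 0.0), v_f=1.0, eps_f=1.0, tau_par=1.0, tau_perp=50.0)
angle = np.arctan2(u[1], u[0])
print(u, angle)  # |angle| ~ tau_par/tau_perp = 0.02 rad
```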
We now substitute the distribution function in the active layer with the non-equilibrium part given by Eq. (<ref>) into the interlayer e-e collision integral [Eq. (<ref>) with the scattering rates given in Eq. (<ref>)] , and derive the linearized interlayer collision integral I_k^e-e (p,a) =-W/T∑_ξ=±1∑_n_1,n_1'_q,k_1,k_1'f_0(ϵ_k^p)f_0(ϵ_n_1k_1^a)(1-f_0(ϵ_k+q^p))(1-f_0(ϵ_n_1'k_1'^a)) ×[w_k,n_1k_1→k+q,n_1'k_1'^Bornq·u^a+w_k,n_1k_1→k+q,n_1'k_1'^e-e-imp(k_1-k_1')·u^a+2w_k,n_1k_1→k+q,n_1'k_1'^s.j.T]. Here, we utilized the momentum conservation of the process corresponding to w^Born, forcing k_1'=k_1-q in this term. We now calculate the drag force F^p,a between the layers by substituting Eq. (<ref>) into Eq. (<ref>). The resulting drag force can be written as a linear response to the boost velocity in the active layer. We identify the generation of diagonal and Hall-like responses, writing F_α^p,a=(δ_αβη_∥^D+ϵ_αβη_H^D)u_β^a/d. Here, the diagonal response η_∥^D is generated by the Born-approximation part of the collision integral, and the Hall-like response η_H^D comes from the many-body skew-scattering processes corresponding to e-e-impurity interference and side jumps[We note that w^s.j. strictly generates a term in the drag force that is proportional to E^a rather than to u^a. We have substituted E^a≈ϵ_F^a/((v_F^a)^2τ^a)u^a in that term, neglecting a further subleading term in 1/(ϵ_F^aτ^a) which is beyond the accuracy of our calculations.]. Calculation of the momentum transfer with the total interlayer scattering rate (see Appendix <ref>) yields η_∥^D =πζ(3)/32T^2/v_F^av_F^pκ^aκ^pd^3, η_H^D =-C/2ϵ_F^aτ^aη_∥^D. Here, ζ(z) is the Riemann Zeta function, and κ^l=2π e^2ν_2d^l/ϵ_r is the Thomas-Fermi wavevector of layer l with 2D density of states ν_2d^l (ν_2d^p≡ν^p for the metal layer and ν_2d^a≡ν^aW for the WSM layer) and dielectric constant ϵ_r. We have thus found the dependence of the drag force on the active layer boost velocity. One can readily find the drag force F^p,a as a function of the electric field in the active layer by substituting Eq. (<ref>) into Eq. (<ref>). The dragged current is proportional to the drag force [Eq. (<ref>)]. Employing Eq. (<ref>), one finds the drag conductivity. For the longitudinal component, one finds σ_xx^D=e^2ℓ^aℓ^p/k_F^ak_F^pη_∥^D/d=e^2πζ(3)/64T^2/ϵ_F^aϵ_F^pℓ^aℓ^p/κ^aκ^pd^4, where ℓ^l≡ v_F^lτ_∥^l is the mean free path in layer l (for the passive layer, τ_∥^p=τ^p). The result for the longitudinal drag conductivity differs from that of the drag between two 2D metals <cit.> by a numerical factor due to the dimensionality of the WSM layer. Next we discuss the Hall drag conductivity. Since the boost velocity in the active layer is rotated relative to the electric field, a corresponding Hall current is dragged in the passive layer by the parallel friction. Denoting the corresponding contribution to the Hall drag conductivity by σ_xy^D[η_∥^D], one finds σ_xy^D[η_∥^D]=τ_∥^a/τ_⊥^aσ_xx^D=3C/2ϵ_F^aτ^aσ_xx^D. The Hall friction gives rise to a force F_α^H≡ϵ_αβη_H^Du_β^a/d perpendicular to the boost velocity. This results in an additional contribution to the Hall drag conductivity σ_xy^D[η_H^D]=e^2ℓ^aℓ^p/k_F^ak_F^pη_H^D/d. Substituting the value of the Hall response coefficient η_H^D, one gets σ_xy^D[η_H^D]=-C/2ϵ_F^aτ^aσ_xx^D. The total Hall drag conductivity is additive in the contributions, σ_xy^D=σ_xy^D[η_∥^D]+σ_xy^D[η_H^D]=C/ϵ_F^aτ^aσ_xx^D. 
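For quick estimates, the closed-form low-temperature results above can be packaged as follows; natural units ħ = k_B = 1 are used as in the text, the distinction between τ^a and τ_∥^a is ignored for simplicity, and the input values in the example call are dimensionless placeholders rather than material parameters.

```python
from math import pi

ZETA3 = 1.2020569031595943  # Riemann zeta(3)

def drag_conductivities_low_T(T, d, C, v_fa, v_fp, eps_fa, eps_fp,
                              kappa_a, kappa_p, tau_a, tau_p, e=1.0):
    """Low-temperature (T << T_d) drag coefficients and conductivities,
    following the closed-form expressions quoted above (natural units)."""
    # Parallel and Hall drag (friction) coefficients
    eta_par = pi * ZETA3 / 32 * T**2 / (v_fa * v_fp * kappa_a * kappa_p * d**3)
    eta_hall = -C / (2 * eps_fa * tau_a) * eta_par

    # Mean free paths (tau_parallel ~ tau assumed here) and longitudinal drag conductivity
    l_a, l_p = v_fa * tau_a, v_fp * tau_p
    sigma_xx = (e**2 * pi * ZETA3 / 64 * T**2 / (eps_fa * eps_fp)
                * l_a * l_p / (kappa_a * kappa_p * d**4))

    # Two contributions to the Hall drag conductivity and their sum
    sigma_xy_parallel = 1.5 * C / (eps_fa * tau_a) * sigma_xx   # parallel friction
    sigma_xy_hall = -0.5 * C / (eps_fa * tau_a) * sigma_xx      # Hall friction
    sigma_xy = sigma_xy_parallel + sigma_xy_hall                # = C/(eps_fa*tau_a)*sigma_xx

    return {"eta_par": eta_par, "eta_hall": eta_hall,
            "sigma_xx_D": sigma_xx, "sigma_xy_D": sigma_xy}

# Illustrative (dimensionless) inputs only:
print(drag_conductivities_low_T(T=0.01, d=10.0, C=0.1, v_fa=1.0, v_fp=0.5,
                                eps_fa=1.0, eps_fp=0.5, kappa_a=1.0, kappa_p=1.0,
                                tau_a=100.0, tau_p=100.0))
```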
The ratio σ_xy^D/σ_xx^D≃ C/(ϵ_F^aτ^a) is parametrically equal to the ratio between the Fermi-surface contribution of the Hall conductivity and the longitudinal conductivity in the non-interacting WSM. Note that unlike the anomalous Hall conductivity, which has an intrinsic contribution proportional to the momentum distance between the Weyl nodes (σ_xy^int∼Δ_k), the Hall drag conductivity has no bulk contribution <cit.>. §.§.§ High temperatures In the high-temperature limit (T≫ T_d), an additional complication arises due to the deviation of the distribution function in the active layer [Eq. (<ref>)] from a boosted velocity distribution. This deviation is due to the non-parabolic spectrum of the WSM and the energy dependence of the relaxation times. In this case, one needs to replace Eq. (<ref>) with an “energy-dependent boost velocity” ansatz, g_nk^a=k·u^a(ϵ_nk^a)/T, where u^a(ϵ) is a boost velocity at a given energy, analogous to Eq. (<ref>) evaluated at energy ϵ, u_α^a(ϵ)=(v_F^a)^2τ_∥^a(ϵ)/ϵ(δ_αβ+ϵ_αβτ_∥^a(ϵ)/τ_⊥^a(ϵ))eE_β^a. Note that we have neglected the anisotropy in the WSM by approximating u^a(ϵ,k̂)∼u^a(ϵ), taking into account the leading order in the tilt parameter C. By calculating the momentum transfer from the interlayer e-e collision integral as in the previous section, we now find the drag force F_α^p,a=(δ_αβη_∥(0)^D+ϵ_αβη_H^D)u_β^a/d+δ_αβη_∥(1)^Dϵ_F^a/d.∂ u_β^a(ϵ)/∂ϵ|_ϵ=ϵ_F^a. The terms η_∥(0)^D and η_H^D correspond respectively to the Born-approximation and skew-scattering rates as in the previous case (η_∥(0)^D has the same meaning as the non-indexed η_∥^D in the previous section). In addition, there is a term proportional to the energy derivative of the active layer's boost velocity, which was not present in the previous case. This term arises even within the Born-approximation part of the interlayer collision integral. Generally, the vector [∂u^a(ϵ)/∂ϵ]_ϵ=ϵ_F^a is not parallel to u^a, and therefore even in the Born-approximation level, the interlayer collision integral generates momentum transfer perpendicular to u^a. The drag coefficients in the high-temperature and small-tilt (C≪1) limits are given by η_∥(0)^D =η̅Q_1(v_F^p/v_F^a), η_∥(1)^D =η̅Q_2(v_F^p/v_F^a)×min[1,(v_F^p/v_F^a)^2], η_H^D =-C/2ϵ_F^aτ^aη̅Q_3(v_F^p/v_F^a), where we defined η̅^D≡π^3/480T_dT/v_F^av_F^pκ^aκ^pd^3, and Q_1,2,3(z) are factors of order one given in the Appendix [Eqs. (<ref>)-(<ref>)]. Note that η_∥(1)^D [the drag coefficient multiplying ∂u^a(ϵ)/∂ϵ] is suppressed in the limit v_F^p/v_F^a≪1. This comes from the phase-space restrictions of the interlayer e-e scattering. The particle-hole pairs in both layers have to satisfy v_F^l·q=ω, with ω being the energy transfer in the collision. In the case v_F^p/v_F^a≪1, forward scattering is suppressed in the WSM (i.e., scattering with q parallel to the velocity of the WSM electron v_k_1^a). Thus, the effect of the energy dependence of the boost velocity on the drag is negligible in this limit. It is reasonable to assume that the typical energy dependence of the transport times in realistic materials is a power-law function. Consequently, the boost velocity has a power-law energy dependence [see Eq. (<ref>)]. Generally, the parallel and perpendicular (relative to the electric field) components of the boost velocity may have different scaling with energy. We denote these components [first and second terms in Eq. (<ref>)] as u_∥^a(ϵ)∼ϵ^b_∥ and u_⊥^a(ϵ)∼ϵ^b_⊥. 
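The exponents b_∥ and b_⊥ follow directly from the assumed energy dependence of the transport times. The sketch below extracts them numerically from the energy-dependent boost velocity via a logarithmic derivative; the power laws assumed for τ_∥^a(ϵ) and τ_⊥^a(ϵ) are illustrative choices (made to reproduce the model values quoted below), and overall prefactors are dropped since only the exponents matter here.

```python
import numpy as np

def u_components(eps, v_f=1.0, e_field=1.0, e=1.0):
    """Parallel and perpendicular components of the energy-dependent boost
    velocity u^a(eps) for an in-plane field along x.

    Illustrative power laws (an assumption of this sketch):
        tau_par(eps) ~ eps**(-2),   tau_perp(eps) ~ eps**(-3)
    """
    tau_par = eps**(-2.0)
    tau_perp = eps**(-3.0)
    prefactor = v_f**2 * tau_par / eps * e * e_field
    u_par = prefactor                          # component along E
    u_perp = prefactor * (tau_par / tau_perp)  # component along E x z
    return u_par, u_perp

# Extract the scaling exponents b = d ln u / d ln eps numerically.
eps = np.geomspace(0.5, 2.0, 201)
u_par, u_perp = u_components(eps)
b_par = np.gradient(np.log(u_par), np.log(eps)).mean()
b_perp = np.gradient(np.log(u_perp), np.log(eps)).mean()
print(b_par, b_perp)   # ~ -3.0 and -2.0 for the assumed power laws
```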
The drag conductivities are given by σ_xx^D =e^2ℓ^aℓ^p/k_F^ak_F^pd(η_∥(0)^D+b_∥η_∥(1)^D), σ_xy^D[η_∥^D] =e^2ℓ^aℓ^p/k_F^ak_F^pdτ_∥^a/τ_⊥^a(η_∥(0)^D+b_⊥η_∥(1)^D), and σ_xy^D[η_H^D] still given by Eq. (<ref>). In our model, b_∥=-3 and b_⊥=-2. Thus, the drag force computed within the Born approximation [the terms accounted by η_∥(0)^D and η_∥(1)^D in Eq. (<ref>)] in our model is indeed not parallel to u^a. Since b_∥ and b_⊥ are negative, the two terms in both Eqs. (<ref>) and (<ref>) are of opposite sign. Depending on the numerical prefactors, this may result in an opposite sign for the drag conductivities in the two limits of T≪ T_d and T≫ T_d, and thus lead to a non-monotonous temperature dependence. Physically, the non-monotonous behavior can be understood as follows. When u_α^a(ϵ) is a decreasing function of the energy, quasi-elastic and forward (strongly inelastic) interlayer scattering processes give opposite contributions to the drag force F_α^p,a. Since forward scattering gives a significant contribution only for temperatures T≳ T_d, a non-monotonous temperature dependence of the drag conductivities may arise. The quasi-elastic contribution to the drag is conventional, and its direction depends on the signs of the curvatures of the single-particle spectrum in the layers <cit.>. The sign of the contribution due to forward scattering is controlled by the energy dependence of u_α^a(ϵ), quantified by the coefficients b_∥ and b_⊥ [these are related to the transport times, see Eq. (<ref>)]. We note that the scenario where forward and quasi-elastic interlayer scattering contribute to the drag in opposite directions is quite general. It is expected to occur in a generic Coulomb drag setup, provided that scattering time in one of the layers is a decreasing function of energy. In the case where both layers have energy-dependent scattering times, the behavior is more complicated, since forward scattering can be more dominant in one layer than in the other depending on the spectrum of the two layers. For a scattering event in which both electrons in the two layers scatter in the forward direction, the sign of the resulting contribution to the drag also depends on the product of the derivatives ∂τ^l(ϵ)/∂ϵ. For two identical layers, both scattering mechanisms contribute positively to the drag conductivity. Thus, the two layers being different is essential for a non-monotonous temperature dependence of the drag. To summarize, on a qualitative level, the direction of the drag is controlled by two independent mechanisms: (i) the curvatures of the single-particle spectrum in two layers; (ii) the direction in which the momentum relaxation rate τ(ϵ) changes with energy. The effect (ii) is pronounced only when the temperature is not too low (T≳ T_d). Finally, we numerically calculate the longitudinal and Hall drag conductivities in the entire temperature range, and present the results in Fig. <ref>(a-b),(d-e). For the calculation, we restore physical units and use realistic parameters for the TRS-breaking WSM Co_3Sn_2S_2 <cit.> and GaAs <cit.> as the layers. The non-monotonous temperature dependence of σ_xx^D can be seen from the plots, showing maxima at T∼ T_d [Figs. <ref>(a,d)]. We note that within the analytic approximation for the coefficients η_∥(0)^D and η_∥(1)^D at T≫ T_d [Eqs. (<ref>), (<ref>)], the two terms in σ_xx^D [Eq. (<ref>)] nearly cancel, but their sum is still an increasing function of T. 
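The sign competition between the two terms is easy to make explicit. In the sketch below the ratio η_∥(1)^D/η_∥(0)^D is treated as a free parameter (in the actual calculation it is fixed by the Q_i factors and the screening), and the skew ratio τ_∥^a/τ_⊥^a is an illustrative number; with b_∥ = -3 and b_⊥ = -2 the longitudinal and Hall combinations change sign at ratios 1/3 and 1/2, respectively.

```python
def high_T_drag(eta0, eta1, b_par=-3.0, b_perp=-2.0,
                prefactor=1.0, skew_ratio=0.02):
    """Combinations entering the high-T drag conductivities:
    sigma_xx^D            ~ prefactor * (eta0 + b_par  * eta1)
    sigma_xy^D[eta_par]   ~ prefactor * skew_ratio * (eta0 + b_perp * eta1)
    """
    sigma_xx = prefactor * (eta0 + b_par * eta1)
    sigma_xy_par = prefactor * skew_ratio * (eta0 + b_perp * eta1)
    return sigma_xx, sigma_xy_par

# Sweep the (free, illustrative) ratio eta1/eta0: the quasi-elastic and
# forward-scattering contributions enter with opposite signs, so sigma_xx^D
# crosses zero at eta1/eta0 = 1/3 and sigma_xy^D[eta_par] at 1/2.
for ratio in (0.0, 0.2, 1.0 / 3.0, 0.5, 0.8):
    print(ratio, high_T_drag(eta0=1.0, eta1=ratio))
```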
This analytic approximation treats the Coulomb interaction in the limits T≪ϵ_F^a,ϵ_F^p, zero tilt for the WSM, and Thomas-Fermi screening lengths (1/κ^l) much shorter than the interlayer distance d. The small but finite deviations from these limits in the numerical calculation enhance the negative ∼η_∥(1)^D term due to a reduction in the screening at frequencies ω∼ T_d, and consequently result in σ_xx^D being a decreasing function of temperature at T≳ T_d. In our model, the non-monotonic behavior is more prominent (i.e., occurs at a wider parameter range of carrier densities) in σ_xx^D than in σ_xy^D due to the specific values of b_∥ and b_⊥. For n^p=2·10^11 cm^-2, a maximum at T∼ T_d is expected analytically for σ_xy^D as well, as is indeed seen in Fig. <ref>(e) (at higher temperatures, the slope changes again due to the temperature approaching ϵ_F^p). Note that for the lowest carrier density plotted (n^p=2·10^11 cm^-2), the results include the temperature range where T≲ϵ_F^p≈83K. Thus, the numerical calculation for that density reveals trends which are beyond our analytic calculation [e.g., an additional minimum for σ_xx^D at T≈15K in Fig. <ref>(e)]. §.§ Drag resistivity We now turn to the drag resistivities, which are defined by ρ_αβ^D≡-E_α^p/j_β^a (the minus sign is conventional). We compute these by inverting the generalized conductivity tensor σ_αβ^l,l'≡ j_α^l/E_β^l' [for convenience, in this section we consider the sheet (2D) conductivities and currents of the WSM layer, obtained by multiplying the bulk 3D quantities by the layer thickness]. Focusing on the components α,β∈{ x,y}, σ_αβ^l,l' can be viewed as a 4×4 tensor. In these notations, σ_αβ^p,a=σ_αβ^D is the drag conductivity and σ_αβ^l,l is the non-interacting conductivity of layer l. In the leading order in the small parameter (σ_xx^D)^2/(σ_xx^aσ_xx^p), one finds ρ_xx^D =ρ_xx^pρ_xx^aσ_xx^D, ρ_xy^D =ρ_xx^D(σ_xy^D/σ_xx^D-σ_xy^a/σ_xx^a), where ρ_xx^l is the longitudinal (2D) resistivity of layer l. We now analyze these results for the low- and high-temperature regimes. §.§.§ Low temperatures For low temperatures (T≪ T_d), the longitudinal drag resistivity [Eq. (<ref>)] is given by ρ_xx^D=η_∥^D/e^2n^an^pd, where n^a and n^p are the (2D) carrier densities of the two layers. This formula represents the longitudinal drag resistivity in terms of the parallel drag coefficient. The analysis of the Hall drag resistivity is more delicate because of a partial cancellation between terms in Eq. (<ref>). Let us separate the non-interacting AHE conductivity into two parts σ_xy^a≡σ_xy^a,reg.+σ_xy^a,int.+ext.vel., corresponding to the contributions from the regular and anomalous parts of the current [Eqs. (<ref>) and (<ref>)], respectively (see Appendix <ref> for detailed expressions). As explained qualitatively in Sec. <ref> and shown in detail in Sec. <ref>, parallel friction drags current which is parallel to the boost velocity of the active layer, leading to σ_xy^D[η_∥^D]/σ_xx^D=σ_xy^a,reg./σ_xx^a. Therefore, the contributions from these two terms in Eq. (<ref>) cancel each other. An analogous cancellation occurs in the drag resistivity computation for two metals placed in an external magnetic field, resulting in zero ρ_xy^D for that case, as discussed in Sec. <ref>. For our problem, drag between a WSM and a metal, the Hall drag resistivity remains finite due to the Hall friction and the anomalous current. 
It is given by ρ_xy^D=ρ_xx^D(η_H^D/η_∥^D-σ_xy^a,int.+ext.vel./σ_xx^a)=-ρ_xx^D(1/2v_F^aΔ_k/(ϵ_F^a)^2τ^a+C/ϵ_F^aτ^a), with the last equality valid in the linear order in C. Note that the intrinsic contribution of the AHE does affect the Hall drag resistivity, as is manifested by the term proportional to Δ_k (the momentum separation between the Weyl nodes). §.§.§ High temperatures As explained in Sec. <ref>, in the high-temperature limit (T≫ T_d), the approximation of the active layer distribution function by a boosted velocity distribution is insufficient. As a result, the interlayer drag force is characterized by a more complex response [Eq. (<ref>)]. The drag resistivity tensor in this limit can be readily obtained from Eqs. (<ref>), (<ref>) and the values of the drag conductivities computed in Sec. <ref>. Because the final expressions in this limit are quite cumbersome, we do not write them in full detail here. We do emphasize that the cancellation of the parallel friction mechanism in the Hall drag resistivity no longer occurs, and processes rotating the boost velocity (contributing to the regular part of the AHE conductivity, σ_xy^reg.) do contribute to the Hall drag resistivity. Qualitatively, both ρ_xx^D and ρ_xy^D have a linear temperature dependence. The Hall drag resistivity can be written in a form similar to Eq. (<ref>), ρ_xy^D=-ρ_xx^D(1/2v_F^aΔ_k/(ϵ_F^a)^2τ^a+AC/ϵ_F^aτ^a), with A being a numerical coefficient of order one. Its value is sensitive to the energy dependence of the momentum relaxation times, and thus for the details of the disorder scattering in both layers, see Sec. <ref>. We present numerical results for ρ_xy^D as a function of temperature in Fig. <ref>(c),(f). § SUMMARY AND OUTLOOK We have studied the Coulomb drag in a setup consisting of a TRS-broken WSM and a normal metal. The anomalous kinetics of the WSM enrich the physics, making the problem qualitatively different from the one of drag between normal metals. There are two ways in which the anomalous processes affect the Coulomb drag. The first is due to the anomalous current in the WSM layer, which arises from the interband elements of the WSM velocity operator. Because the anomalous current is not directly related to changes in the occupation of the semiclassical distribution function, the relation between the electric currents in the two layers is not straightforward. This is in contrast to normal metals, where the drag is an equilibration process between the distribution functions in the two layers. Secondly, the interlayer e-e collision integral contains anomalous terms, which originate from virtual interband transitions in the WSM. These terms give rise to a many-body skew-scattering contribution to the interlayer collision integral. In our work, we computed the drag conductivity and resistivity tensors in various temperature regimes. We now summarize the results, starting with the experimentally common regime of low temperatures (T≪ T_d). In this regime, the momentum transfer between the layers can be divided into two parts: 1. Parallel friction. Drag force parallel to the relative boost velocity between the layers, pushing the boost velocities towards equilibration. It is analogous to shear viscosity in hydrodynamics. This part can be computed by taking the interlayer collision integral within the Born-approximation. In the WSM layer, the boost velocity is rotated relative to the electric field (due to disorder skew scattering). 
Therefore, parallel friction gives rise to a part of the Hall drag conductivity that is proportional to this rotation, σ_xy^D[η_∥^D]=σ_xx^Dτ_∥^a/τ_⊥^a. 2. Hall friction. Drag force perpendicular to the WSM boost velocity u^a. It originates from many-body skew scattering, occurring due to interference between e-e scattering and the electric field or the disorder in the WSM. To account for these processes, one needs to calculate the interlayer collision integral beyond the Born-approximation. Hall friction creates a second contribution to the Hall drag conductivity. In the model of tilted Weyl nodes in the non-crossing approximation, the two contributions partially cancel each other, resulting in a smaller value of σ_xy^D than one would expect from a naive treatment of the interlayer collision integral. The distinction between parallel and Hall friction is more pronounced in the Hall drag resistivity ρ_xy^D. This is because friction parallel to the current does not contribute to the Hall drag resistivity. The Hall drag resistivity is finite due to two factors: (i) the Hall friction, which leads to momentum transfer between the layers which is perpendicular to the WSM boost velocity; (ii) the current in the WSM is not parallel to the boost velocity, due to the anomalous part of the current. This leads to a term in ρ_xy^D that is proportional to the distance between Weyl nodes. In the regime of high temperatures (T≫ T_d), one cannot attribute a single boost velocity to the WSM layer. Instead, one considers an energy-dependent boost velocity u^a(ϵ). The drag force depends on two vectors, u^a(ϵ_F^a) and ∂u^a/∂ϵ|_ϵ=ϵ_F^a. In this case, even the Born-approximation part of the interlayer collision integral leads to a drag force that is not parallel to u^a(ϵ_F^a), giving an additional contribution to ρ_xy^D. Interestingly, the two parts of the drag force are of opposite sign. This causes a non-monotonous temperature dependence of the drag conductivities at a wide range of parameter regimes (the relative contributions depend on the frequency dependence of the interlayer screening). This behavior is quite general and arises due to the energy dependence of the boost velocity through its dependence on the transport times [Eq. (<ref>)]. Thus, we expect non-monotonous temperature behavior of the drag conductivity in Coulomb drag setups with other materials, given that the transport time in one layer is a sufficiently fast-decreasing function of energy. Qualitatively, in both temperature regimes, the temperature and interlayer distance dependences of σ_xy^D follow the same law as σ_xx^D. The ratio between the Hall and longitudinal components of the drag conductivity is given by the small parameter σ_xy^D/σ_xx^D≃ C/(ϵ_F^aτ^a). The same parameter governs the ratio between the Fermi-surface part of the AHE conductivity and the longitudinal conductivity of the non-interacting WSM <cit.>. The numerical prefactor for the drag Hall angle (defined by tanθ_H^D≡σ_xy^D/σ_xx^D) depends on the temperature. The problem we have studied here is closely related to the Hall viscosity in electronic fluids. Indeed, the viscosity tensor in an electronic fluid contains a part that is directly related to the Coulomb drag <cit.>. This part is due to the non-local nature of the Coulomb collision integral, which couples layers in the fluid that move with different velocities. Our study thus reveals a mechanism for the Hall viscosity, stemming from electron-electron skew scattering. 
We anticipate a similar term to emerge in the viscosity tensor of the electronic fluid in a clean TRS-broken WSM. This question presents a natural direction for future research. We are grateful to Dimitrie Culcer and Igor Gornyi for interesting and valuable discussions. This research was supported by ISF-China 3119/19 and ISF 1355/20. Y. M. thanks the PhD scholarship of the Israeli Scholarship Education Foundation (ISEF) for excellence in academic and social leadership. § ELECTRON-ELECTRON COLLISION INTEGRAL FROM KELDYSH FORMALISM In this Appendix, we derive the interlayer e-e collision integral in the main text [Eq. (<ref>) with the scattering rates in Eq. (<ref>)] using the Keldysh formalism. We follow Ref. <cit.> in calculating the interband elements of the WSM Keldysh Green function, which lead to the skew-scattering part of the collision integral. We first calculate the screened interlayer potential using the RPA approximation <cit.>. In the quasi-2D limit (taking the thickness of the WSM layer to be small compared to the interlayer distance, W≪ d), the screened interlayer potential between the layers is given by <cit.> U_RPA^R(q,ω)=[4π e^2/ϵ_rqWΠ^a,R(q,ω)Π^p,R(q,ω)sinh(qd)+(ϵ_rq/2π e^2+WΠ^a,R(q,ω)+Π^p,R(q,ω))e^qd]^-1, where Π^a(p),R(q,ω) is the retarded polarization operator in the active (passive) layer [Eq. (<ref>)], and ϵ_r is an effective background dielectric constant, which we assume to be uniform in the vicinity of the layers. Note that the quasi-2D Coulomb interaction transfers only 2D momenta, q≡(q_x,q_y). Implicit in Eq. (<ref>) is that the z- coordinate of the Coulomb interaction is not Fourier transformed, i.e., U_RPA^R(q,ω)=U_RPA^R(q,ω,z,z'), with z (z') being at the position of the 2D (3D) layer, such that |z-z'|≈ d. In the ballistic limit of the Coulomb drag (d≫ v_F^lτ^l), and when the interlayer distance is much larger than the inverse of the Thomas-Fermi screening wave vectors of both layers (to be defined shortly), the squared modulus of the interlayer interaction can be approximated by |U_RPA^R(q,ω)|^2=(π e^2q/ϵ_rκ^aκ^psinh(qd))^21-(ω/v_F^pq)^2/(1+ω/2v_F^aqlog|1-ω/(v_F^aq)/1+ω/(v_F^aq)|)^2+π^2/4(ω/v_F^aq)^2. Here, κ^l=2π e^2ν_2d^l/ϵ_r are the Thomas-Fermi screening wave vectors of the layers with 2D density of states ν_2d^l (ν_2d^p≡ν^p for the metal layer and ν_2d^a≡ν^aW for the WSM layer). In Eq. (<ref>), we have replaced the polarization operators of the layers by their zero temperature and ballistic limits <cit.>. In these limits, the result for the polarization operator of the WSM is identical to that of a 3D metal with a matching density of states and Fermi velocity, up to corrections proportional to the tilt parameter C. The e-e collision integral [Eq. (<ref>)] is given in the Keldysh formalism by <cit.> I_k^e-e (p,a) =iW/2_q|U_RPA^R(q,ω)|^2[(f^p(k)-f^p(k+q))Π_^a,K(q,ω) +(2f^p(k+q)f^p(k)-f^p(k+q)-f^p(k))(Π^a,R(q,ω)-Π^a,A(q,ω))]. Here, ω≡ϵ_k+q^p-ϵ_k^p is the transferred energy in the collision, U_RPA^R(q,ω) is the retarded propagator of the screened Coulomb interlayer interaction in the RPA approximation and Π^a,(R,A,K) are the (retarded, advanced, Keldysh) polarization matrices in the active layer. Since q is 2D, here Π^a,(R,A,K)(q,ω)=Π^a,(R,A,K)(q_x,q_y,q_z=0,ω). The factor of the WSM layer thickness W in Eq. (<ref>) is due to one free integration of the interaction U_RPA^R(q,ω,z=0,z') over _d^d+Wdz' (putting the 2D layer at z=0 and the WSM at d<z<d+W). 
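For orientation, the approximate expression for the squared interlayer interaction |U_RPA^R(q,ω)|^2 given above can be evaluated directly. The following short Python script is a minimal numerical sketch of that ballistic, strongly screened form; it is not part of the derivation, and the parameter values are illustrative placeholders rather than values taken from the paper.

import numpy as np

def U2_ballistic(q, omega, d, kappa_a, kappa_p, vFa, vFp, e2_over_eps):
    """Squared interlayer interaction |U_RPA^R(q, omega)|^2 in the ballistic,
    strongly screened limit, transcribed from the expression above.
    Valid for omega < vFp*q (inside the particle-hole continuum of the metal layer)."""
    x = omega / (vFa * q)                                   # omega / (v_F^a q)
    prefactor = (np.pi * e2_over_eps * q / (kappa_a * kappa_p * np.sinh(q * d))) ** 2
    numerator = 1.0 - (omega / (vFp * q)) ** 2
    denominator = (1.0 + 0.5 * x * np.log(np.abs((1.0 - x) / (1.0 + x)))) ** 2 \
                  + 0.25 * np.pi ** 2 * x ** 2
    return prefactor * numerator / denominator

# illustrative placeholder parameters (not taken from the paper)
d, kappa_a, kappa_p, vFa, vFp, e2_over_eps = 1.0, 5.0, 5.0, 1.0, 0.5, 1.0
q = np.linspace(0.5, 5.0, 10) / d
print(U2_ballistic(q, omega=0.1, d=d, kappa_a=kappa_a, kappa_p=kappa_p,
                   vFa=vFa, vFp=vFp, e2_over_eps=e2_over_eps))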
The polarization matrices are given by (omitting the layer index from hereon) Π^R(A)(q,ω) =i/2Tr{_k,ϵ[G_k+q,ϵ+ω^R(A)G_k,ϵ^K+G_k+q,ϵ+ω^KG_k,ϵ^A(R)]} , Π^K(q,ω) =i/2Tr{_k,ϵ[G_k+q,ϵ+ω^KG_k,ϵ^K-(G_k+q,ϵ+ω^R-G_k+q,ϵ+ω^A)(G_k,ϵ^R-G_k,ϵ^A)]} . Note that the WSM layer Green functions are 2×2 matrices in the spinor space. The objects that complicate the collision integral [Eq. (<ref>)] compared to the textbook e-e collision integral are the polarization matrices Π^a,(R,A,K), which acquire contributions from the interband elements of the WSM Green functions. These contributions give rise to skew-scattering terms in the e-e collision integral. In the next subsection, we calculate the interband elements of the WSM Green functions perturbatively in the small parameter 1/(ϵ_Fτ) and in the external electric field. The interband part of the Keldysh Green function is coupled to the intraband part via the kinetic equation. By expressing the interband elements of the Keldysh Green function in terms of the intraband elements, we will be able to present the collision integral as a functional of the semiclassical distribution functions, I_k^e-e (p,a)→ I_k^e-e (p,a)[f^a,f^p]. §.§ Kinetic equation in the Keldysh formalism and corrections to the Green functions We start with briefly introducing the main objects in the Keldysh formalism <cit.>. Consider a general Hamiltonian H≡ H+H', with H being the bare part of the Hamiltonian, given by H(x,t,x',t')≡δ(t-t')(H_0(x-x')+δ(x-x')U_ext(x,t)), where H_0 describes the non-interacting, translation-invariant Hamiltonian (whose Fourier transform determines the energy bands ϵ_nk) and the local field U_ext(x,t) describes the external fields. The part H' includes any additional complications such as disorder and interactions. The bare retarded and advanced Green functions are the inverse of the bare part of the Hamiltonian, [G_0^R]^-1(x,x')=[G_0^A]^-1(x,x')≡δ(x-x')i∂_t-H(x,x'). The Dyson equations for the full retarded (advanced) Green functions read G^R(A)=G_0^R(A)+G_0^R(A)∘Σ^R(A)∘ G^R(A), where Σ^R(A) is the retarded (advanced) self-energy due to the part H' of the Hamiltonian. The information about the state of the system is contained in the Keldysh Green function, which is parametrized by G^K=G^R∘ F-F∘ G^A, where ∘ denotes the convolution operation. From the Dyson equations for G^R,A,K, one obtains <cit.> i(F∘[G_0^A]^-1-[G_0^R]^-1∘ F)=i[Σ^K-(Σ^R∘ F-F∘Σ^A)]. Let us introduce the Wigner-transform (WT), which transforms two-point functions to functions of the center of mass and momentum coordinates, O(x_1,x_2)WT⟶ O(x,k)≡ dx_-e^-ikx_-O(x+x_-/2,x-x_-/2), where x≡(R,T) and k≡(k,ϵ) represent the central point and momentum coordinates, respectively. Under the Wigner transformation, convolutions C≡ A∘ B transform according to the following formula (up to linear order in gradients of the central coordinate x): C(x,k)= A(x,k) B(x,k)+i/2(∂_x A∂_k B-∂_k A∂_x B). Performing the Wigner transform on the Dyson equation (<ref>) results in the quantum kinetic equation ∂/∂ tF-i[F,H]_-+1/2[∂_xF,∂_pH̃]_+-1/2[∂_pF,∂_xH̃]_+=I_F[F], where [A,B]_-(+) denotes the commutator (anti-commutator), H̃≡ H+ℜ[Σ^R] is the Hamiltonian including renormalization effects from the self-energy, and I_F[F] is the collision integral for F, given by I_F[F]≡ i[Σ^K-(Σ^RF-FΣ^A)]. Note that all functions from Eq. (<ref>) onwards are in the Wigner-transform space, i.e., F=F(x,k). For a single band, evaluating Eq. 
(<ref>) on the energy shell ϵ=ϵ_k+U_ext(x)+ℜ[Σ^R] reduces to the Boltzmann equation for the semiclassical distribution function f(x,k). Omitting renormalization effects (approximating ℜ[Σ^R] as constant), one obtains (∂/∂ t+∇_kϵ_k·∇_R-∇_RU_ext·∇_k)f(x,k)=I_x,k[f], with the collision integral given by I_x,k[f]≡-1/2[I_F]_x,k,ϵ=ϵ_k+U_ext(x), and the semiclassical distribution function related to the on-shell part of F by f(x,k)≡1-F(x,k,ϵ=ϵ_k+U_ext(x))/2. with ϵ̃ being the renormalized energy given by ϵ̃(x,k)≡ϵ_k+V_ext(R,T)+[Σ^R(x,p)], Coming back to the case of interest of a multiple band kinetic equation [Eq. (<ref>)], it is convenient to work in the eigenbasis of the band Hamiltonian, where H_0 is a diagonal matrix with elements ϵ_nk on the diagonal. The trade-off in working in the eigenbasis is that it is generally momentum-dependent, and therefore, derivatives in momentum space generate Berry connections (to be defined shortly). Considering an off-diagonal element in a general matrix ∂ O/∂k_i, simple calculation shows (∂ O/∂ k_i)_nn' ≡⟨ u_nk|∂ O/∂ k_i|u_n'k⟩ =∂/∂ k_i⟨ u_nk|O|u_n'k⟩ -⟨ u_nk|O|∂/∂ k_iu_n'k⟩ -⟨∂/∂ k_iu_nk|O|u_n'k⟩ =∂/∂ k_iO_nn'(k)+i(O A_i- A_iO)_nn', with |u_nk⟩ being the eigenstate of H at momentum k and band n, and A_nn'(k) being the Berry connection, A_nn'(k)≡ i⟨ u_nk|∇_ku_n'k⟩ . In the band eigenbasis, Eq. (<ref>) results in a system of coupled equations for the matrix distribution function F. One can express the off-diagonal elements F_nn' perturbatively in terms of the diagonal elements F_nn in order to obtain decoupled equations for the diagonal elements. In the presence of an external electric field, we obtain the following expression for the off-diagonal element of F (keeping the leading order terms in the gradients): F_nn'=-[1/2 A_nn'(k)(∂ F_n/∂r+∂ F_n'/∂r)+1/ϵ_nk-ϵ_n'k[-∇_rU_ext(x)· A_nn'(k)(F_n-F_n')+i[I_F[F]]_nn']] [n≠ n']. Here, we denote diagonal matrix elements as F_n≡ F_nn for brevity. In the multiband case, the semiclassical distribution function of each band is related to the diagonal component of F in the same manner as in Eq. (<ref>), f_n(x,k)≡[1-F_n(x,k,ϵ=ϵ_nk+U_ext(x))]/2. We note that by substituting Eq. (<ref>) in the diagonal element of the kinetic equation (<ref>), one may obtain the Boltzmann equation for the semiclassical distribution function, including corrections such as the anomalous velocity <cit.>. Since the purpose of this Appendix is only to derive the interlayer collision integral in terms of the semiclassical distribution functions, Eq. (<ref>) is all that we need from the kinetic equation. Next, we calculate the interband elements of the Green functions. Since we choose to include the electric field in the bare part of the Hamiltonian H [Eq. (<ref>)], the bare propagators G_0^R(A) acquire interband elements. In the Wigner coordinates, the diagonal elements of the bare Green functions are given by G_0,n^R(A)(x,k)=1/ϵ-ϵ_nk-U_ext(x)± i0. By requiring G_0∘ G_0^-1=1, we find the off-diagonal correction to the bare Green functions, to the leading order in the gradients, G_E,nn'^R(A)(x,k)≡ G_0,nn'^R(A)(x,k)=- A_nn'(k)·[-∇_rU_ext/ϵ_nk-ϵ_n'k+i0(G_0,n^R(A)(x,k)-G_0,n'^R(A)(x,k)) +1/2∇_r(G_0,n^R(A)(x,k)+G_0,n'^R(A)(x,k))]. Note that the Berry connection has only off-diagonal elements (A_nn(k)=0), allowing us to write Eq. (<ref>) without an explicit (1-δ_nn') factor. The retarded and advanced Green functions also acquire off-diagonal corrections due to the self-energy. 
To the leading order in the perturbative Hamiltonian H', the correction is given by G_V,nn'^R(A)=G_0,n^R(A)Σ_nn'^R(A)G_0,n'^R(A). In total, the interband corrections to G^R(A) are the sum of the two terms, G_nn'^R(A)≡ G_E,nn'^R(A)+G_V,nn'^R(A). Similarly, we find the interband elements of the Keldysh Green function and write G_nn'^K(x,k)≡ G_E,nn'^K(x,k)+G_V,nn'^K(x,k). Here, G_E,nn'^K(x,k) corresponds to all the off-diagonal terms in Eq. (<ref>) that explicitly contain spatial gradients. These terms come from G_E,nn'^R(A) [Eq. (<ref>)], F_nn' [Eq. (<ref>)], or the gradients generated by the Wigner transformation (e.g., G^R∘ FWT⟶(...)+i[∂_rG^R,∂_kF]_-/2-[r↔ k]). The part G_V,nn'^K arises from the corrections G_V,nn'^R(A) [Eq. (<ref>)] and the last term in Eq. (<ref>) for F_nn' (the term explicitly including I_F[F]). Note that although the perturbative Hamiltonian H' determines the non-equilibrium distribution function through the collision integral and is thus relevant for both terms in Eq. (<ref>), the term G_V,nn'^K accounts for its effect on the propagators themselves, generating virtual interband transitions. The term G_E,nn'^K(x,k) is given by a formula analogous to Eq. (<ref>), G_E,nn'^K(x,k)=- A_nn'(-∇_rU_ext/ϵ_nk-ϵ_n'k+i0(G_0,n^K(k,ϵ)-G_0,n'^K(k,ϵ))+1/2∇_r[G_0,n^K(k,ϵ)+G_0,n'^K(k,ϵ)]), where we defined G_0,n^K≡(G_0,n^R-G_0,n^A)F_n. Let us note that although the expressions in Eqs. (<ref>) and (<ref>) can be simplified by explicitly calculating ∇_rG_0,n^R(A) using Eq. (<ref>), the separation to the two terms turns out to be convenient in the calculation of the drag later on, with the first term giving no contribution. The terms G_V,nn'^R,A,K depend on the perturbing term H'. From now on we focus on the case studied in this work, where the dominant scattering in the WSM is due to Gaussian disorder, so that H' is the disorder potential. The Green functions and self-energies of interest are those averaged over the random disorder configurations. Modeling the disorder by short-ranged dilute scalar impurities at concentration n_imp and strength u_0 (in units of energy times volume), the self-energy is given by, to the leading order in the impurity concentration, Σ_nn'^R(A)(k,ϵ)=n_imp∑_m_k_1V_nm^kk_1G_0,m^R(k_1,ϵ)V_mn'^k_1k, with V_nn'^kk'=u_0⟨ u_nk| u_n'k'⟩ being the matrix element of the impurity potential in Fourier space, whose momentum dependence is only due to the inner product of the Bloch wavefunctions. The correlator of the disorder potential averaged over the disorder configurations is given by ⟨ H'(r)H'(r')⟩ _disorder=γδ(r-r') with γ≡ n_impu_0^2. In this case, we find G_V,nn'^R(A)(k,ϵ) =γ∑_m_k_1⟨ u_nk| u_mk_1⟩⟨ u_mk_1| u_n'k⟩ G_0,n^R(A)G_0,m^R(A)G_0,n'^R(A), G_V,nn'^K(k,ϵ) =γ∑_m_k_1⟨ u_nk| u_mk_1⟩⟨ u_mk_1| u_n'k⟩{-F_nG_0,n^AG_0,m^AG_0,n'^A +F_n'G_0,n^RG_0,m^RG_0,n'^R +(1-δ_nn')G_0,n^RG_0,n'^R[F_nG_0,m^A-F_n'G_0,m^R+F_m(G_0,m^R-G_0,m^A)]}. The expression above for the Keldysh component holds only for the off-diagonal (n≠ n') part; the general expression reads G_V,nn'^K(k,ϵ) =γ∑_m_k_1⟨ u_nk| u_mk_1⟩⟨ u_mk_1| u_n'k⟩{F_n(k,ϵ)(G_0,n^R(k,ϵ)-G_0,n^A(k,ϵ))G_0,m^A(k_1)G_0,n'^A(k) +F_n'(k,ϵ)(G_0,n'^R(k,ϵ)-G_0,n'^A(k,ϵ))G_0,m^R(k_1)G_0,n^R(k) +F_m(k_1,ϵ)(G_0,m^R(k_1,ϵ)-G_0,m^A(k_1,ϵ))G_0,n^R(k)G_0,n'^A(k)}.
Here, all the functions on the RHS are evaluated at energy ϵ, and their momentum argument can be read from the Bloch products (i.e., momentum k for matrix elements of bands n,n' and k_1 for m). We are now ready to evaluate the interlayer collision integral [Eq. (<ref>)], substituting the full Green functions in the polarization operators [Eqs. (<ref>) and (<ref>)]. We separate the contributions coming from the different corrections of the Green functions. §.§ Born-approximation part of interlayer e-e collision integral Taking the diagonal components of the bare Green functions in Eqs. (<ref>) and (<ref>) gives the familiar expressions for the polarization operators <cit.>, [Π^R(q,ω)-Π^A(q,ω)]_0 =π i∑_nn'_k|⟨ u_nk| u_n'k+q⟩|^2δ(ϵ_n'k+q-ϵ_nk-ω)(F_n'(k+q,ϵ_n'k+q)-F_n(k,ϵ_nk)), [Π^K(q,ω)]_0 =-π i∑_nn'_k|⟨ u_nk| u_n'k+q⟩|^2δ(ϵ_n'k+q-ϵ_nk-ω)(F_n'(k+q,ϵ_n'k+q)F_n(k,ϵ_nk)-1). Substituting [Π^R,A,K]_0 in Eq. (<ref>) gives rise to the leading term of the interlayer collision integral, given by Eq. (<ref>) with the Born-approximation interlayer scattering rate w_k,nk_1→k',n'k_1'^Born=2πδ_k+k_1-k'-k_1'δ(ϵ_nk_1^a+ϵ_k^p-ϵ_k'^p-ϵ_n'k_1'^a)|U_RPA^R(q,ω)|^2|⟨ u_nk_1| u_n'k_1'⟩|^2, with q≡k'-k=k_1-k_1' and ω≡ϵ_k'^p-ϵ_k^p=ϵ_nk_1^a-ϵ_n'k_1'^a being the momentum and energy transferred in the collision, respectively. The spinor inner product |⟨ u_nk_1| u_n'k_1'⟩|^2 in the scattering rate is due to the spinor structure of the WSM and suppresses backscattering, similar to graphene <cit.>. §.§ Skew scattering interlayer e-e collision integral Next, we collect all terms in the polarization matrices [Eqs. (<ref>), (<ref>)] that include one off-diagonal element of the Green functions [Eqs. (<ref>), (<ref>)]. Substituting the resulting corrections of the polarization matrices into the collision integral [Eq. (<ref>)] gives rise to skew-scattering contributions. We find the following contributions: 1. Intrinsic. Consider the first term in Eqs. (<ref>) and (<ref>) for the off-diagonal parts of the Green functions, [G_nn'^R,A,K]_int(k,ϵ)≡- A_nn'(k)·eE/ϵ_nk-ϵ_n'k+i0(G_0,n^R,A,K(k,ϵ)-G_0,n'^R,A,K(k,ϵ)). This correction is related to the intrinsic (Berry curvature) mechanism of the AHE, and gives the intrinsic part of the electric current when substituted into the expectation value of the current, j=Tr{ĵG^K}. We find that this correction does not contribute to the interlayer collision integral in the linear response regime. In more detail, collecting all terms in the polarization operators [Eqs. (<ref>), (<ref>)] that contain one off-diagonal element of a Green function taken as G_nn'^R,A,K→[G_nn'^R,A,K]_int gives [Π^R(q,ω)-Π^A(q,ω)]_int =π i∑_n,n'_kδ(ϵ_n'k+q-ϵ_nk-ω)[F_n(k,ϵ_nk)-F_n'(k+q,ϵ_n'k+q)]eE ·[∑_m≠ n1/ϵ_nk-ϵ_mk( A_nm(k)⟨ u_mk| u_n'k+q⟩⟨ u_n'k+q| u_nk⟩ +c.c)+(n,k↔ n',k+q)], [Π^K(q,ω)]_int =π i∑_n,n'_kδ(ϵ_n'k+q-ϵ_nk-ω)[F_n'(k+q,ϵ_n'k+q)F_n(k,ϵ_nk)-1]eE ·[∑_m≠ n1/ϵ_nk-ϵ_mk( A_nm(k)⟨ u_mk| u_n'k+q⟩⟨ u_n'k+q| u_nk⟩ +c.c)+(n,k↔ n',k+q)]. The corrections [Π^R,A,K(q,ω)]_int are of the same form as the bare expressions for the polarization matrices [Eqs.
(<ref>), (<ref>)], and lead to a collision integral in the form of Eq. (<ref>) with a renormalized scattering rate. However, the correction to the scattering rate is linear in the electric field, and thus a non-vanishing contribution from the collision integral starts only from the second order of E. 2. Side jump. Next, we consider the second term in Eqs. (<ref>) and (<ref>), [G_nn'^R,A,K(k,ϵ)]_s.j.≡-1/2 A_nn'(k)∇_r[G_0,n^R,A,K(k,ϵ)+G_0,n'^R,A,K(k,ϵ)]. Similarly to the previous part, we collect all the terms in the polarization operators that include one off-diagonal element in one Green function with G_nn'^R,A,K→[G_nn'^R,A,K]_s.j. and a diagonal element in the second Green function. During the algebra, we utilize the following identity, ∑_m≠ n'⟨ u_nk| u_n'k'⟩⟨ u_mk'| u_nk⟩ A_n'm(k')-∑_m≠ n⟨ u_nk| u_n'k'⟩⟨ u_n'k'| u_mk⟩ A_mn(k)/2|⟨ u_nk| u_n'k'⟩|^2 =i⟨ u_n'k'|∇_k'u_n'k'⟩ -i⟨ u_nk|∇_ku_nk⟩ -(∇_k'+∇_k)(⟨ u_n'k'| u_nk⟩)≡ δr_nk,n'k', where δr_n'k',nk denotes the coordinate shift accumulated during a collision from state |u_nk⟩→|u_n'k'⟩ <cit.>. To get from the first line to the second line in Eq. (<ref>), we add and subtract m=n' and m=n to the summations, giving the identity operator. Assuming a spatially uniform semiclassical distribution function, the spatial gradient acts only on G_0^R,A through their dependence on the electric potential [Eq. (<ref>)]. To linear order in the electric field, we find the corrections to the polarization operators, [Π^R(q,ω)-Π^A(q,ω)]_s.j. =π i∑_nn'_k|⟨ u_nk| u_n'k+q⟩|^2eE·δr_n'k+q,nk∂/∂ϵ_nkδ(ϵ_n'k+q-ϵ_nk-ω) ×[F_n'(k+q,ϵ_n'k+q)-F_n(k,ϵ_nk)], [Π^K(q,ω)]_s.j. =-π i∑_nn'_k|⟨ u_nk| u_n'k+q⟩|^2eE·δr_n'k+q,nk∂/∂ϵ_nkδ(ϵ_n'k+q-ϵ_nk-ω) ×[F_n'(k+q,ϵ_n'k+q)F_n(k,ϵ_nk)-1]. Comparing to the leading parts of the polarizations given in Eqs. (<ref>) and (<ref>), we can interpret these terms as linear corrections from the energy conservation condition, replacing δ(ϵ_n'k+q-ϵ_nk-ω)→δ(ϵ_n'k+q-ϵ_nk-ω-eE·δr_n'k+q,nk) <cit.>. This can be understood as accounting for the work done by the electric field as the WSM electron obtains a coordinate shift due to the scattering. Substituting [Π^R,A,K]_s.j. in I^e-e (p,a) [Eq. (<ref>)] gives the side-jump correction to the interlayer collision integral, I_k^s.j. (p,a)[f^a,f^p] =-2π W∑_ξ=±1∑_nn'_q,k'[f_k^pf_n'k_1+q^a(1-f_k+q^p)(1-f_nk_1^a)-(1-f_k^p)(1-f_n'k_1+q^a)f_k+q^pf_nk_1^a] ×|U_RPA^R(q,ω)|^2eE·δr_n'k_1+q,nk_1∂/∂ϵ_nk_1^aδ(ϵ_n'k_1+q^a-ϵ_nk_1^a-ω)|⟨ u_nk_1| u_n'k_1+q⟩|^2, where we summed over the contributions from the two Weyl nodes ξ=±1 (the node index ξ is omitted from the functions in the integrand for brevity). This correction to the interlayer collision integral corresponds to the general form of the two-particle collision integral [Eq. (<ref>) in the main text] with a scattering rate proportional to the electric field, w_k,n'k_1+q→k+q,nk_1^s.j.=2π|U_RPA^R(q,ω)|^2[∂/∂ϵ_nk_1^aδ(ϵ_n'k_1+q^a+ϵ_k^p-ϵ_nk_1^a-ϵ_k+q^p)]eE·δr_n'k_1+q,nk_1|⟨ u_nk_1| u_n'k_1+q⟩|^2. Note that since ϵ_n'k_1+q^a-ϵ_nk_1^a≠ϵ_k+q^p-ϵ_k^p in the integrand of Eq. (<ref>), the side-jump collision integral is not nullified by the equilibrium distribution functions. We also note that w^s.j. is symmetric in the exchange of incoming and outgoing particles (k,n'k_1+q↔k+q,nk_1), since both the coordinate shift δr_n'k_1+q,nk_1 and the derivative of the delta function are odd under the exchange. 3. Interference with disorder. This term comes from taking one off-diagonal Green function involving disorder scattering, G_nn'^R,A,K→ G_V,nn'^R,A,K [Eqs. (<ref>), (<ref>)]. 
This is equivalent to dressing one bare propagator with two disorder scattering lines and taking the correction from the last term in the expression for F_nn' [Eq. (<ref>)], see Fig. <ref>. During the calculation, we omit terms that do not contribute to skew scattering and only lead to renormalization of the Born-approximation scattering rate. Utilizing the symmetry of the WSM for rotations in the x-y plane, we do so by keeping only the contributions that are anti-symmetric in reflections of the momentum q around the momentum arguments of the distribution functions F (projected on the x-y plane). For example, for a term of the form Π^R,A,K(q)∼_k_1F(k_1)H(k_1,q) where H is an arbitrary function, we calculate Π^R,A,K(q)=[_k_1F(k_1)H(k_1,q)-_k_1F(k_1)H(k_1,q^M(k_1,||))]/2, where q^M(k_1,||) is the reflection of q with respect to the vector k_1,∥ (k_1 projected on x-y plane)[Alternatively, this is equivalent to keeping only the imaginary part of the total product of the Bloch functions inner products [Eq. (<ref>)]. In our problem, this is the only object that is odd in angles on the x-y plane and thus, can result in skew scattering.]. The resulting corrections to the polarization matrices are [Π^R(q,ω)-Π^A(q,ω)]_V =-1/4γ∑_n,n',n_2∑_m≠ n'_k,q'dϵ/2π{Z_nk→ m,k+q→ n_2,k+q+q'→ n',k+q ×(F_n_2-F_n') (G_0,n^R-G_0,n^A)(G_0,n'^R-G_0,n'^A)(G_0,n_2^R-G_0,n_2^A)(G_0,m^R+G_0,m^A)-[q,ω→-q,-ω]}, [Π^K(q,ω)]_V =1/4γ∑_n,n',n_2∑_m≠ n'_k,q'dϵ/2π{Z_nk→ m,k+q→ n_2,k+q+q'→ n',k+q × F_n(F_n_2-F_n') (G_0,n^R-G_0,n^A)(G_0,n'^R-G_0,n'^A)(G_0,n_2^R-G_0,n_2^A)(G_0,m^R+G_m^A)+[q,ω→-q,-ω]}, with Z_n_1k_1→ n_2k_2→ n_3k_3→ n_4k_4 being the imaginary part of the Bloch phase acquired during hopping, Z_n_1k_1→ n_2k_2→ n_3k_3→ n_4k_4≡Im[⟨ u_n_1k_1| u_n_2k_2⟩⟨ u_n_2k_2| u_n_3k_3⟩⟨ u_n_3k_3| u_n_4k_4⟩⟨ u_n_4k_4| u_n_1k_1⟩]. For brevity, we have omitted the momentum and energy dependences of the Green functions, which can be read from the associated momentum to the Bloch wavefunctions, e.g., G_0,n'^R=G_0,n'^R(k+q,ϵ+ω) (functions related to the state n,k are evaluated on energy ϵ and all the rest are evaluated at energy ϵ+ω, since the disorder scattering does not transfer energy). In the 2-band model, we can simplify the Bloch phase appearing in Eqs. (<ref>), (<ref>) and write (utilizing m≠ n') Z_nk→ m,k+q→ n_2,k+q+q'→ n',k+q =-Im[⟨ u_nk| u_n_2,k+q+q'⟩⟨ u_n_2,k+q+q'| u_n',k+q⟩⟨ u_n',k+q| u_nk⟩]. ≡ -Z_nk→ n_2,k+q+q'→ n',k+q, The Bloch phase acquired from 3 hoppings is given by (measuring momentum relative to the Weyl node) <cit.> Z_n_1k_1→ n_2k_2→ n_3k_3=ξ/4n_1n_2n_3(k̂_1×k̂_2)·k̂_3, where we remind that we are treating each node with chirality ξ separately, assuming no internode scattering. We note that for a more general node Hamiltonian of the form H_ξ=ξh(k)·σ, one should replace k̂→ĥ in Eq. (<ref>). Substituting the corrections Π_V^R,A,K [Eqs. (<ref>), (<ref>)] into the interlayer collision integral [Eq. (<ref>)], we find the e-e-impurity skew-scattering part of the collision integral (summing over the two Weyl nodes) I_k^e-e-imp (p,a) =-W∑_ξ=±1∑_nn'_q,k_1,k_1' × [f_k^pf_nk_1^a(1-f_k+q^p)(1-f_n'k_1'^a)w_k,nk_1→k+q,n'k_1'^e-e-imp-(k,nk_1↔k+q,n'k_1')], with w_k,nk_1→k+q,n'k_1'^e-e-imp ≡4π^2γ|U_RPA^R(q,ω)|^2δ(ϵ_nk_1^a+ϵ_k^p-ϵ_n'k_1'^a-ϵ_k+q^p) ×∑_n_2_k_2{Z_nk_1→ n'k_1'→ n_2k_2/ϵ_nk_1^a-ϵ_n̅k_1^aδ(ϵ_nk_1^a-ϵ_n_2k_2^a)δ_k_1-q-k_1'-[nk_1↔ n_2k_2]} +[k,nk_1↔k+q,n'k_1'], where n̅≡-n. Note that w^e-e-imp includes parts where the total electron momentum is not conserved, but is rather gained or lost due to the impurity scattering. 
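As a quick numerical cross-check of the geometric form of the three-hop Bloch phase quoted above (and not part of the derivation), the following short Python script builds the Weyl eigenspinors as spin-1/2 coherent states aligned with ζk̂, ζ=ξ n (the gauge is irrelevant for the closed loop of overlaps), and compares the gauge-invariant loop phase Im[⟨u_1|u_2⟩⟨u_2|u_3⟩⟨u_3|u_1⟩] with (ξ/4)n_1n_2n_3(k̂_1×k̂_2)·k̂_3 for random momenta and band indices.

import numpy as np
rng = np.random.default_rng(0)

def spinor(direction):
    """Spin-1/2 coherent state aligned with a unit vector (theta, phi on the Bloch sphere)."""
    nx, ny, nz = direction / np.linalg.norm(direction)
    theta, phi = np.arccos(np.clip(nz, -1, 1)), np.arctan2(ny, nx)
    return np.array([np.cos(theta / 2), np.sin(theta / 2) * np.exp(1j * phi)])

def u(n, xi, khat):
    # Eigenstate of H_xi = xi v_F sigma.k in band n: aligned along zeta*khat, zeta = xi*n.
    return spinor(xi * n * khat)

def loop_phase(xi, bands, khats):
    (n1, n2, n3), (k1, k2, k3) = bands, khats
    u1, u2, u3 = u(n1, xi, k1), u(n2, xi, k2), u(n3, xi, k3)
    prod = np.vdot(u1, u2) * np.vdot(u2, u3) * np.vdot(u3, u1)   # <u1|u2><u2|u3><u3|u1>
    return prod.imag

for _ in range(1000):
    xi = rng.choice([-1, 1])
    bands = rng.choice([-1, 1], size=3)
    khats = [v / np.linalg.norm(v) for v in rng.normal(size=(3, 3))]
    z_formula = xi / 4 * bands.prod() * np.dot(np.cross(khats[0], khats[1]), khats[2])
    assert np.isclose(loop_phase(xi, bands, khats), z_formula, atol=1e-12)
print("three-hop Bloch phase formula verified on 1000 random configurations")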
In the level of a linearized collision integral, some simplifications can be made due to the anti-symmetry in [nk_1↔ n_2k_2] and [n'k_1'↔ n_2k_2] in Eq. (<ref>). We find the linearized collision integral I_k^e-e-imp (p,a) =-W∑_ξ=±1_k',k_1,k_1'f_0(ϵ_k^p)f_0(ϵ_nk_1^a)(1-f_0(ϵ_k'^p))(1-f_0(ϵ_n'k_1'^a)) × [g_nk_1^a W_k,nk_1→k',n'k_1'-g_n'k_1'^a W_k',n'k_1'→k,nk_1], with W_k,nk_1→k+q,n'k_1' = W_k,nk_1→k+q,n'k_1'^(1)+ W_k,nk_1→k+q,n'k_1'^(2), W_k,nk_1→k+q,n'k_1'^(1) =π^2γ/2|U_RPA^R(q,ω)|^2δ(ϵ_k^p+ϵ_nk_1^a-ϵ_k+q^p-ϵ_n'k_1'^a)δ_k_1-k_1'-qν(ϵ_nk_1^a )/ϵ_nk_1^a (k̂_1×k̂_1')·M(ϵ_nk_1^a ), W_k,nk_1→k+q,n'k_1'^(2) =π^2γ/2|U_RPA^R(q,ω)|^2δ(ϵ_k^p+ϵ_nk_1^a-ϵ_k+q^p-ϵ_n'k_1'^a)δ(ϵ_n_2k^a-ϵ_nk_1^a )qξ/v_F(k_1')^2(k̂_1×k̂_1')·q̂, where we assumed k_1',k_1≫ q (as is the case for Coulomb drag in the regime k_F≫1/d) and defined the average spinor on the Fermi surface of a single node M(ϵ)≡1/ν(ϵ)∑_n_kδ(ϵ_nk^a-ϵ)nξk̂. Let us comment on a peculiarity regarding the symmetry of the e-e-impurity scattering rate under the exchange of incoming and outgoing particles, k,nk_1↔k',n'k_1'. The full rate w^e-e-imp is symmetric under the exchange, as is explicitly seen from the exchanged term in Eq. (<ref>). However, in the linearized response level, the term W_k,nk_1→k',n'k_1' [Eq. (<ref>)] contains a significant anti-symmetric part. This apparent contradiction is resolved by the fact that w^e-e-imp contains non-momentum conserving terms [e.g., the second term in the curly brackets in Eq. (<ref>) where nk_1↔ n_2k_2, exchanging between the incoming and intermediate electrons]. In the linearized collision integral [Eq. (<ref>)], we dropped terms that cancel under the integration over the angle of the outgoing electron, e.g., under _k_1'δ(ϵ_k_1'-ϵ)g_k_1w_k,nk_1→k+q,n'k_1'^e-e-imp for a fixed ϵ. For a typical momentum-conserving integral, such integration only picks the momentum delta function and cannot alter the symmetry of w. However, in our case, this integration cancels a pair of terms anti-symmetric in n'k_1'↔ n_2k_2, resulting in a non-symmetric scattering rate W_k,nk_1→k+q,n'k_1'. § DRAG FORCE FROM THE E-E COLLISION INTEGRAL Here, we compute the drag force (momentum transfer rate) between the layers for boosted distribution functions [Eqs. (<ref>), (<ref>) and (<ref>) of the main text] and an interlayer scattering integral given by Eq. (<ref>) with the scattering rates of Eq. (<ref>). We show that the drag force can be written in the form of Eqs. (<ref>), (<ref>) in the main text, and calculate explicitly the drag coefficients η_∥,H^D. We separate the calculation for each term in the scattering rate. §.§ Born-approximation part of the scattering rate The Born-approximation part of w^e-e is symmetric and conserves momentum, w_k,nk_1+q→k+q,n'k_1^Born=w_k+q,n'k_1→k,nk_1+q^Born. We substitute w^e-e→ w^Born in Eq. (<ref>), linearize the collision integral with respect to the non-equilibrium part of the distribution functions, utilize the energy and momentum conservation of the collision integral, and arrive at I_k^Born (p,a)[f^p,f^a] =-W/4∑_ξ=±1∑_nn'_q,k_1w_k,nk_1+q→k+q,n'k_1^Born/sinh^2ω/2T(f_0(ϵ_k+q^p)-f_0(ϵ_k^p))(f_0(ϵ_nk_1+q^a)-f_0(ϵ_n'k_1^a)) × (g_k^p+g_nk_1+q^a-g_k+q^p-g_n'k_1^a), where ω≡ϵ_k+q^p-ϵ_k^p=ϵ_nk_1+q^a-ϵ_n'k_1^a. The drag force between the layers is obtained by multiplying Eq. (<ref>) by k and integrating over k, F^p,a (Born) ≡_kk I_k^Born (p,a)=-W/4∑_ξ=±1∑_nn'_k,q,k_1kw_k,nk_1+q→k+q,n'k_1^Born/sinh^2ω/2T ×(f_0(ϵ_k+q^p)-f_0(ϵ_k^p))(f_0(ϵ_nk_1+q^a)-f_0(ϵ_n'k_1^a))(g_k^p+g_nk_1+q^a-g_k+q^p-g_n'k_1^a). 
The expression above may be simplified by adding the opposite scattering process to the integrand. Concretely, writing the integral as _q,k,k'h(k,k',q), we rename the integration variables k,k',q→k+q,k'+q,-q and rewrite the integral as _q,k,k'h(k,k',q)=1/2_q,k,k'(h(k,k',q)+h(k+q,k'+q,-q)). Doing this for the integral in Eq. (<ref>) leads to F^p,a (Born) =W/8T∑_ξ=±1∑_nn'_q,k,k_1qw_k,nk_1+q→k+q,n'k_1^Born/sinh^2ω/2T(f_0(ϵ_k+q^p)-f_0(ϵ_k^p))(f_0(ϵ_nk_1+q^a)-f_0(ϵ_n'k_1^a)) ×[g_k^p+g_nk_1+q^a-g_k+q^p-g_n'k_1^a]. We now use Eq. (<ref>) to calculate the drag force for the case where the distribution functions are boosted velocity distributions, and treat the more general case of energy-dependent boost velocities later. Simple case: boosted velocity distributions For boosted velocity distributions, g_nk^l=k·u^l/T. Substituting into Eq. (<ref>) leads to F^p,a (Born) =W/8T∑_ξ=±1∑_nn'_q,k,k_1q[q·(u^a-u^p)] ×w_k,nk_1+q→k+q,n'k_1^Born/sinh^2ω/2T(f_0(ϵ_k+q^p)-f_0(ϵ_k^p))(f_0(ϵ_nk_1+q^a)-f_0(ϵ_n'k_1^a)). For any isotropic system, this results in the drag force F^p,a (Born)=η_∥^D/d(u^a-u^p), with the drag coefficient η_∥^D given by η_∥^D=Wd/16T∑_ξ=±1∑_nn'_q,k,k_1q^2/sinh^2ω/2Tw_k,nk_1+q→k+q,n'k_1^Born(f_0(ϵ_k+q^p)-f_0(ϵ_k^p))(f_0(ϵ_nk_1+q^a)-f_0(ϵ_n'k_1^a)). Substituting the specific form of w^Born, Eq. (<ref>), we obtain η_∥^D=Wd/8π T_-∞^∞dω1/sinh^2ω/2T_qq^2|U_RPA^R(q,ω)|^2ImΠ_0^p,R(q,ω)ImΠ_0^a,R(q,ω), where Π_0^l,R(q,ω) are the bare polarization operators of the layers, with their imaginary parts given by ImΠ_0^p,R(q,ω) =π_kδ(ϵ_k+q^p-ϵ_k^p -ω)(f_0(ϵ_k+q^p)-f_0(ϵ_k^p )), ImΠ_0^a,R(q,ω) =π∑_ξ=±1∑_nn'_kδ(ϵ_nk+q^a -ϵ_n'k^a -ω)(f_0(ϵ_nk+q^a )-f_0(ϵ_n'k^a ))|⟨ u_n'k| u_nk+q⟩|^2. Evaluating Eq. (<ref>) with the approximate Coulomb interaction [Eq. (<ref>)] in the limits T≪ T_d and T≫ T_d leads to Eqs. (<ref>) and (<ref>) of the main text. To briefly explain the calculation, in the limit T≪ T_d, the frequency integral in Eq. (<ref>) is dominated by ω∼ T and the result breaks into the product of the independent ω and q integrals. In the limit T≫ T_d, the frequency integration is cut off by the boundary of the particle-hole spectrum in the layers, ω<min(v_F^a,v_F^p)q, and the ω and q integrals do not factorize <cit.>. The thermal factor can be approximated by sinh(ω/2T)≈ω/(2T) <cit.>. Note that the frequency dependence of the interlayer scattering propagator U_RPA^R(q,ω) is important at high temperatures (see Sec. <ref> in this Appendix for more details about the calculation). General case: energy-dependent boost velocities We now consider the more general case, parametrizing the non-equilibrium distribution functions with an energy-dependent boost velocity g_nk^a=k·u^a(ϵ_nk^a )/T, as in Sec. <ref> of the main text. For simplicity, we take u^p=0. Substituting this form of g_nk^a in Eq. (<ref>) yields F^p,a (Born) =W/8T∑_ξ=±1∑_nn'_q,k,k_1qw_k,nk_1+q→k+q,n'k_1^Born/sinh^2ω/2T(f_0(ϵ_k+q^p)-f_0(ϵ_k^p))(f_0(ϵ_nk_1+q^a)-f_0(ϵ_n'k_1^a)) ×[(k_1+q)·u^a(ϵ_n'k_1^a+ω)-k_1·u^a(ϵ_n'k_1^a)]. Performing a Taylor expansion for u^a(ϵ) up to the first-derivative term, we find F^p,a (Born) =W/8T∑_ξ=±1∑_nn'_q,k_1qw_k,nk_1+q→k+q,n'k_1^Born/sinh^2ω/2T(f_0(ϵ_k+q^p)-f_0(ϵ_k^p))(f_0(ϵ_nk_1+q^a)-f_0(ϵ_n'k_1^a)) [q·u^a(ϵ_n'k_1^a)+k_1·ω∂u^a(ϵ_n'k_1^a)/∂ϵ_n'k_1^a]≡F^p,a[u^a]+F^p,a[∂u^a/∂ϵ]. In the limit where ϵ_F^a≫ T,T_d, one may substitute u^a(ϵ_k'^a )≈u^a(ϵ_F^a) in the first term, returning to the case of the last section. The second term in Eq. (<ref>) arises from the energy dependence of the boost velocity. 
We write it as F_α^p,a[∂u^a/∂ϵ]≡η_∥(1)^Dϵ_F^a/d.∂ u_α^a/∂ϵ|_ϵ=ϵ_F^a. This part of the force corresponds to the third term in Eq. (<ref>) of the main text, with the drag coefficient η_∥(1)^D=Wd/8π T∫ dω1/sinh^2ω/2T_qq^2(ω/v_F^aq)^2|U_RPA^R(q,ω)|^2ImΠ_0^p,R(q,ω)ImΠ_0^a,R(q,ω). For an e-e scattering where the WSM electron scatters from (momentum, energy) (k,ϵ_nk^a)→(k+q,ϵ_n'k+q^a=ϵ_nk^a+ω), the factor ω/v_F^aq is equal to the cosine of the angle between v_nk^a and q (for k∼ k_F≫ q∼1/d). Thus, this factor approaches one for forward scattering (i.e., for q∥v_nk^a) and zero for perpendicular scattering (q⊥v_nk^a). For low temperatures (T≪ T_d), perpendicular scattering is dominant (ω/v_F^aq∼ Td/v_F^a≪1), and the resulting contribution to the drag from η_∥(1)^D is subleading in (T/T_d)^2 compared to the η_∥(0)^D term [Eq. (<ref>)]. In the opposite limit where T≫ T_d, the two terms are comparable. We evaluate the integral with the approximated Coulomb interaction [Eq. (<ref>)] to obtain the value of η_∥(1)^D presented in Eq. (<ref>) of the main text. Note that in the case v_F^a≫ v_F^p, interlayer collisions with forward scattering in the WSM are not possible, and the coefficient η_∥(1)^D becomes parametrically small, as can be seen from Eq. (<ref>). §.§ Skew scattering Next, we calculate the drag force from the skew-scattering parts of the e-e collision integral, corresponding to the e-e-impurity interference and side-jump modified e-e collision integrals. e-e-impurity scattering The linearized form of the e-e-impurity part of the collision integral is given in Eq. (<ref>). We calculate the contributions from the two terms in the scattering rate W= W^(1)+ W^(2) [Eq. (<ref>)] separately, writing I_k^e-e-imp (p,a)= I_k^e-e-imp (1)+ I_k^e-e-imp (2). The term W^(1) is antisymmetric in incoming and outgoing particles, and the corresponding term in the collision integral is given by I_k^e-e-imp (1) =-W/4∑_ξ=±1∑_nn'_q,k_1 W_k,nk_1+q→k+q,n'k_1^(1)/sinh^2ω/2T(f_0(ϵ_k+q^p)-f_0(ϵ_k^p))(f_0(ϵ_nk_1+q^a)-f_0(ϵ_n'k_1^a)) ×(g_nk_1+q^a+g_n'k_1^a), where we utilized the momentum conservation of W^(1) to eliminate one momentum integration. In the limit k_F^a≫1/d, the terms corresponding to g_nk_1+q^a and g_n'k_1^a in Eq. (<ref>) give equal contributions, and we can simplify, substituting W^(1) from Eq. (<ref>), I_k^e-e-imp (1) =-W/8T_-∞^∞dω1/sinh^2ω/2T_q(f_0(ϵ_k+q^p)-f_0(ϵ_k^p))δ(ϵ_k+q^p-ϵ_k^p-ω) ×|U_RPA^R(q,ω)|^2[1-(ω/v_F^aq)^2]ImΠ_0^a,R(q,ω)C/ϵ_F^aτ^aϵ_αβq_αu_β^a. Calculating the corresponding drag force in the same manner as in the previous subsection, we find F_α^e-e-imp (1) ≡_kk_α I_k^e-e-imp (1) =-W/32π TC/ϵ_F^aτ^a_-∞^∞dω1/sinh^2ω/2T_qq^2|U_RPA^R(q,ω)|^2[1-(ω/v_F^aq)^2]ImΠ_0^p,R(q,ω)ImΠ_0^a,R(q,ω)ϵ_αβu_β^a. This corresponds to a Hall-like drag force, F_α^e-e-imp (1)∼ϵ_αβη_H(e-e-imp,1)^Du_β^a/d [second term in Eq. (<ref>)], with η_H(e-e-imp,1)^D=-C/4ϵ_F^aτ^aη_D^∥Q_3(v_F^p/v_F^a), and Q_3(z) given in Eq. (<ref>). The calculation for the contribution from I_k^e-e-imp (2) is similar, and leads to η_H(e-e-imp,2)^D=η_H(e-e-imp,1)^D/3. In total, e-e-impurity scattering generates the drag force F_α^p,a (e-e-imp)=η_H(e-e-imp)^Dϵ_αβu_β^a/d, with η_H(e-e-imp)^D=4/3η_H(e-e-imp,1)^D. Side-jump collision integral Next, we calculate the contribution from the side-jump correction to the e-e collision integral, Eq. (<ref>). Since the electric field is explicit in the scattering rate, in the linear response level we substitute the equilibrium value of the distribution functions, obtaining I_k^s.j. 
(p,a) =π W/2T∑_ξ=±1∑_nn'_k_1,q1/sinh^2(ω/2T)|U_RPA^R(q,ω)|^2δ(ϵ_k^p+ϵ_nk_1+q^a-ϵ_k+q^p-ϵ_n'k_1^a) ×(f_0(ϵ_k+q^p)-f_0(ϵ_k^p))(f_0(ϵ_nk_1+q^a)-f_0(ϵ_n'k_1^a))|⟨ nk_1+q| n'k_1⟩|^2δr_nk_1+q,n'k_1,· eE. Substituting the value of the coordinate shift for the Weyl electrons in a node of chirality ξ <cit.> δr_nk,n'k'=ξk̂×k̂'/4|⟨k|k'⟩|^2(n'/k+n/k'), for same bands, approx the above expression by ≃ξ nq×k/2k^3, we get I_k^s.j. (p,a) =-WC/8T dω1/4sinh^2ω/2T_q|U_RPA^R(q,ω)|^2(f_0(ϵ_k+q^p)-f_0(ϵ_k^p))δ(ϵ_k+q^p-ϵ_k^p-ω) ×ImΠ_0^a,R(q,ω)[1-(ω/v_F^aq)^2](v_F^a/ϵ_F^a)^2ϵ_αβq_αeE_β. The resulting drag force is given by F_α^p,a (s.j.) =_kk_α I_k^s.j. (p,a) =-WC/32π T dω1/sinh^2ω/2T_q|U_RPA^R(q,ω)|^2[1-(ω/v_F^aq)^2]ImΠ_0^p,R(q,ω)ImΠ_0^a,R(q,ω) ×(v_F^a/ϵ_F^a)^2q^2ϵ_αβeE_β. The resulting force is perpendicular to the electric field in the active layer. Since F^p,a (s.j.) is already subleading in (1/ϵ_F^aτ^a) compared to the leading part of F^p,a, we can approximate eE^a≃u^aϵ_F^a/((v_F^a)^2τ^a,∥)=2u^aϵ_F^a/(3(v_F^a)^2τ^a) and write Eq. (<ref>) as F_α^p,a (s.j.) =-WC/48π T dω1/sinh^2ω/2T_q|U_RPA^R(q,ω)|^2[1-(ω/v_F^aq)^2]ImΠ_0^p,R(q,ω)ImΠ_0^a,R(q,ω) ×1/ϵ_F^aτ^aq^2ϵ_αβu_β^a≡η_H(s.j.)^Dϵ_αβu_β^a/d, with η_H(s.j.)^D=η_H(e-e-imp)^D/2. Summing the contributions from the e-e-impurity scattering and e-e-side-jump scattering, F_α^p,a (H)≡ F_α^p,a (e-e-imp)+F_α^p,a (s.j.)≡η_H^Dϵ_αβu_β^a/d, we find the total result for the Hall component of drag response η_H^D=η_H(e-e-imp)^D+η_H(s.j.)^D given in the main text, Eqs. (<ref>) and (<ref>). §.§ Frequency integrals at high temperatures In the high-temperature limit T≫ T_d, the calculations of the drag coefficients involve cumbersome integrals due to the frequency dependence of the interlayer Coulomb interaction [see Eqs. (<ref>), (<ref>), (<ref>) and (<ref>)]. In the main text, we write the results for the drag coefficients [Eqs. (<ref>)-(<ref>)] by denoting these integrals with the functions Q_1(z), Q_2(z) and Q_3(z), with z≡ v_F^p/v_F^a. Let us write the rightmost fraction in the RHS of Eq. (<ref>) as Y(ω̃,z)≡1-(ω̃/z)^2/(1+ω̃/2log(1-ω̃/1+ω̃))^2+π^2/4ω̃^2, where ω̃≡ω/v_F^aq is a rescaled frequency. The functions Q_1,2,3(z) denote the integrals over ω̃ in the calculations of the drag coefficients, and are given by Q_1(z) ≡1/min(1,z)_0^min(1,z)dω̃Y(ω̃,z)/√(1-(ω̃/z)^2), Q_2(z) ≡1/min(1,z^3)_0^min(1,z)dω̃ω̃^2Y(ω̃,z)/√(1-(ω̃/z)^2), Q_3(z) ≡ Q_1(z)-min(1,z^2)Q_2(z). In the limit z→0, Q_1(z)=3π/16,Q_2(z)=π/32, and in the limit z→∞, Q_1(z)≈0.800, Q_2≈0.205. We evaluate the integrals numerically and plot the functions in Fig. <ref>. § NON-INTERACTING AHE IN A WSM Here, we briefly summarize the calculation of the AHE conductivity in the model of a TRS-breaking tilted WSM [Eq. (<ref>) of the main text], following Refs. <cit.>. For each Weyl node of chirality ξ described by the Hamiltonian H_ξ=v_F(ξσ·k+C_ξk_z), the energies are given by ϵ_nk=v_F(nk+C_ξk_z), where n=±1 denotes the upper and lower bands, and the momentum is measured relative to the center of the Weyl node. It is convenient to define the product of the node chirality and the band index, ζ≡ξ n. In this notation, the eigenstates of a Weyl node are written in the spinor basis as |u_ζ=1,k⟩ =[ cosθ/2; sinθ/2e^iφ ], |u_ζ=-1,k⟩ =[ -sinθ/2; cosθ/2e^iφ ], where k=k(sinθcosφ,sinθsinφ,cosθ). We now turn to the calculation of the AHE conductivity in a single Weyl node, multiplying the result by the number of nodes in the final step. 
The corrected Boltzmann equation for the WSM in a small electric field and in the steady state reads <cit.> eE·v_s∂ f_0/∂ϵ_s=-_s'w_s,s'(f_s-f_s')+_s'w_s,s'eE·δr_s,s'(-∂ f_0(ϵ_s')/∂ϵ_s'), where s=(k,n) denotes the combined (momentum, band) state index, _s≡∑_n dk/(2π)^3, and w_s,s' is the scattering rate due to disorder. The second term in the RHS represents the side-jump correction to the collision integral due to the coordinate shift δr_s,s' that an electron obtains when scattering from s'→ s [Eq. (<ref>)]. The disorder scattering rate is given by w_s,s' ≡ w_s,s'^Born+w_s,s'^skew, w_s,s'^Born =2πγδ(ϵ_s-ϵ_s')|⟨ u_s| u_s'⟩|^2, w_s,s'^skew =4π^2γ^2ν(ϵ_s)/3ϵ_sδ(ϵ_s-ϵ_s')sinθ_ksinθ_k'sin(φ_k'-φ_k). Here, ν(ϵ) is the density of states of a single Weyl node, given by ν(ϵ)≡_s'δ(ϵ-ϵ_s)=ϵ^2/2π^2v_F^3(1-C^2)^2. To solve Eq. (<ref>), it is convenient to solve the side-jump collision integral separately by writing δ f_s≡δ f_s^n+δ f_s^anomal. and solving the two equations eE·v_s∂ f_0/∂ϵ_s =-_s'w_s,s'(δ f_s^n-δ f_s'^n), 0 =-_s'w_s,s'(δ f_s^anomal.-δ f_s'^anomal.)+_s'w_s,s'eE·δr_s,s'(-∂ f_0(ϵ_s')/∂ϵ_s'). Let us define the elastic, the transport, and the skew-scattering times 1/τ_s ≡_s'w_s,s'=πγν(ϵ_s), 1/τ_s,∥≡ _s'w_s,s'(1-sinθ_k'/sinθ_kcosθ_k,k')=2/31/τ_s+O(C^2), 1/τ_s,⊥(skew) ≡_s'w_s,s'sinθ_k'/sinθ_ksin(φ_k'-φ_k)=ξ2C_ξ/3ϵ_sτ_s1/τ_s,∥+O(C^3). In the limit ϵ_Fτ≫1, the skew-scattering time is much longer than the parallel one (τ_⊥(skew)≫τ_∥), and the solutions to Eqs. (<ref>), (<ref>) are given by δ f_s^n =-∂ f_0/∂ϵ_sv_s·(eE+τ_s,∥/τ_s,⊥(skew)eE×ẑ)τ_s,∥, δ f_s^anomal. =τ_s,∥_s'w_s,s'eE·δr_s,s'(-∂ f_0(ϵ_s')/∂ϵ_s')=-∂ f_0/∂ϵ_sv_s·(eE×ẑ)(ξ5C_ξ/6ϵ_sτ_s+O(C^3))τ_s,∥. Since the anomalous distribution δ f_s^anomal. is of the same form as the second term in Eq. (<ref>), we write the entire non-equilibrium distribution function δ f_s in the form of Eq. (<ref>) [Eq. (<ref>) in the main text], absorbing the anomalous distribution into the definition of the perpendicular transport time 1/τ_s,⊥≡1/τ_s,⊥(skew)+ξ5C_ξ/6ϵ_sτ_s1/τ_s,∥=ξ3C_ξ/2ϵ_sτ_s1/τ_s,∥. The scattering times τ_∥,τ_⊥ [Eqs. (<ref>), (<ref>)] are those used for writing the distribution function of the WSM layer in the main text [Eq. (<ref>)]. The velocity operator of the WSM electrons is composed of regular and anomalous parts, dr/dt=v_s+v_s^int.+v_s^ext., where v_s=∂ϵ_s/∂k is the regular part corresponding to the band group velocity, and the internal and external velocities are given by <cit.> v_s^int. =eE×Ω_s, v_s^ext. =_s'w_s',sδr_s',s. We note that the intrinsic velocity gives rise to anomalous Hall current from the filled bands, which cannot be calculated from the low-energy Hamiltonian [Eq. (<ref>)] <cit.>. One may calculate this Fermi-sea contribution by regularizing the Hamiltonian (e.g., modifying the σ_z term in the Hamiltonian to be v_F(√(k_x^2+k_y^2+k_z^2)-k_0^2)σ_z, putting two Weyl nodes of opposite chirality at k=± k_0k̂_z <cit.>) or by imposing boundary conditions. When the chemical potential is at the neutrality point (ϵ_F=0), integrating the intrinsic current over the filled lower band reproduces the known result for the AHE conductivity for a pair of Weyl nodes, σ_xy^int.(ϵ_F=0)=e^2Δ_k/(4π^2) <cit.>. For non-zero ϵ_F, we compute the intrinsic contribution by σ_xy^int.(ϵ_F) = σ_xy^int.(ϵ_F=0) +22e/E_x_s(f_0(ϵ_s,μ=ϵ_F)-f_0(ϵ_s,μ=0))v_s,y^int., where the factor 2 in the second term accounts for the two Weyl nodes. Calculating the contributions to the AHE conductivity from each part of the velocity operator [Eq. 
(<ref>)], we find (multiplying the Fermi-surface contributions by 2 to account for the two nodes) σ_xy^reg. =e^23ϵ_FC/4π^2v_F, σ_xy^int. =e^2[Δ_k/4π^2-ϵ_FC/6π^2v_F], σ_xy^ext. velocity =e^25ϵ_FC/12π^2v_F, where σ_xy^reg.,σ_xy^int.,σ_xy^ext. velocity correspond to v_s,v_s^int.,v_s^ext., respectively[We note that the sign of the second term in Eq. (<ref>) appears to disagree with Ref. <cit.> but to agree with Ref. <cit.>.]. In the main text, we combine the anomalous contributions to one term, σ_xy^a,int.+ext.vel.≡σ_xy^int.+σ_xy^ext. velocity. Note that the intrinsic Hall conductivity is the only non-vanishing term when the Fermi energy is set in the neutrality point (ϵ_F=0). In the notations of Refs. <cit.>, our expression for σ_xy^reg. is equivalent to σ_xy^skew +σ_xy^s.j./2, and σ_xy^ext. velocity is equivalent to σ_xy^s.j. /2.
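The ratio 1/τ_∥=(2/3)(1/τ)+O(C^2) between the transport and elastic rates quoted in this Appendix can be checked by a simple Monte Carlo angular average. The sketch below assumes the Born weight |⟨u_k|u_k'⟩|^2=(1+k̂·k̂')/2 on the Fermi surface at C=0, and reads cosθ_k,k' as the cosine of the azimuthal-angle difference between k and k'; it is an illustrative check, not code from the paper.

import numpy as np
rng = np.random.default_rng(1)

# Fixed incoming direction on the Fermi surface (away from the poles).
theta_k, phi_k = np.pi / 3, 0.0
khat = np.array([np.sin(theta_k) * np.cos(phi_k), np.sin(theta_k) * np.sin(phi_k), np.cos(theta_k)])

N = 2_000_000
kp = rng.normal(size=(N, 3))
kp /= np.linalg.norm(kp, axis=1, keepdims=True)          # uniform outgoing directions k'
cos_gamma = kp @ khat
w = 0.5 * (1.0 + cos_gamma)                              # Born weight |<u_k|u_k'>|^2, same band, C = 0
sin_theta_p = np.sqrt(np.clip(1.0 - kp[:, 2] ** 2, 0, 1))
cos_dphi = np.cos(np.arctan2(kp[:, 1], kp[:, 0]) - phi_k)

rate_el = w.mean()                                       # proportional to 1/tau_el
rate_tr = (w * (1.0 - sin_theta_p / np.sin(theta_k) * cos_dphi)).mean()   # proportional to 1/tau_parallel
print(rate_tr / rate_el)                                 # approaches 2/3, as quoted in the text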
http://arxiv.org/abs/2407.02873v1
20240703073826
Robot Shape and Location Retention in Video Generation Using Diffusion Models
[ "Peng Wang", "Zhihao Guo", "Abdul Latheef Sait", "Minh Huy Pham" ]
cs.RO
[ "cs.RO" ]
Robot Shape and Location Retention in Video Generation Using Diffusion Models Peng Wang Zhihao Guo Abdul Latheef Sait Minh Huy Pham ======================================================================================== § ABSTRACT Diffusion models have marked a significant milestone in the enhancement of image and video generation technologies. However, generating videos that precisely retain the shape and location of moving objects such as robots remains a challenge. This paper presents diffusion models specifically tailored to generate videos that accurately maintain the shape and location of mobile robots. This development offers substantial benefits to those working on detecting dangerous interactions between humans and robots by facilitating the creation of training data for collision detection models, circumventing the need to collect data from the real world, which often involves legal and ethical issues. Our models incorporate techniques such as embedding accessible robot pose information and applying semantic mask regulation within the ConvNext backbone network. These techniques are designed to refine intermediate outputs, thereby improving the retention of shape and location. Through extensive experimentation, our models have demonstrated notable improvements in maintaining the shape and location of different robots, as well as enhancing overall video generation quality, compared to the benchmark diffusion model. Code will be open-sourced at https://github.com/PengPaulWang/diffusion-robots (GitHub). § INTRODUCTION Diffusion models have achieved remarkable advances in recent years, reaching better or on-par performance compared with generative adversarial networks in image and video generation <cit.>. Compared to image generation, video generation remains a challenge in terms of model complexity, dependence on data and computational resources, consistency of generated videos, generation efficiency, and shape and location retention of dynamic objects in generated videos <cit.>. Despite all the challenges, the potential of diffusion models to generate dynamic and appealing content has driven the research and application forward, and they have been applied in generating high-quality videos <cit.>, carrying out video prediction and infilling <cit.>, controlling movements in the generated video <cit.>, and directly processing and manipulating a real input video <cit.>. Another promising application of diffusion models is that they can be used to generate data for dangerous interaction detection in cases such as human-robot collaboration, where collecting real data for model training faces legal and ethical challenges. The foundational technology behind many of the applications mentioned is the Denoising Diffusion Probabilistic Model (DDPM), which is trained to understand Gaussian noise patterns added to input images throughout the training process. Once sufficiently trained, the DDPM can start with noisy images or images that consist purely of Gaussian noise and, through iterative denoising, produce outputs that adhere to a specific empirical distribution <cit.>. The evaluation of diffusion models' performance often relies on metrics like the Peak Signal-to-Noise Ratio (PSNR), which measures the overall quality of frames or videos by computing pixel-to-pixel differences between the generated frames and the reference frames, if any.
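To make the metric concrete, the snippet below is a minimal illustration of how PSNR is obtained from the pixel-wise mean squared error between a generated frame and its reference, together with a structural-similarity value from scikit-image for comparison with the structural measure discussed next; it is an illustrative sketch, not the exact evaluation code used in this work.

import numpy as np
from skimage.metrics import structural_similarity

def psnr(reference, generated, max_val=255.0):
    """Peak Signal-to-Noise Ratio from the pixel-to-pixel mean squared error."""
    mse = np.mean((reference.astype(np.float64) - generated.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

# toy example with synthetic frames; real use would load generated/reference video frames
ref = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)
gen = np.clip(ref + np.random.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
print("PSNR:", psnr(ref, gen))
print("SSIM:", structural_similarity(ref, gen, channel_axis=-1, data_range=255))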
However, relying solely on PSNR may overlook structural information loss, such as local distortions of the shape of objects of interest, providing a misleadingly positive assessment of overall performance. For instance, Figure <ref> shows one original frame (left) with a robot, and two frames generated by diffusion models (middle and right). We can see that the generated frame on the right has a broken arm (lost retention of the shape), while the generated frame in the middle maintains the shape of the arm. Despite the failed arm shape retention, the two generated frames have similar PSNR values as the distorted arm does not contribute enough to make a distinctive difference in PSNR values. This oversight is particularly critical in scenarios where an object's shape and location are crucial in generated frames. For instance, in human-robot collaborative tasks, there is the need to forecast potential collisions between humans and (dynamic) robots, and the collection of such data for collision model training in real life often faces ethical and legal challenges. Therefore, using diffusion models to generate data with shape and location retention becomes a promising solution. In light of these observations, the Structural Similarity Index (SSIM) emerges as an alternative metric for evaluating diffusion models. Unlike PSNR, SSIM is adept at capturing structural similarities and differences, making it a more reliable indicator of a model's ability to preserve object shapes and locations. This paper aims at developing diffusion models that can generate frames where the shape and location of objects of interest can be retained. Particularly, we are interested in generating videos that contain moving robots, whose shape and location retention are vital in the generated frames. As mentioned earlier, this will for example help to generate data for human-robot collision detection tasks and bypass legal and ethical challenges. Two types of robots are used in different scenarios, i.e., a Waffle Pi mobile robot with a gripper mounted on top and a collaborative robot, a.k.a., cobot. The proposed diffusion models take the ConvNext <cit.> as the backbone network, to accelerate the training and sampling efficiency <cit.>. To retain the shape and location of the robots, we have embedded the robot pose information such as location, orientation, and velocities into ConvNext blocks and used semantic masks (either the masks of the robots or the masks of the robots and the backgrounds) to regulate the intermediate outputs of ConvNext blocks. Various experiments have been conducted to investigate how pose embedding and mask regulation affect the performance of the models in shape and location retention. The contributions of this work include 1) the development of diffusion models capable of preserving the shape and location of robots within generated frames. This advancement shows promise for generating data to train models aimed at detecting collisions between humans and robots in the future. 2) the introduction of a novel Spatially-Adaptive Normalization (SPADE) module for integrating semantic masks, and the implementation of an embedding procedure that incorporates robot pose information from controllers like the Robot Operating System (ROS) into the backbone network, which strikes a balance between the quality of generation and the preservation of shape and location information. 
3) the introduction of a refined Intersection over Union (IoU) metric and a Hu-moments match for evaluating the retention of location and shape. The remainder of the paper is organised as follows: Section II presents some related works, Section III elaborates on the approach, Section IV covers experiments, discussions, and an ablation study, and finally, Section V concludes the paper. § RELATED WORKS Most video generation models based on DDPMs share the same underlying core backbone UNet <cit.> for the denoising process. They differ substantially from each other in the conditions used to generate new frames. There are mainly three types of conditions, i.e., (i) embedded context information, e.g., Yang et al. <cit.> propose residual video diffusion, which utilises a context vector generated by a convolutional recurrent neural network as a condition to generate the next frame; (ii) semantic masks, e.g., Wang et al. <cit.> propose the semantic diffusion model where semantic masks are employed to condition new frames, resulting in improved quality in generating small objects; and (iii) video frames as conditions, e.g., Vikram et al. <cit.> propose masked conditional video diffusion, which involves masking frames from the past or future. The model is trained on unmasked frames and generates the masked frames based on the chosen masking strategy. Yaniv et al. <cit.> have recently introduced SinFusion, a video generation diffusion model utilizing ConvNext <cit.> as the backbone. This model can produce images or videos based on a single input image or video. The novel architecture proves particularly advantageous in training DDPMs on a single image or its large crops, circumventing the `overfitting' issues associated with UNet. This is achieved by restricting the receptive field of UNet to non-global areas and reducing computational time compared to standard DDPMs <cit.>. Such improvements are especially beneficial for real-life applications like human-robot collaboration <cit.>. While SinFusion has demonstrated comparable or even superior results compared to other video generation models, the authors note potential drawbacks, such as the possibility of breaking dynamic objects in the generated results. An example of this issue is illustrated in Figure <ref> (Right). Inspired by the simple structure and efficiency of SinFusion, and the embedding techniques from other works <cit.> in improving the performance of diffusion models, we have adopted ConvNext as our backbone and introduced robot pose embedding and semantic mask regulation to help retain the shape and location of dynamic objects (robots) in the generated frames, a step towards applying diffusion models to generate data for dangerous human-robot interaction detection model training. § DIFFUSION MODELS The theory and fundamental principles of diffusion models were introduced by Sohl-Dickstein et al.<cit.> and further elaborated upon in subsequent studies like those by Ho et al.<cit.>, as well as other works such as that by Hoppe et al.<cit.>. In essence, diffusion models utilise a deep neural network ℳ, such as UNet<cit.>, as their backbone network. This network is trained on noisy data, such as images and video frames, to enable the trained model to accurately identify and model the noise present in the input data. The training of diffusion models comprises two primary stages: the forward diffusion process (forward process) and the reverse diffusion process (reverse process).
In the forward process, data such as images and videos serve as inputs, and the structure of the data distribution is disrupted by introducing noise. This facilitates the training of model ℳ to recognize and model the noise imposed on the data. The reverse diffusion process, known as the reverse process, aims to reconstruct the data structure from noisy data or the noise itself. In this paper, we will first review these two stages of diffusion models in the context of video generation, followed by our proposed works. §.§ The Forward Diffusion Process In the context of image/video generation, given an input frame 𝐱_0 sampled from a distribution q(𝐱_0), one can iteratively add Gaussian noise Σ_t ∼𝒩(Σ_t;0,𝐈), t=1,⋯, T to 𝐱_0 for T steps. This process generates a sequence of noisy samples {𝐱_1, ⋯, 𝐱_T}. The variance of the noise added at each step can be controlled using a variance scheduler {β_t ∈ (0,1)}_t=1^T. The forward diffusion process is normally formulated as a Markov chain: q(𝐱_1:T|𝐱_0):=∏_t=1^T q(𝐱_t|𝐱_t-1), where q(𝐱_t |𝐱_t-1) := 𝒩(𝐱_t; √(1 - β_t)𝐱_t-1, β_t𝐈), which indicates the dependency of 𝐱_t on 𝐱_t-1. This also implies that to get a noisy sample at 𝐱_t, one needs to add noises from 𝐱_0 up to 𝐱_t-1 step by step, which could be time and computational resources demanding. Fortunately, this can be simplified as shown in <cit.>, i.e., the forward process admits sampling 𝐱_t at an arbitrary timestep t in closed form. This is achieved by letting α_t = 1 - β_t and α̅_t = ∏_i=1^t α_i, one then gets q(𝐱_t |𝐱_0) := 𝒩(𝐱_t; √(α̅_t)𝐱_0, (1 - α̅_t)𝐈), which indicates that 𝐱_t can be sampled from 𝐱_0 in one step as in 𝐱_t = √(α̅_t)𝐱_0 + √(1 - α̅_t)Σ, where Σ∼𝒩(0,𝐈) is the noise used to generate the noisy frame 𝐱_t. §.§ The Reverse Diffusion Process The reverse diffusion process involves starting with a Gaussian noise 𝐱_T∼𝒩(0,𝐈) and then reversing the transition outlined in Equation (<ref>). This reversal allows for sampling from the posterior of the forward process q(𝐱_t-1|𝐱_t), with t = T, ⋯, 1, in order to recover 𝐱_0 (it's worth noting that the process can terminate at any intermediate stage). However, reversing Equation (<ref>) presents a challenge, and it is typically approximated using a trainable Markov chain depicted in Equation (<ref>), which begins with a Gaussian noise p(𝐱_T)=𝒩(𝐱_T;0,𝐈): p_θ(𝐱_0:T):=p(𝐱_T)∏_t=1^T p_θ(𝐱_t-1|𝐱_t), where p_θ(𝐱_t-1|𝐱_t):=𝒩(𝐱_t-1;μ_θ(𝐱_t, t), Σ_θ(𝐱_t,t)). One can see that if p_θ(𝐱_0:T) can be learned by ℳ, then the reverse process simplifies to p_θ(𝐱_0):=∫ p_θ(𝐱_0:T)d𝐱_1:T, where 𝐱_1:T are latent variables of the same dimensions with 𝐱_0. The approximation of q(𝐱_1:T|𝐱_0) using p_θ(𝐱_0:T) is achieved by optimising the variational bound on negative log-likelihood between them <cit.>: 𝔼[-log p_θ(𝐱_0)]≤𝔼_q[-logp_θ(𝐱_0:T)/q(𝐱_1:T|𝐱_0)]:=L, which can be rewritten into Equation (<ref>) according to <cit.>: L := 𝔼_q [D_KL(q(𝐱_T |𝐱_0) ∥ p_θ(𝐱_T))_L_T + ∑_t=2^T D_KL(q(𝐱_t-1|𝐱_t, 𝐱_0) ∥ p_θ(𝐱_t-1|𝐱_t))_L_t-1 - log p_θ(𝐱_0 |𝐱_1)_L_0], where D_KL represents the KL divergence. One can see that each term in Equation (<ref>) is a direct measure of the similarity in terms of KL divergence between p_θ(𝐱_t-1|𝐱_t) and the reversed forward transitions but conditioned on 𝐱_0, i.e., q(𝐱_t-1|𝐱_t, 𝐱_0). It is noteworthy that q(𝐱_t-1|𝐱_t, 𝐱_0) is tractable and this makes optimisation of L viable, henceforth making the approximation of q(𝐱_1:T|𝐱_0) using p_θ(𝐱_0:T) viable. 
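As an illustration of the two stages above, the following PyTorch-style sketch implements the closed-form forward sampling 𝐱_t = √(α̅_t)𝐱_0 + √(1 - α̅_t)Σ together with the commonly used noise-prediction simplification of the variational objective. The linear β-schedule values, the generic `model` interface, and the channel-wise concatenation of a condition frame (described in the inputs section below) are assumptions for illustration, not necessarily the exact choices of this paper.

import torch

T = 1000
betas = torch.linspace(1e-4, 2e-2, T)            # assumed linear variance scheduler {beta_t}
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)        # \bar{alpha}_t = prod_{i<=t} alpha_i

def q_sample(x0, t, noise):
    """Forward process in closed form: x_t = sqrt(abar_t) x_0 + sqrt(1 - abar_t) * noise."""
    ab = alpha_bars[t].view(-1, 1, 1, 1)
    return ab.sqrt() * x0 + (1.0 - ab).sqrt() * noise

def training_step(model, x0, cond):
    """One step of the simplified DDPM objective: predict the injected noise."""
    t = torch.randint(0, T, (x0.shape[0],), device=x0.device)
    noise = torch.randn_like(x0)
    x_t = q_sample(x0, t, noise)
    pred = model(torch.cat([cond, x_t], dim=1), t)   # condition frame concatenated along channels
    return torch.nn.functional.mse_loss(pred, noise)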
In the context of video generation, an arbitrary noisy sample 𝐱_t, t = T, ⋯, 1 sampled using Equation (<ref>) is fed to the deep neural network-based model ℳ, which is trained (by optimising Equation (<ref>)) to approximate the noise Σ_t imposed. When well trained, ℳ will be able to identify and model the noises, helping to remove the noise and restore data structures. Inspired by advancements in image and video generation, researchers have introduced various diffusion models. These models include those that utilise semantic masks as conditions to produce high-quality images <cit.>, among others. Semantic masks offer valuable information, such as object shapes and locations, making them ideal for generative tasks that prioritise retaining shape and spatial details. Denoting conditions like masks as 𝐲, Equation (<ref>) can be reformulated as: p_θ(𝐱_0:T|𝐲) = p(𝐱_T) ∏^T_t=1 p_θ(𝐱_t-1|𝐱_t, 𝐲), where p_θ(𝐱_t-1|𝐱_t, 𝐲) = 𝒩(𝐱_t-1; μ_θ(𝐱_t,𝐲, t), Σ_θ(𝐱_t,𝐲, t)). Since the condition 𝐲 applies to p_θ(𝐱_t-1|𝐱_t, 𝐲) for t=T,⋯,1, it is straightforward to substitute these terms involve 𝐲 into Equation (<ref>) to derive the optimization term for conditioned diffusion models: L = 𝔼_q [D_KL(q(𝐱_T |𝐱_0) ∥ p_θ(𝐱_T))_L_T + ∑_t=2^T D_KL(q(𝐱_t-1|𝐱_t, 𝐱_0) ∥ p_θ(𝐱_t-1|𝐱_t, 𝐲))_L_t-1 - log p_θ(𝐱_0 |𝐱_1,𝐲)_L_0]. When the model is well trained, it will take in a Gaussian noise image 𝐱_T ∼𝒩(0,𝐈) and `recreate' samples from it by removing the noise step by step. § SHAPE AND LOCATION RETENTION DIFFUSION MODELS §.§ Overall Architecture Figure <ref> shows the overall architecture of the proposed shape and location retaining diffusion models, as well as the inputs and outputs of the model. We have adopted the ConvNext <cit.> comprised of standard ConvNet modules as the backbone network, which has been proven to be efficient while still facing challenges of containing distorted objects in generated frames <cit.>. To this end, we have introduced semantic mask regulation and robot pose embedding into the module, to improve shape and location retention of such models. The mask regulation and robot pose embedding modules are depicted in Figure <ref> and Figure <ref>, respectively. More details are given as follows. §.§ Inputs and Robot Pose Embedding The inputs of the model include 1)  A condition frame 𝐱_0^n sampled from a video comprising N frames {𝐱_0^1,𝐱_0^2,⋯,𝐱_0^N}, along with a noisy frame 𝐱_t^n+Δ k where t denotes the diffusion steps of 𝐱_0^n, and Δ k represents the frame difference between 𝐱_0^n and 𝐱_0^n+Δ k. These frames are concatenated along the channel dimension as the first input. 2) The diffusion time steps t and frame index difference Δ k between the condition frame and the current frame are embedded following Equation (<ref>). γ(p) = (sin(2^0 π p), cos(2^0 π p), ⋯, sin(2^L-1π p), cos(2^L-1π p)), where p represents either t or Δ k. We have also embedded the robot pose difference vector (Δ x, Δ y, Δ z, Δϕ, Δθ, Δψ)^T:=Δ𝐏 into each ConvNext block, as shown in Figure <ref>. The motive behind this is to use robot pose information to guide model training, and when the model is trained, it will gain better shape and location retention performance in generated frames when conditioned on robot pose. In this paper, we use a linear embedding strategy for the pose difference vector embedding, i.e., Δ𝐏^' = 𝐀·Δ𝐏 + 𝐛. The motive behind this is the pose of the robot changes almost linearly as the time between two frames is short, e.g., 1 second, or 1/24 seconds. 
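For concreteness, the two embeddings above can be sketched as follows: the sinusoidal embedding γ(p) is applied to the diffusion step t and the frame offset Δk, and the pose difference Δ𝐏 is embedded linearly as Δ𝐏' = 𝐀·Δ𝐏 + 𝐛. The embedding sizes and the example pose values below are assumptions made for illustration, not the authors' settings.

import math
import torch
import torch.nn as nn

def sinusoidal_embedding(p: torch.Tensor, L: int = 8) -> torch.Tensor:
    # gamma(p) = (sin(2^0 pi p), cos(2^0 pi p), ..., sin(2^{L-1} pi p), cos(2^{L-1} pi p))
    freqs = (2.0 ** torch.arange(L, dtype=p.dtype)) * math.pi     # 2^l * pi, l = 0..L-1
    angles = p[..., None] * freqs                                 # (..., L)
    return torch.cat([angles.sin(), angles.cos()], dim=-1)        # (..., 2L)

class PoseDifferenceEmbedding(nn.Module):
    # Linear embedding dP' = A dP + b of (dx, dy, dz, dphi, dtheta, dpsi).
    def __init__(self, out_dim: int = 128):
        super().__init__()
        self.linear = nn.Linear(6, out_dim)     # A (weight) and b (bias) are learned

    def forward(self, d_pose: torch.Tensor) -> torch.Tensor:
        return self.linear(d_pose)

t = torch.tensor([250.0])                                   # diffusion step
dk = torch.tensor([1.0])                                    # frame index difference
d_pose = torch.tensor([[0.05, 0.0, 0.0, 0.0, 0.0, 0.12]])   # hypothetical 1 s pose change
cond = torch.cat([sinusoidal_embedding(t), sinusoidal_embedding(dk)], dim=-1)
pose_emb = PoseDifferenceEmbedding(128)(d_pose)
print(cond.shape, pose_emb.shape)                           # (1, 32), (1, 128)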
§.§ Mask Regulation As mentioned earlier, using diffusion models to generate data with shape and location retention will benefit dangerous human-robot interaction by avoiding collecting data directly from real cases, which faces legal and ethical challenges. Semantic masks, abundant in shape and location information, have become easily accessible with advancements in object segmentation models like the Segment Anything model <cit.>. Recognizing the potential benefits of leveraging semantic mask information, we propose incorporating it into ConvNext blocks to regulate intermediate outputs. Our approach introduces a new SPADE-based ConvNext block, outlined in Figure <ref>. Initially, frames and masks undergo separate processing through convolutional layers (conv2d), yielding outputs denoted as 𝐱 and 𝐦, respectively. Subsequently, the output 𝐦 undergoes further processing using the proposed SPADE block to regulate 𝐱. The SPADE block, as shown in Figure <ref>, is defined as follows. 𝐱= 𝐱⊗ f(γ) ⊕σ, where the symbol ⊗ indicates element-wise products, f(·) represents a mapping, 𝐱 is the output of the SPADE block, 𝐦 = Layernorm(Interpolate(𝐦)), γ=conv2d(𝐦), and σ=conv2d(γ). We use the Layernorm(·) module to retain information from all mask channels and the nearest neighbor interpolation method is used for the Interpolate(·) module to ensure the size of masks matches that of the frames. It is worth mentioning that the SPADE normalisation in Equation (<ref>) is different from <cit.> and <cit.> as we focus on using mask information to regulate intermediate outputs of ConvNext module such that shape and location information can be retained in video generation. §.§ Sampling In the sampling phase, the model is presented with a singular frame extracted from the video to generate subsequent frames interactively. This process continues until the desired number of frames has been produced. During each iteration, the model utilises the provided frame and conditions such as pose information and semantic masks to inform the generation of the subsequent frames, ensuring a coherent and sequential flow of frames in the generated video. § EXPERIMENTS §.§ Datasets Given the necessity of robot pose information for training the proposed models, we constructed our datasets accordingly. We employed ROS to control robots in diverse environments, capturing video footage at a frame rate of 24 frames per second (fps). Subsequently, we processed this footage to produce videos with a reduced frame rate of 1 fps, ensuring noticeable changes in the robot's pose. Our dataset comprises two types of robots: the Turtlebot Waffle Pi robot and an industrial collaborative robot, a.k.a. cobot. For the Turtlebot, we recorded videos in two laboratory environments: one with the robot and a simple background (Scene I) and the other with a more complex background (Scene II). Additionally, we recorded the translational and rotational velocities of the robot to calculate robot pose difference vectors. The frames of these videos were annotated to generate the necessary masks. We also created a third dataset (Scene III) featuring the cobot using a similar procedure to test the adaptability and robustness of our models. It is worth noting that our models focus on retaining the shape and location of objects of interest, such as robots, rather than super-resolution or high-resolution frame generation. 
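For reference, the SPADE-based mask regulation described in the Mask Regulation subsection above, x̂ = x ⊗ f(γ) ⊕ σ with m̂ = Layernorm(Interpolate(m)), γ = conv2d(m̂) and σ = conv2d(γ), can be sketched as a small PyTorch module. In this sketch f(·) is taken to be a 1×1 convolution, the layer normalisation carries no learned affine parameters, and the channel sizes are arbitrary; these are assumptions for illustration and the block is not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SpadeMaskRegulation(nn.Module):
    # Regulates intermediate features with mask-derived modulation: x_hat = x * f(gamma) + sigma.
    def __init__(self, feat_ch: int, mask_ch: int):
        super().__init__()
        self.gamma_conv = nn.Conv2d(mask_ch, feat_ch, 3, padding=1)  # gamma = conv2d(m_hat)
        self.sigma_conv = nn.Conv2d(feat_ch, feat_ch, 3, padding=1)  # sigma = conv2d(gamma)
        self.f = nn.Conv2d(feat_ch, feat_ch, 1)                      # f(.): simple learned mapping (assumption)

    def forward(self, x: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # m_hat = Layernorm(Interpolate(m)): resize the mask to the feature map with
        # nearest-neighbour interpolation, then layer-normalise over all mask channels.
        m = F.interpolate(mask, size=x.shape[-2:], mode="nearest")
        m = F.layer_norm(m, m.shape[1:])
        gamma = self.gamma_conv(m)
        sigma = self.sigma_conv(gamma)
        return x * self.f(gamma) + sigma        # element-wise product and addition

x = torch.randn(2, 96, 36, 64)                  # intermediate ConvNext features (channels assumed)
mask = torch.randn(2, 2, 144, 256)              # robot / background mask channels
print(SpadeMaskRegulation(96, 2)(x, mask).shape)  # (2, 96, 36, 64)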
Therefore, irrespective of the original frame sizes, we resized both the frames and masks to dimensions 256× 144 to optimise computational resources and accelerate training. This also facilitates fair comparisons with benchmark models. Our dataset is publicly accessible on https://app.roboflow.com/turtlebot-h8awtRoboflow. §.§ Models In this paper, we explore two types of conditions: masks and robot pose information. To comprehensively compare and understand how these different conditions impact the shape and location retention performance of diffusion models, we investigate three models: 1) Ours-Mask-Pose, where both masks and pose information are utilised as conditions; 2) Ours-Mask, where only masks are employed as conditions; and 3) Ours-Pose, where only pose information is used as a condition. SinFusion is employed as the benchmark model for performance evaluation and comparison. All three of our models utilise a backbone ConvNext consisting of 16 improved blocks, as depicted in Figure <ref>. To ensure fair comparison, the benchmark model also employs 16 blocks but lacks pose embedding and mask regulation. Additionally, our models feature several key distinctions: 1) When masks serve as conditions (in Ours-Mask-Pose and Ours-Mask models), they are subjected to regulation via the proposed SPADE module, as illustrated in Figure <ref>. 2) In instances where robot poses are employed as conditions (in Ours-Mask-Pose and Ours-Pose models), the difference in robot pose between two frames is embedded and integrated into the model, as depicted in Figure <ref>. Table <ref> presents some example data of the robot pose used for embedding. Notably, these data are retrieved from ROS, simplifying the access to robot pose information. All models were trained on a single Nvidia A100 GPU. For our models, the loss function in Equation (<ref>) is used for training, while SinFusion training employed Equation (<ref>). Training durations varied among the models: the Ours-Mask-Pose model required approximately 6.2 hours, Ours-Mask took around 5.7 hours, and Ours-Pose took approximately 3.75 hours. In comparison, the benchmark model SinFusion required approximately 3.72 hours for training. §.§ Evaluation Metrics Three metrics are employed to assess model performance. The Structural Similarity Index (SSIM) is utilised to evaluate frame generation quality across different models. SSIM is preferred over PSNR (Peak Signal-to-Noise Ratio) for two main reasons: Firstly, SSIM measures image similarity in terms of structural information, luminance, and contrast, providing a more comprehensive assessment compared to PSNR, which solely quantifies reconstruction quality by comparing pixel values between original and generated frames. Secondly, as our focus is on retaining the shape and location of objects in generated frames, SSIM offers a more relevant comparison metric since shape and location information is assessed at a structural level rather than at the pixel level. Shape-retention performance is evaluated by comparing the Hu moments of the i-th original frame 𝐦_orig^i with those of the i-th generated frame 𝐦_gen^i. Hu moments are seven real-valued descriptors chosen for their ability to capture essential shape properties of an object of interest. These moments offer a concise representation of shape features, encompassing characteristics such as orientation, scale, and skewness <cit.>. Equation (<ref>) is utilised to quantify the shape-retaining performance of diffusion models compared to the original video. 
The output of Equation (<ref>) indicates the dissimilarity between shapes in the generated frames and their corresponding original frames, with smaller values suggesting greater similarity. More information about Hu moments can be found in the https://stummuac-my.sharepoint.com/:b:/g/personal/55141653_ad_mmu_ac_uk/EfeilbajY9ZAvnlgC9egMysBxTxxkrnkdymiOv1tR1taVA?e=IvBenjSupplemental Materials. d^i = √(∑_j=1^7(𝐌_orig^i[j] - 𝐌_gen^i[j])^2), where d^i is the similarity between shapes of interest in the i-th original and generated frames, and 𝐌_orig^i[j] and 𝐌_gen^i[j] represent the j-th Hu moments of the i-th original and generated frames, respectively. The Intersection over Union (IoU) metric is utilised to assess the model's performance in retaining the robot's location. Rather than directly determining the precise location of the robot, we employ Equation (<ref>) to calculate the IoU between the masks of the robot in the i-th original frame 𝐦_orig^i and the mask of the robot in the i-th generated frame 𝐦_gen^i. This computation serves as an indicator of how effectively the location is preserved in the generated videos. IoU^i = 𝐦_orig^i ⋂𝐦_gen^i/𝐦_orig^i ⋃𝐦_gen^i §.§ Main Results To delve deeply into the impact of masks and poses on the performance of shape and location retention, we have first considered the masks of the two types of robots exclusively. Two sets of experiments were conducted, one using the Scene I datasets and the other using the Scene III datasets. The trained models from each dataset are employed to generate frames for evaluation. Figure <ref> displays some of the generated results from Scene III, with additional results available in the https://stummuac-my.sharepoint.com/personal/55141653_ad_mmu_ac_uk/_layouts/15/onedrive.aspx?id= Quantitative evaluation results using the three metrics are computed: 1) shape retention based on Equation (<ref>); 2) location retention based on Equation (<ref>); and 3) overall quality of generated frames based on SSIM. The SSIM results are depicted in Figure <ref>, while the shape and location retention results are illustrated in Figure <ref>. Regarding the overall quality of the generated frames, it can be observed from Figure <ref> that Ours-Pose achieves the best results, and Ours-Mask-Pose achieves comparable results, but both outperform the benchmark model. Regarding shape and location retention, it is evident from Figure <ref> that Ours-Mask-Pose achieves either the best or the second-best results in both aspects. Ours-Pose achieves comparable results with Ours-Mask-Pose in shape retention. In terms of location retention, Ours-Mask-Pose performs comparably with Ours-Pose and outperforms other models in both Scene I and Scene III. In conclusion, incorporating sole pose information or the combination of pose information with masks improves the performance of diffusion models compared to the benchmark model across all three metrics. However, considering only mask results does not always improve the performance compared to the benchmark models, which we assume is due to the exclusive use of robot masks. Further experiments are conducted to investigate this phenomenon. §.§ Ablation Study §.§.§ Considering Both Robot and Background Masks To further investigate the impact of masks on the generation results, additional experiments were conducted on Scene II and Scene III, using masks of both the robots and the backgrounds as conditions. 
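For reference, the three metrics used in this evaluation (SSIM, the Hu-moment shape distance d^i, and the mask IoU) can be computed with standard tooling. The sketch below relies on OpenCV and scikit-image, assumes single-channel binary robot masks, and runs on toy data; it is illustrative only and is not the authors' evaluation code.

import cv2
import numpy as np
from skimage.metrics import structural_similarity

def hu_shape_distance(mask_orig: np.ndarray, mask_gen: np.ndarray) -> float:
    # d^i = sqrt( sum_j (M_orig[j] - M_gen[j])^2 ) over the seven Hu moments of the masks.
    hu_o = cv2.HuMoments(cv2.moments(mask_orig.astype(np.uint8))).flatten()
    hu_g = cv2.HuMoments(cv2.moments(mask_gen.astype(np.uint8))).flatten()
    return float(np.sqrt(np.sum((hu_o - hu_g) ** 2)))

def mask_iou(mask_orig: np.ndarray, mask_gen: np.ndarray) -> float:
    # IoU between the robot masks of the original and generated frames.
    a, b = mask_orig.astype(bool), mask_gen.astype(bool)
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum() / union) if union > 0 else 1.0

def frame_ssim(frame_orig: np.ndarray, frame_gen: np.ndarray) -> float:
    # Structural similarity between an original and a generated RGB frame.
    return float(structural_similarity(frame_orig, frame_gen, channel_axis=-1, data_range=255))

orig = (np.random.rand(144, 256, 3) * 255).astype(np.uint8)   # toy stand-ins for real frames
gen = (np.random.rand(144, 256, 3) * 255).astype(np.uint8)
m_orig = np.zeros((144, 256), np.uint8); m_orig[40:100, 60:160] = 1
m_gen = np.zeros((144, 256), np.uint8); m_gen[45:105, 65:165] = 1
print(hu_shape_distance(m_orig, m_gen), mask_iou(m_orig, m_gen), frame_ssim(orig, gen))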
The SSIM results are presented in Figure <ref>, while the shape and location retention results are depicted in Figure <ref>. It is evident that by considering both robot and background masks, the quality of generated frames by Ours-Mask has improved in terms of SSIM. In Scene II, Ours-Mask achieves comparable results with Ours-Pose or Ours-Mask-Pose, and in Scene III, it either slightly outperforms Ours-Pose and Ours-Mask-Pose or achieves comparable results. Regarding shape and location retention, improvements are observed with Ours-Mask as well, as shown in Figure <ref>. However, Ours-Pose and Ours-Mask-Pose still outperform Ours-Mask in both shape and location retention in both scenes. §.§.§ The implication of Shape and Location Retention Some examples of shape and location retention of the robots are provided in Figure <ref>. In Scene I, Ours-Mask-Pose keeps the shape of the robot better compared to the benchmark model. In Scene II, similar results are observed and the robot arm is broken into two in the generated frame by the benchmark model. The location retention is shown in the results from Scene III, this can be recognised from the relative location of the robot and the wall highlighted. Considering all experiments across the three scenes, it can be concluded that masks and pose information contribute to retaining the structural information of generated frames. In the meantime, it is important to highlight that models incorporating robot pose embedding only have consistently achieved comparable results in terms of location retention to those incorporating mask regulation, albeit with shorter training times. However, considering robot and/or background masks helps to improve the performance in shape retention and SSIM, but normally needs a longer model training time. Regardless, better performance has been achieved by the proposed models compared to the benchmark model. §.§ Discusions Considering the performance of our models and the benchmark models against the metrics SSIM, IoU, and Hu shape similarity, this work has attempted to provide a solution to assess model performance in shape and location retention. Experimental results in different scenes of two types of robots show that taking robot pose information and mask (robot masks, or both robot and background masks) as conditions help to achieve significant improvements in all scenes against the metrics. We believe that shape and location retention in generated frames will benefit hazardous human-robot interaction detection in generating data for detection model training and beyond, which will be investigated in future works. § CONCLUSIONS This paper introduces diffusion models that leverage robot pose and masks as conditional inputs for video generation. The objective is to produce video frames that maintain high structural fidelity, thereby enhancing the preservation of the shape and location information of objects within the generated frames. Through a series of experiments conducted across three distinct scenes involving various robots, we consistently observed improvements in generation quality as measured by SSIM, as well as in the retention of shape and location evaluated using Hu moments and IoU. These advancements hold promise for applications where accurate depiction of robot shape and location is crucial. For instance, our models can generate data to facilitate accurate dangerous human-interaction detection training, which will help mitigate potential risks associated with human-robot interactions. IEEEtran
http://arxiv.org/abs/2407.02777v1
20240703031004
Hierarchical Large Scale Multirobot Path (Re)Planning
[ "Lishuo Pan", "Kevin Hsu", "Nora Ayanian" ]
cs.RO
[ "cs.RO" ]
Foster Adaptivity and Balance in Learning with Noisy Labels Mengmeng Sheng10000-0002-2011-8597 Zeren Sun1()0000-0001-6262-5338 Tao Chen10000-0001-8239-1698 Shuchao Pang10000-0002-5668-833X Yucheng Wang20000-0002-8290-3291 Yazhou Yao1()0000-0002-0337-9410 July 8, 2024 ====================================================================================================================================================================================================== empty empty § ABSTRACT We consider a large-scale multi-robot path planning problem in a cluttered environment. Our approach achieves real-time replanning by dividing the workspace into cells and utilizing a hierarchical planner. Specifically, multi-commodity flow-based high-level planners route robots through the cells to reduce congestion, while an anytime low-level planner computes collision-free paths for robots within each cell in parallel. Despite resulting in longer paths compared to the baseline multi-agent pathfinding algorithm, our method produces a solution with significant improvement in computation time. Specifically, we show empirical results of a 500-times speedup in computation time compared to the baseline multi-agent pathfinding approach on the environments we study. We account for the robot's embodiment and support non-stop execution when replanning continuously. We demonstrate the real-time performance of our algorithm with up to 142 robots in simulation, and a representative 32 physical Crazyflie nano-quadrotor experiment. § INTRODUCTION Large fleets of robots, such as those used in warehouse operations <cit.>, disaster response <cit.>, and delivery <cit.>, demand coordination solutions that adjust in real time to changing goals. In this work, we present a real-time lifelong hierarchical method for navigating a large team of robots to independent goals in a large, cluttered environment that guarantees collision avoidance. By lifelong, we mean robots can enter and exit the space, and can receive another goal at any time, as they would in a warehouse or delivery problem. Our approach partitions the space into disjoint cells, allowing planning algorithms to run concurrently in parallel within each cell. A high-level planner routes robots through the partition, while a low-level anytime multi-agent pathfinding (MAPF) algorithm navigates robots to local goals within each cell in parallel. The real-time property holds as long as there are not too many cells or robots in a workspace; the limits for real-time operation are empirical and problem-specific, however, we demonstrate real-time performance for 142 robots in simulation with a 25-cell partition. We are particularly interested in unmanned aerial vehicles (UAVs) operating in 3D space, such as in city-scale on-demand UAV package delivery, however, our approach applies to robots operating in 2D as well. We present two approaches for high-level planning depending on the problem's requirements: 1) an egocentric greedy approach that always operates in real-time and 2) a novel high-level planner that routes robots through the partition using multi-commodity flow (MCF) <cit.>. There are tradeoffs between these two approaches. The egocentric greedy planner operates in real-time regardless of the number of cells; however, it has no mechanism for distributing robots, thus it can result in congestion and longer low-level planning times within some cells. 
On the other hand, the MCF-based approach eases cell congestion by regulating the flow of robots into each cell while ensuring bounded-suboptimal inter-cell routing; thus, it can be useful in environments such as urban UAV package delivery, where different types of cells (e.g., residential vs. highway) may have different limits on the influx of robots. The MCF-based planners can operate in real-time under certain conditions, thus allowing for lifelong replanning while reducing congestion, which leads to faster, real-time low-level planning within each cell. The low-level planner ensures collision avoidance while respecting the robots' geometric shape. A cell-crossing protocol allows robots to transition between cells without stopping in midair. Combined with the MCF-based planner, this allows real-time computation and safe, non-stop execution of multi-robot plans. The contributions of this work are: * a hierarchical framework for large-scale multi-robot real-time coordination that significantly reduces computation time compared to the baseline MAPF solver, while resulting in a moderately suboptimal solution; and * novel multi-commodity flow-based high-level planners, MCF/OD and one-shot MCF, that reduce congestion by regulating the influx of robots to each cell. We demonstrate the algorithm in simulation with up to 142 robots and in physical robot experiments with 32 nano-quadrotors in cluttered environments, shown in Fig. <ref>. § RELATED WORK Centralized approaches to multi-robot planning <cit.> face substantial computational challenges due to the theoretical hardness of the problem <cit.>, prohibiting real-time replanning and scaling to many robot systems. Decentralized approaches, on the other hand, can result in deadlocks, livelocks, congestion, collision, and reduced efficiency. RLSS <cit.> uses fast single-robot planners but relies on additional optimization to provide collision-free trajectories. In cluttered environments, a buffered Voronoi cell based algorithm <cit.> can result in deadlocks, and an algorithm using relative safe flight corridor <cit.> leads to collisions. In the present work, we aim to address these problems and facilitate real-time MAPF at the discrete planning phase. Search-based MAPF solvers generate high-quality, collision-free discrete paths but the complexity for optimal solutions scales exponentially with the number of agents <cit.>. Bounded-suboptimal algorithms have been proposed to overcome this complexity, but their poor scalability still prevents their application to real-time coordination for large teams. To address this, partition-based MAPF <cit.> divides the workspace into smaller regions, reducing the agent number within each cell. However, since the high-level planners are either single-agent based <cit.> or solving a multi-commodity flow problem constrained on single-robot shortest paths <cit.>, they lead to congestion and hard MAPF instances. Instead, we propose a novel inter-cell routing algorithm to distribute robots while maintaining bounded-suboptimality. Furthermore, the cell-crossing method in <cit.> is unsuitable for aerial vehicles due to energy consumption while waiting in place for the cell-crossing channel to clear. Our cell-crossing protocol addresses this drawback. § PROBLEM FORMULATION Consider a time-varying number of homogeneous non-point robots operating in workspace 𝒲, which is partitioned into a union of non-overlapping convex polytopic cells. 
Robots must reach specified individual goal positions, which change over time, while avoiding collisions with robots and obstacles and obeying maximum cell influx limits θ (influx refers to the number of robots entering a workspace cell). A motivating scenario is a multi-UAV package delivery system where the number of UAVs entering some types of airspace must be limited, and UAVs exit or enter the workspace to charge or redeploy. The problem above requires solving a flow problem that obeys cell influx limits while handling new goal positions and a varying number of robots. While that is the general problem we aim to solve, the present work addresses a critical subproblem: the underlying planner that is repeatedly called to safely and efficiently route the robots through 𝒲. Consider N (fixed) homogeneous non-point robots operating in a workspace 𝒲 that is partitioned into a union of non-overlapping convex polytopic cells. 𝒲 contains obstacles represented as unions of convex polytopes 𝒪_1, ⋯, 𝒪_N_obs. Let ℛ_ℰ(𝐩) be the convex set of points representing a robot at position 𝐩, i.e., a robot-environment collision model. The free space is represented as ℱ = 𝒲\ (⋃_h𝒪_h) ⊖ℛ_ℰ(0), where ⊖ denotes Minkowski difference. For each robot r^i (superscript i represents robot index) with initial position 𝐬^i∈ℝ^3, find paths for all robots to their goal positions such that there are no collisions (e.g., between robots or between robots and ⋃_h𝒪_h) and the total number of robots that enter each cell is less than its user-defined influx θ_m (influx limits can vary by cell). § PRELIMINARIES §.§ Multi-agent pathfinding (MAPF) Consider an undirected graph 𝒢=(V, E), and N agents. Each agent has a start v^i_s∈ V and goal v^i_g∈ V vertex. At each time step k, an agent can either move to a neighbor vertex (u^i_k, u^i_k+1)∈ E or stay at its current vertex u^i_k+1 = u^i_k, where u^i_k∈ V is the k-th vertex in i-th agent's path. To respect vertex conflict constraints, no two agents can occupy the same vertex simultaneously, i.e., ∀ k, i≠ j : u^i_k≠ u^j_k. To respect edge conflict constraints, no two agents can traverse the same edge in the opposite direction concurrently, i.e., ∀ k, i≠ j: u^i_k≠ u^j_k+1∨u^i_k+1≠ u^j_k. The objective is to find conflict-free paths 𝒫^i=[u^i_0, ⋯, u^i_T-1], where u^i_0= v^i_s and u^i_T-1= v^i_g for all agents, and minimize cost, e.g., the sum over the time steps required to reach the goals of all agents or the makespan T. §.§ Conflict Annotation and MAPF with General Conflicts Many works address MAPF for embodied agents <cit.>. We adopt multi-agent pathfinding with generalized conflicts (MAPF/C), due to its flexibility in incorporating different geometric shapes <cit.>. Conflict annotation identifies the extended conflict set of vertices and edges for each vertex and edge with respect to the inter-robot collision model. To account for the downwash effect between robots <cit.>, we use a robot-robot collision model ℛ_ℛ(𝐩) that is distinct from ℛ_ℰ(𝐩). We follow the MAPF/C function definitions <cit.>: conVV(v)= {u ∈ V |. .ℛ_ℛ(pos(u)) ∩ℛ_ℛ(pos(v)) ≠∅} conEE(e)= {d ∈ E |ℛ_ℛ^*(d) ∩ℛ_ℛ^*(e) ≠∅} conEV(e)= {u ∈ V |. .ℛ_ℛ(pos(u)) ∩ℛ_ℛ^*(e) ≠∅}, where pos(u) ∈ℝ^3 returns the position for vertex u. ℛ_ℛ^*(e) is the set of points swept by the robot when traversing edge e. This is called the swept collision model. § GEOMETRIC PARTITIONING Our framework has three components: geometric partitioning, high-level planner, and low-level planner, as depicted in Fig. <ref>. 
Geometric partitioning divides the workspace into disjoint convex cells, where the plan can be computed in parallel. A centralized high-level planner regulates the congestion for each cell while guaranteeing robots' inter-cell routing quality. An anytime low-level planner plans collision-free paths for robots within each cell. Initial planning includes pre-computation (the geometric partitioning), and replanning only involves high- and low-level planning. Once the initial plan is established, our algorithm runs in real-time for replanning. The geometric partitioning of a bounded workspace consists of three steps: 1) roadmap generation, 2) graph partitioning and spatial linear separation, and 3) local goal generation. §.§ Roadmap Generation A roadmap is an undirected graph 𝒢=(V, E) embedded into a Euclidean space, where each vertex v∈ V corresponds to a position in ℱ and each edge (u, v) ∈ E denotes a path in ℱ connecting u and v. We additionally require the existence of vertices v^i_g and v^i_s, corresponding to the goal and start positions, 𝐠^i and 𝐬^i, respectfully. The roadmap should satisfy three properties: 1) connectivity-preserving, i.e., if a path between two points in ℱ exists, there should be a path in the roadmap as well; 2) optimality-preserving, i.e., the shortest path between two points in ℱ can be well approximated by a path in the roadmap; and 3) sparse, i.e., have a small number of vertices and edges. In our experiments, we use a 6-connected grid graph as roadmap, however, it can be generated by other methods, such as SPARS <cit.>. §.§ Graph Partitioning and Spatial Linear Separation Our method partitions the workspace into disjoint convex cells and within each cell solves a MAPF instance. Spatial decomposition has two benefits: 1) fewer robots for each smaller sub-graph, and 2) the decomposed MAPF instances can be solved in parallel. The initial partitioning can adopt any approach. We propose a geometric method that generates Q convex polytopes. First, we use graph partitioning (KaHyPar <cit.>) to group the roadmap into Q balanced sub-graphs 𝒢_m=(V_m, E_m), for m=1,…,Q. Balanced means the vertex number difference between sub-graphs is bounded, leading to cells of similar volume and an even spread of robots. Despite the sub-graphs now being balanced, we further enforce each cell, containing a sub-graph, to be a convex polytope. To do so, we use soft-margin support vector machines (SVM) <cit.> to compute the separating hyperplane H_ml between sets of vertices of 𝒢_m and 𝒢_l, and reassign misclassified vertices. The resulting set of separating hyperplanes form a cell. Convexity prevents robots from penetrating into neighboring regions while traversing within a cell, thus isolating cell planning. §.§ Local Goal Generation For navigating out of a cell, we generate candidate goal states on the faces between adjacent cells. For each cell P_m, we uniformly sample random local goals on the hyperplane H_ml and add them as shared vertices to both 𝒢_m and 𝒢_l. Despite being shared vertices, local goals generated by partition P_m have in-edges from P_m and out-edges to P_l to avoid collision during cell transit. Thus, this part of the graph is directed. To enable parallel computation, there must be no communication between cells. Thus, the cell roadmap 𝒢_m is modified such that the planned paths are collision-free when crossing cells without information exchange between cells. 
Thus, the following properties should be satisfied: P1: Robots avoid collision when stationary at vertices (local goals and non-local-goal vertices) of different cells (generalized vertex-vertex conflict across cells), i.e., ∀ i, j, m≠ l: v^i_m∉conVV(v^j_l), where the superscript in the v^i_m refers to the vertex index and the subscript refers to cell index. P2: Robots avoid collision when traversing edges within different cells (generalized edge-edge conflict across cells), i.e., ∀ i, k, m≠ l: e^i_m:=(v^i_m, v^j_m) ∉conEE(e^k_l), here the superscript in e^i_m refers to the edge index. P3: Robots avoid collision when one robot is stationary at a vertex while the other robot is traversing an edge of a different cell (generalized edge-vertex conflict across cells), i.e., ∀ i, k, m≠ l: e^i_m:=(v^i_m, v^j_m), v^k_l∉conEV(e^i_m). We depict violations of P1-P3 in Fig. <ref>b-<ref>d. To prevent conflicts between local goals and non-local-goal vertices across cells, we buffer the separating hyperplane by the inter-robot collision configuration, 𝒞_col (c.f. Fig. <ref>a), which is computed as 𝒞_col(𝐩) = ℛ_ℛ(𝐩) ⊕ℛ_ℛ(0), where ⊕ is the Minkowski sum. The buffering is achieved by modifying the offset ℋ_a^' = ℋ_a + max_𝐲∈𝒞_col(0)ℋ_𝐧·𝐲 of the hyperplane, where ℋ_a and ℋ_𝐧 are the offset and the normal vector of the hyperplane. Given the buffered hyperplanes, all non-local-goal vertices within the buffered region are removed, guaranteeing collision avoidance between local goals and non-local-goal vertices. The local goals sampled on the hyperplane are confined within the blue region in Fig. <ref>a to avoid vertex-vertex conflicts between local goals of different separating hyperplanes. The buffering satisfies P1 between non-local-goal and local-goal vertices across cells and P1, P2, and P3 between non-local-goal vertices across cells. To satisfy P1 between local goals across cells, we uniformly randomly sample points on the hyperplane and reject those that violate P1. For P2 and P3 between non-local-goal and local goal vertices, we add the connection between the sampled local goal to its neighbor non-local-goal vertices within a radius on both sides of the hyperplane. The local goal is removed if any connected edge violates P2 or P3 (see Fig. <ref>c, <ref>d). Otherwise, the edges are added to the cell roadmaps. Figure <ref> depicts an exemplar geometric partition. § MULTI-COMMODITY FLOW WITH OPTIMAL DETOUR Our hierarchical approach relies on high-level planning to 1) regulate cell congestion and 2) preserve the bounded-suboptimality of inter-cell routing solutions. Thus, our high-level planner simplifies MAPF instances within cells and leads to real-time replanning. We abstract the partition as a directed graph 𝒢_p=(V_p, E_p), where the vertices (nodes) represent cells and edges connect neighboring cells that share at least one face. Edges are weighted according to Euclidean distance between the cells' centers of mass. The high-level planner finds a routing 𝒰^i=[ P^i_s, ⋯, P^i_g] for each robot r^i, where P^i_s and P^i_g are its start and goal cell, satisfying: 1) the influx of cell m is under a user-defined value θ_m, and 2) cost(𝒰^i)≤ w_mcf· cost(𝒰^i,*). Here, w_mcf≥ 1 is a scalar representing the suboptimality bound for the routing solutions. g^i,* is the optimal inter-cell routing of robot r^i. The influx of a node represents the number of robots entering the cell; we define influx formally in the following section. 
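The set of inter-cell routings that satisfy the bounded-suboptimality requirement above can be enumerated directly on the partition graph 𝒢_p, for example with a k-shortest-paths search. The sketch below uses networkx on a small hypothetical partition; the graph, its edge weights, and the bound w_mcf = 2 are illustrative assumptions rather than values taken from the experiments.

import networkx as nx

# Hypothetical partition graph: nodes are cells, edges join face-adjacent cells,
# and weights are Euclidean distances between the cells' centres of mass.
G_p = nx.Graph()
G_p.add_weighted_edges_from([
    ("P1", "P2", 1.0), ("P2", "P3", 1.0), ("P3", "P4", 1.0),
    ("P4", "P1", 1.0), ("P1", "P3", 1.6),
])

def path_cost(G, path):
    return sum(G[u][v]["weight"] for u, v in zip(path, path[1:]))

def bounded_suboptimal_routings(G, start, goal, w_mcf=2.0):
    # Enumerate simple cell routings with cost <= w_mcf * optimal cost,
    # in order of increasing cost (Yen-style k-shortest paths).
    best = nx.shortest_path_length(G, start, goal, weight="weight")
    routings = []
    for path in nx.shortest_simple_paths(G, start, goal, weight="weight"):
        if path_cost(G, path) > w_mcf * best:
            break
        routings.append(path)
    return routings

for r in bounded_suboptimal_routings(G_p, "P1", "P3", w_mcf=2.0):
    print(r, path_cost(G_p, r))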
§.§ High-level Planning Formulation The SOTA partition-based MAPF solvers <cit.> suffer from cell congestion, leading to hard MAPF instances in certain cells, causing computational bottlenecks. To address this, we formulate the inter-cell routing as a variant of the MCF problem and propose multi-commodity flow with optimal detour (MCF/OD), which optimally distributes robots among cells such that the number of robots entering any intermediate cell m (not the start or goal cells of the robot teams) is under a user-defined value θ_m, if solution exists. Specifically, robots sharing the same start cell P_s and goal cell P_g are one commodity c_sg. The commodities set C={c_1, ⋯, c_O} includes all commodities given the robot positions. Solving the MCF problem results in optimal flows { y^*_sgml}, that is the number of robots in commodity c_sg∈ C traversing along edge e_ml∈ E_p. In our minimal influx MCF formulation, the optimal flow solutions lead to minimized intermediate cell influx and the most dispersed routing. The cell influx of P_l is defined as the total number of robots entering this cell, that is, ∑_c_sg, e_ml y_sgml. By constraining the flows y_sgml to the shortest paths between the start and goal cells SP_sg, as in (<ref>), (<ref>), we guarantee solution optimality. We formulate the minimal influx MCF problem as the following integer linear program (ILP): rCl'rCl'rCl _ {y_sgml} α   · ∑_c_sgℒ_sg + β · ℒ_in s.t.∑_e^ml,e^ln∈E_p y_sgml - y_sgln = |c_sg|, l = g -|c_sg|, l=s 0, o.w. , ∀c_sg ∈C ℒ_sg ≥y_sgml, ∀e_ml∈E_p, ∀c_sg ∈C ℒ_in ≥𝐈^⊤_v^i𝐲, ∀v^i ∈V_p y_sgml ∈[0, |c_sg|], ∀e_ml∈SP_sg, ∀c_sg ∈C y_sgml = 0, ∀e_ml∉SP_sg, ∀c_sg ∈C where ℒ_sg represents the maximum flow for commodity c_sg∈ C, and ℒ_in represents the maximum influx among all the cells. By minimizing both, the objective function penalizes the maximum congestion among all cells and disperses the flows related to one commodity. We weight ℒ_sg and ℒ_in with coefficients α and β, respectively, and set β≫α to prioritize minimizing ℒ_in. In (1d), 𝐈_v^i = [ I_y_1, ⋯, I_y_F], where y_f∈{y_sgml}, I_y_f∈{0, 1} is an indicator function that returns 1 when y_f has positive flow to an intermediate vertex v^i∈ V_p. 𝐲 is the vector of all flows. Despite the minimized influx to the intermediate cells, the minimal influx MCF formulation leads to congestion in certain cells due to tight constraints on the shortest paths (robots' shortest paths may intersect at certain cells). To detour the robots optimally, ensuring that the number of robots entering any cell m remains below its influx limit θ_m, we present a complete and optimal solver, MCF/OD, in Algo. <ref>. It maintains a conflict tree and resolves congestion iteratively. The function congestionDetection(𝒢_p, P.solution, θ) computes the influx for each cell in 𝒢_p, given the flow solution P.solution and returns the set of cells with their influx larger than the corresponding limit. The function getAllConflict(P) returns the set of all commodities passing congested cells. MCF/OD is complete on a locally finite graph. The cost of a conflict tree node equals the sum of the costs of the longest routing (without cycles) in all commodities. For each expansion, k-th shortest paths will be added to the commodity, which means the cost of the conflict tree is monotonically non-decreasing. For each pair of costs X < Y, the search will expand all nodes with cost X before it expands the node with cost Y. 
As the graph is locally finite, there are a finite number of routing with the same cost for each commodity. Thus, expanding nodes with cost X requires a finite number of iterations. To include an arbitrary combination of Ẑ unique edges of all commodities, the minimal cost of the conflict tree node is Z. Z is finite, as the worst-case scenario is to include all the cell routing within the suboptimality bound. Since we are considering a graph with well-defined edge weights and a finite number of commodities, the worst cost is finite. For a finite cost Z, because the conflict tree node cost is monotonically non-decreasing and only a finite number of nodes with the same cost exists, we can find arbitrary combinations of Ẑ unique edges in finite expansions. Thus, if a solution exists by including a combination of Ẑ unique edges in the MCF, the algorithm can find it within finite expansions. If all the unique edges have been added to the MCF solver and the optimization cannot find the solution that satisfies the user-defined influx limit, the problem is identified as unsolvable. MCF/OD is optimal. If a solution is found, it will have the lowest possible cost, i.e., the sum of the costs of the longest routing in all commodities will be minimized if a solution is found. MCF/OD is a best-first search. In each expansion, the k-th shortest path for the selected commodity is inserted. Thus, the cost of a descendant node is monotonically non-decreasing. Therefore, if a solution is found, it is the optimal solution w.r.t. the cost. MCF/OD can find the optimal detouring solution. However, the complexity is high due to solving an ILP in each expansion. To tackle many commodities in a large partition, we propose another efficient detour algorithm, one-shot MCF, which solves MCF once. One-shot MCF augments the shortest paths in (<ref>), (<ref>) to include all the w_mcf bounded-suboptimal paths for each commodity. We employ the k-th shortest path routing algorithm to find all the candidate paths. The proposed One-shot MCF is complete as it includes all bounded-suboptimal paths for each commodity. While it does not optimize for the routing length for all commodities, it optimizes for minimum influx. Intuitively, MCF/OD adds bounded-suboptimal paths iteratively to relax the constraints and terminates once all cell influx limits are satisfied. On the other hand, One-shot MCF adds all bounded-suboptimal paths at once and optimizes for the minimum influx, so it could result in unnecessary detour. In each high-level planning iteration, we run both MCF/OD and One-shot MCF in parallel. If MCF/OD times out, we use the solution generated from One-shot MCF. The high-level replanning happens every δ_h time interval. § LOW-LEVEL PLANNER Within each cell, the low-level planner, or the cell planner, computes collision-free paths that navigate robots to their local goals in an anytime fashion. The cell planner can be divided into three steps: 1) local goal assignment to determine the goal for each robot in a cell, 2) anytime MAPF/C to generate discrete paths, and 3) cell-crossing protocol for non-stop transiting between cells. §.§ Local Goal Selection At the beginning of cell planning, local goal selection aims to assign robots to the closest local goals while spreading out robots in an optimal manner by solving the following ILP: rCl'rCl'rCl _𝐀 ∑_ij A_ij ·D_ij + α∑_j u_j + βU s.t. 
∑_j A_ij = 1,  ∀i U ≥u_j ≥∑_iA_ij - 1,  ∀j, where A_ij∈{0,1} indicates if robot r^i is assigned to local goal lg^j, and D_ij is the Euclidean distance between r^i and lg^j. Auxiliary variables u_j in the objective function minimize the number of robots queueing at local goal lg^j, prioritizing filling less congested local goals first. Auxiliary variable U in the objective function minimizes the maximum number of robots waiting in queue among all the local goals. This leads to evenly routing robots to different local goals to reduce congestion. A local goal is occupied if assigned with at least one robot. Thus, the number of robots waiting in queue for a local goal lg^j is (∑_i A_ij -1). §.§ Anytime MAPF/C We adopt anytime MAPF, which iteratively improves solution quality until a solution is needed, to facilitate real-time replanning. We use the current SOTA anytime MAPF LNS <cit.>, which iteratively replans for a subset of robots and adapts to an improved solution if found. We use ECBS <cit.> as the initial planner as it provides a bounded-suboptimal solution and prioritized planning with SIPP <cit.> to rapidly iterate plans. We extend both ECBS and SIPP using MAPF/C to account for generalized conflicts. For priority planning with SIPP , we propose the following SIPP with generalized conflicts algorithm. §.§.§ SIPP with generalized conflicts SIPP compresses the time dimension into sparse safety intervals to significantly reduce the search space. The SIPP configuration augments the position with its safety intervals. In the resulting configuration space, A^* finds the shortest path for a robot. Planned robots are considered moving obstacles and modify the safety interval of the traversed states. We propose SIPP with generalized conflicts (SIPP/C) with the following modifications and a different getSuccessors(s) algorithm, where the highlighted part differs from the original algorithm. In SIPP/C, the collision intervals, the complements of safety intervals, are added to vertices and edges based on the following vertex-vertex, edge-edge, and vertex-edge conflicts: SIPP/C vertex-vertex conflict: for a robot at the vertex u_k in a planned path at time k, we add the collision interval [k,k] to vertices and edges that intersect with the robot-robot collision model at pos(u_k), i.e., ℛ_ℛ(pos(v)) ∩ℛ_ℛ(pos(u_k)) and ℛ^*_ℛ(e) ∩ℛ_ℛ(pos(u_k)), ∀ v,e. Note here, we use the swept model for ℛ^*_ℛ(e) in SIPP. SIPP/C edge-edge conflict: for a robot traversing an edge e_k:=(u_k, u_k+1) at time k, we add the collision interval [k, k] to vertices and edges that intersect with the robot's swept collision model along edge e_k, i.e., ℛ_ℛ(pos(v)) ∩ℛ^*_ℛ(e_k) and ℛ^*_ℛ(e) ∩ℛ^*_ℛ(e_k), ∀ v,e. SIPP/C edge-vertex conflict: for a robot to be stationary at the vertex u_k=u_k+1, we add the collision interval [k, k] to vertices and edges that intersect with the robot's swept collision model along edge e_k:=(u_k, u_k+1), i.e., ℛ_ℛ(pos(v)) ∩ℛ_ℛ(pos(u_k)) and ℛ^*_ℛ(e) ∩ℛ_ℛ(pos(u_k)), ∀ v,e. Note that this degenerates to vertex-vertex conflict. In Algo. <ref>, E(s) is the action space at state s. Given the discrete path of robot r^i, we assign a time t_k = kΔ t to each discrete timestep and obtain the path f^i. Δ t is a user-defined value to satisfy the robot's dynamic constraints. The low-level replanning happens every δ_l time interval. §.§ Cell-crossing Protocol A robot idles at a local goal if the path in the next cell is not yet computed. 
To allow non-stop execution, we propose a cell-crossing protocol that results in robots always having a planned path to execute, even when traversing between cells. We buffer the hyperplane H_ml by a distance d_e towards cell P_m. All robots within the buffer will compute paths for their next cell before leaving the current one. Buffering is achieved by changing the hyperplane offset to ℋ^'_a = ℋ_a - ℋ_𝐧· d_e. By enforcing the buffer distance d_e≥δ_l· V_max, where V_max is the maximum speed of the robot, the robot is guaranteed to have a plan computed at least once before leaving its current cell. A robot entering the buffer zone will then have a plan to exit its current cell and transition through its next cell. The robot concurrently computes plans for its current and next cell, then concatenates them to form a complete transition plan. The robot then fixes this plan to lock the local goal and expected arrival time to next cell. Thus, when computing the plan for the next cell, the robot's expected start time and position will be pre-determined and independent of cell planning order. Fig. <ref> depicts our cell-crossing protocol. § RESULTS AND DISCUSSION We now demonstrate the system in experiments on simulated and physical robots. For large-scale simulated robot experiments, we create a confined 3D space with random obstacles uniformly generated on a disk of radius 10m. To validate the algorithm's scalability, we scale up the number of robots and the corresponding workspace size to maintain the robot density in different experiment instances. The “Circle74” workspace is 20m× 20m× 8m. We generate 74 robots whose start states form a circle with a 10m radius at the height of 1m and are centered at the x-y plane's origin, depicted in Fig. <ref>a. In “Circle142” we scale x and y dimensions by √(N) to 27.7m× 27.7m× 8m. The robots start in concentric circles with 13.85m and 11.85m radii at 1m high, centered at the x-y plane's origin, shown in Fig. <ref>b. In “Demo32" we model the occupancy map of our cluttered lab environment. It is 12.55m× 7.63m× 2.8m. We run 32 Crazyflies with initial x-y positions uniformly on an ellipse at 1m high, shown in Fig. <ref>c. The goal states are the antipodal points on the circle (or ellipse). We construct the roadmap using a 6-connected grid graph with an edge length of 1.6m for large-scale simulation and 0.7m for the lab environment. For simplicity, we set the same influx limit θ_m= 20 for all cells in high-level planning and a suboptimal bound w_mcf = 2 for both MCF/OD and One-shot MCF algorithms. We set the high-level planning time interval δ_h = 5s for “Circle74”, δ_h = 3s for “Circle142”, and δ_h = 10s for “Demo32”. We set the low-level planning time interval δ_l = 1s. For the LNS planner with random neighborhood selection, we use ECBS as the initial planner and prioritized planning with SIPP as the iterative planner, both planners we extend with MAPF/C. To account for downwash, we use an axis-aligned bounding box to represent the inter-robot collision model ℛ_ℛ(0). In simulations, i.e., “Circle74” and “Circle142”, we use the axis-aligned bounding box from [-0.12m, -0.12m, -0.3m]^⊤ to [0.12m, 0.12m, 0.3m]^⊤. Since “Demo32” is more dense, we use the bounding box from [-0.12m, -0.12m, -0.2m]^⊤ to [0.12m, 0.12m, 0.2m]^⊤. We use the same shape representation for the robot-environment collision model ℛ_ℰ(0). All experiments run on an Intel i7-11800H CPU computer. Fig. <ref> depicts typical solutions of the proposed algorithm. 
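With the axis-aligned collision models above, the generalized vertex-vertex conflict test conVV(v) reduces to a box-overlap check, since two identical axis-aligned boxes intersect exactly when their centre offset lies inside the Minkowski sum 𝒞_col. The numpy sketch below uses the "Demo32" half-extents (0.12 m, 0.12 m, 0.2 m) and the 0.7 m lab roadmap spacing in a toy example; it is an illustrative check, not the authors' conflict-annotation code.

import numpy as np

R_HALF = np.array([0.12, 0.12, 0.20])   # half-extents of R_R(0); the larger z extent reflects downwash

def conflict_vv(p: np.ndarray, q: np.ndarray, half: np.ndarray = R_HALF) -> bool:
    # R_R(p) and R_R(q) are axis-aligned boxes with identical half-extents, so they intersect
    # iff |p_i - q_i| <= 2 * half_i on every axis (touching boxes are treated as conflicting).
    return bool(np.all(np.abs(p - q) <= 2.0 * half))

def annotate_conVV(vertex_pos: np.ndarray, positions: np.ndarray) -> np.ndarray:
    # Indices of roadmap vertices whose stationary robots would conflict with one at vertex_pos.
    hits = np.all(np.abs(positions - vertex_pos) <= 2.0 * R_HALF, axis=1)
    return np.flatnonzero(hits)

verts = np.array([
    [0.7, 0.7, 1.2],   # the query vertex itself
    [0.9, 0.7, 1.2],   # 0.20 m away in x: within 2*0.12 m, so in conflict
    [1.4, 0.7, 1.2],   # one 0.7 m roadmap edge away: no conflict
    [0.7, 0.7, 1.5],   # 0.30 m away in z: within 2*0.20 m, so in conflict
])
print(annotate_conVV(np.array([0.7, 0.7, 1.2]), verts))   # -> [0 1 3]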
We summarize quantitative results in Table <ref>, where MCF refers to MCF/OD and One-shot MCF running in parallel, as described in Sec. <ref>. Note that, as | V| and |E| suggest, the number of vertices and edges increases after partitioning as we add local goals and corresponding edges. With a small computational overhead t̅_high, the proposed high-level planner effectively reduces the congestion among cells compared to both greedy and partitionless baseline approaches, by inspecting N̅_max. Here t̅_high is the average high-level planning time and N̅_max is the averaged maximum number of robots in a cell throughout the whole execution. By increasing the number of cells Q, the algorithm significantly reduces MAPF computation time. Specifically, for the average low-level planning time t̅_low, in instance “Circle74", the proposed algorithm runs 616-times faster than the baseline method and 544-times faster in instance “Circle142". Both the average low-level replanning time t̅_low and the averaged maximum low-level replanning time t̅^max_low are within the real-time regime for all instances. Since we use an anytime algorithm, we only record its initial planning time in the Table, which can be directly compared to the baseline. As we would expect, while performing in real-time, the algorithm yields suboptimal solutions compared to the baseline MAPF solution, according to the average makespan T̅. Because the partitioning invalidates the global optimality of MAPF algorithms, and the high-level planner detours robots among the partition, lengthening planned paths. §.§ Effectiveness of Partition in Low-level Planning Fig. <ref>a shows a quantitative evaluation of the low-level planning time and its solution makespan when changing the number of cells in “Circle74". As a robust statistic, we measure the median low-level planning time. As the result suggests, the logarithm of low-level planning time decreases significantly as we increase the number of cells. The result indicates that by partitioning the workspace and parallelizing computation, the low-level planning time decreases significantly. Furthermore, the makespan increases at the beginning then plateaus as the number of cells increases. This is expected since as the number of cells increases, first the high-level planner is more effective in detouring the robots to reduce congestion. At a certain point, no more detour is required since congestion is regulated. Additionally, the partition breaks the global optimality of the MAPF planner, leading to degeneracy in solution quality. Our algorithm runs MAPF in parallel for all cells; in Fig. <ref>b, we investigate the utilization of multi-threading. The orange dashed curve represents the theoretical lower bound of the total low-level planning time (assuming no overhead is introduced when allocating the computation to different threads). We observe the algorithm computation time generally follows the same curve, while the gap between the theoretical lower bound and experimental computation time increases as we increase the number of threads, indicating an increase in overhead for parallel computation in more cells. §.§ Effectiveness of High-level Planning To demonstrate the effectiveness of our high-level planner, we compare the cell congestion of the proposed MCF-based algorithm to an egocentric greedy approach. The greedy planner outputs a single-robot-based shortest inter-cell routing without considering other robots' routing, and may lead to congestion in certain partitions. 
Additionally, we compare the MCF-based approach with the baseline by imposing the partition into the workspace. In Table. <ref>, we report the average high-level computation time t̅_high and the averaged maximum number of robots in a cell throughout the whole execution N̅_max. The complexity of a MAPF instance scales poorly with the number of robots. Thus, N̅_max is a good indicator of the computational hardness of a MAPF instance. The computation time for greedy planning is instant while the MCF-based methods have additional overhead that increases with the number of cells, since they are centralized. For all instances in this paper, MCF-based methods provide real-time solutions. The MCF-based methods reduce the congestion compared to greedy and baseline methods, according to N_max. Although the advantage to the baseline is minor in the current experiment setup, the MCF-based approach can further reduce the congestion by relaxing bounded-suboptimality constraints, i.e., increase w_mcf. For a fair comparison of low-level planning time, both greedy and MCF-based utilize the same number of threads in computation. Compared to the greedy approach, the proposed MCF-based high-level planning yields more efficient low-level planning, which brings computation time to real time while maintaining the solution quality, i.e., makespan. §.§ Scalability of the Proposed Algorithm To investigate the scalability of the proposed algorithm, we run simulations on increasing numbers of robots (c.f., Table. <ref>). Note that the proposed algorithm achieves real-time performance in all instances as we scale up the number of robots, by looking at the averaged maximum high-level and low-level replanning time t̅^max_high and t̅^max_low. Due to the hierarchical approach, the proposed algorithm can further scale to larger teams with more cells and CPU threads. The computation bottleneck is the centralized high-level planning in larger scale problems, which we aim to address in future work. §.§ Physical Robots Fig. <ref> shows a representative experiment with 32 physical Crazyflies in a cluttered environment (video link: <https://youtu.be/ftdWVpLkErs>). We use a Vicon motion capture system to localize and CrazySwarm <cit.> to control the robots. In the experiment, initial positions are uniformly spaced on an ellipse within the workspace, and the goal positions are the antipodal points on the ellipse. Three column obstacles are placed within the ellipse. The experiment demonstrates that the proposed algorithm distributes robots effectively through the workspace and achieves real-time replanning. § CONCLUSION AND FUTURE WORK We have introduced a hierarchical path planning algorithm for large-scale coordination tasks. Our algorithm significantly reduces the computation time and suits on-demand applications such as drone delivery despite yielding a suboptimal solution. The framework achieves real-time operation by dividing the workspace into disjoint convex cells; within each, an anytime MAPF planner computes collision-free paths in parallel. Our high-level planner regulates the congestion of regions while guaranteeing the routing quality. Additionally, our algorithm considers the robot's geometric shape constraints in continuous space, and we run experiments with the collision model of a quadrotor with downwash. We also devise a cell-crossing protocol, which guarantees the robot always has a plan, even when transiting between cells, and allows replanning in a continuous time domain. 
The proposed algorithm is designed for lifelong replanning. In our experiments, goals are chosen from a pre-determined set, however, it can be extended to a general lifelong replanning as we add and delete goals online. When new goals are requested, the algorithm can identify the new goal's cell, and run conflict annotation steps in parallel for each new goal. While the MCF-based high-level planner operates in real-time in our experiments with up to 142 robots, the limits of its real-time operation are not well defined and depend on the number and density of robots and cells in the space. Future work will explore distributed MCF <cit.> to increase scalability of the system with real-time, distributed operation regardless of the number of robots and cells. Furthermore, we aim to solve real time large scale motion planning, which respects the robot's kinodynamic constraints and plans in continuous space and time. § ACKNOWLEDGEMENTS The authors would like to thank Baskın Şenbaşlar, Eric Ewing, Yuan Yuan, Calvin Luo, Yutong Wang, Arjun Prakash for their help in revising the manuscript. IEEEtran
http://arxiv.org/abs/2407.02264v2
20240702134056
SOAF: Scene Occlusion-aware Neural Acoustic Field
[ "Huiyu Gao", "Jiahao Ma", "David Ahmedt-Aristizabal", "Chuong Nguyen", "Miaomiao Liu" ]
cs.CV
[ "cs.CV", "cs.SD", "eess.AS" ]
Footprints of Data in a Classifier Model: The Privacy Issues and Their Mitigation through Data Obfuscation Payel Sadhukhan* Tanujit Chakraborty Received: date / Accepted: date ========================================================================================================== § ABSTRACT This paper tackles the problem of novel view audio-visual synthesis along an arbitrary trajectory in an indoor scene, given the audio-video recordings from other known trajectories of the scene. Existing methods often overlook the effect of room geometry, particularly wall occlusion to sound propagation, making them less accurate in multi-room environments. In this work, we propose a new approach called Scene Occlusion-aware Acoustic Field (SOAF) for accurate sound generation. Our approach derives a prior for sound energy field using distance-aware parametric sound-propagation modelling and then transforms it based on scene transmittance learned from the input video. We extract features from the local acoustic field centred around the receiver using a Fibonacci Sphere to generate binaural audio for novel views with a direction-aware attention mechanism. Extensive experiments on the real dataset RWAVS and the synthetic dataset SoundSpaces demonstrate that our method outperforms previous state-of-the-art techniques in audio generation. Project page: <https://huiyu-gao.github.io/SOAF/>. § INTRODUCTION We live in a world with rich audio-visual multi-modal information. Audio-visual scene synthesis enables the generation of videos and corresponding audio along arbitrary novel camera trajectories based on a source video with its associated audio. This task involves reconstructing the audio-visual scene both visually and acoustically from recorded real-world source videos with binaural audio along known camera trajectories. Specifically, it entails synthesising the images a person would see and the sounds that a person would hear while navigating within the scene from any novel position and direction along an arbitrary camera trajectory. Neural Radiance Fields (NeRF) <cit.> has made significant progress in the field of computer vision over the past few years. NeRF uses Multi-Layer Perceptrons (MLP) to learn an implicit and continuous representation of the visual scene and synthesise novel view images through volume rendering. While NeRF has been extensively explored in the field <cit.>, these methods focus solely on the visual aspect of the input video, ignoring the accompanying audio track. However, the world we live in contains multi-modal information. Most videos we capture include not only visual images but also sound signals. Therefore, investigating novel view acoustic synthesis is crucial to providing more immersive experiences for users in various AV/VR applications. Recently, Neural Acoustic Field (NAF) <cit.> became the first work to explore the application of implicit representation in sound field encoding. Similar to NeRF, NAF uses an MLP to learn a continuous function of the neural acoustic field, optimising it by supervising the generated Room Impulse Response (RIR) in the time-frequency domain at different emitter-listener location pairs and view directions. For the audio-visual scene synthesis task, AV-NeRF <cit.> is the first multi-modal approach to address this problem. They utilise the vanilla NeRF <cit.> for novel view synthesis and integrate the rendered novel view image and depth map as visual and geometric cues into audio generation. 
While AV-NeRF <cit.> has demonstrated promising results using multi-modal data, it only renders a single image and depth map for the novel view, providing limited visual and geometric information about the scene. Moreover, as illustrated in Figure <ref>, previous methods <cit.> do not fully explore the geometry of the scene, particularly the occlusion caused by walls which affect sound propagation and can be extracted from input videos. In this work, we explicitly model the effects of room geometry and occlusions on spatial audio generation, enhancing the ability to model sound propagation in large scenes, especially those with multiple rooms and walls. More specifically, the sound energy attenuates over distance and is reflected off or absorbed by surfaces as it propagates through space <cit.>. The sound energy received at a 3D position is determined by the full 3D scene geometry. Therefore, we first learn a NeRF <cit.> to provide the implicit 3D scene geometry. To better model the sound field, we derive a scene occlusion-aware prior, termed the global acoustic field, based on distance-aware parametric sound propagation modelling centred at the sound source and transformed by the learned transmittance from NeRF. We then extract the feature from the local acoustic field around the receiver using a Fibonacci Sphere, followed by a direction-aware attention mechanism to obtain features. These derived priors and features are used to generate binaural audio at novel views, demonstrating superior performance. In summary, our contributions are as follows: (i) We adopt the transmittance reshaped global prior for sound energy, enabling us to explicitly model scene occlusion on audio generation. (ii) Our direction-aware attention mechanism effectively captures useful local features for binaural audio generation. Extensive experiments on synthetic and real datasets, such as SoundSpace and RWAVS, demonstrate the superior performance of our approach compared to existing methods. § RELATED WORK Neural Radiance Fields and Implicit Surface. NeRF <cit.> has emerged as a promising representation of scene appearance and has been widely used in novel view synthesis. Subsequent works <cit.> extend NeRF in various aspects, including faster training <cit.>, faster inference <cit.>, and handling in-the-wild images <cit.>. However, these methods struggle to extract high-quality surfaces due to insufficient surface constraints during optimisation. To solve this issue, NeuS <cit.> and VolSDF <cit.> propose utilising the signed distance function (SDF) as an implicit surface representation and develop new volume rendering methods to train neural SDF fields. Some following works like MonoSDF <cit.> and NeuRIS <cit.>, demonstrate the effectiveness of incorporating monocular depth priors <cit.> and normal priors <cit.> as additional geometric cues for learning implicit surface representation of indoor scenes from sequences of scene images. In our work, we adopt neural SDF fields for visual data synthesis which provides high-quality scene geometry for our occlusion-aware audio novel view synthesis. Acoustic Fields. The representation of spatial sound fields has been studied extensively. Previous methods have either directly approximated acoustic fields with handcrafted priors <cit.> or focused solely on modelling perceptual cues with a parametric representations <cit.>. However, these methods often rely on strong assumptions. 
In recent years, researchers have shifted towards learning sound fields directly from data using neural networks. For instance, NAF <cit.> is the first method to leverage implicit representation to learn a neural acoustic field of RIR via an MLP. Unlike previous methods <cit.> that capture scene acoustics with handcrafted parameterisations, implicit representation encodes the scene acoustics in a more generic manner, enabling application to arbitrary scenes. INRAS <cit.> extends NAF by learning disentangled features for the emitter, scene geometry, and listener with known room boundaries. It reuses scene-dependent features for arbitrary emitter-listener pairs to generate higher fidelity RIR. Although INRAS integrates the scene environment by calculating the relative positions of the emitter and listener to scene boundary points, it still struggles to fully model the effect of scene structure on sound propagation. In contrast, we propose to utilise the high-quality geometry obtained from the neural SDF fields and explore sound energy priors based on this geometry to enhance audio generation. Audio-visual Learning. Several recent works <cit.> have explored learning acoustic information from multimodal data information for different tasks, including sound localisation <cit.>, audio-visual navigation <cit.>, visual-acoustic matching <cit.>, dereverberation <cit.>, and audio separation <cit.>. For novel view acoustic synthesis, ViGAS <cit.> combines auditory and visual observation from one viewpoint to render the sound received at the target viewpoint, assuming the sound source in the environment is visible in the input image and limited to a few views for audio generation. NACF <cit.> integrates multiple acoustic contexts into audio scene representation and proposes a multi-scale energy decay criterion for supervising generated RIR. Few-shotRIR <cit.> introduces a transformer-based model to extract multimodal features from few-shot audio-visual observations and predicts RIR for the queried source-receiver pair with a decoder module. BEE <cit.> reconstructs audio from sparse audio-visual samples by integrating obtained visual feature volumes with audio clips through cross-attention and rendering sound with learned time-frequency transformations. AV-NeRF <cit.> synthesises novel view audio by leveraging visual features extracted from images rendered from the novel view. In contrast, our approach learns a neural SDF field from input videos to represent scene geometry and explicitly models the effect of scene structure on sound propagation for more realistic spatial audio generation. § TASK DEFINITION The task of audio-visual scene synthesis aims to synthesise visual frames and binaural audios for an arbitrary receiver (camera and binaural microphone) trajectory within a static environment E. To synthesise new binaural audio a_t^* and generate a novel view image I^*, this task utilises observations O = {O_1, O_2, …, O_N}, where O_i consists of a receiver pose 𝐩̂_rc= ( p_rc, d_rc) defined as the receiver position p_rc∈ℝ^3 and direction d_rc∈ℝ^3, a sound source position p_sr∈ℝ^3, mono-source audio a_s, recorded binaural audio a_t, and an image I. The goal is to generate output binaural audio a_t^* and novel view image I^* from a new receiver pose 𝐩̂_rc^* and source audio a_s^*. This process can be formulated as (a_t^*, I^*) = f(𝐩̂_rc^*, a_s^*| O, E), where f denotes the synthesis function. Similar to existing works <cit.>, the position of the sound source is assumed to be known in the environment E. 
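For clarity, the observation tuple O_i and the synthesis interface defined above can be written out as a small sketch. The class, field, and type names below are illustrative choices, not identifiers taken from any released code.

import numpy as np
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class Observation:
    """One recorded sample O_i along a known trajectory."""
    receiver_pos: np.ndarray   # p_rc, shape (3,)
    receiver_dir: np.ndarray   # d_rc, shape (3,)
    source_pos: np.ndarray     # p_sr, shape (3,)
    source_audio: np.ndarray   # a_s, mono waveform, shape (T,)
    target_audio: np.ndarray   # a_t, binaural waveform, shape (2, T)
    image: np.ndarray          # I, shape (H, W, 3)

# The learned synthesis function f maps a novel receiver pose and a source clip
# to binaural audio and a novel-view image, conditioned on the observations O.
SynthesisFn = Callable[[np.ndarray, np.ndarray, np.ndarray],
                       Tuple[np.ndarray, np.ndarray]]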
Literature commonly adopts two strategies to present the synthesis function, such as acoustic mask <cit.> and Room Impulse Response (RIR) <cit.>. In this paper, our main focus lies not in network design but in introducing the geometry prior to the input. Our design can be applied to model both synthesis functions. In Section <ref>, we present our approach to predict the acoustic mask; an alternative version of predicting the room impulse response is provided in the supplementary material. § METHOD We first introduce the acoustic-mask based audio synthesis function and provide an overview of our pipeline in Section <ref>. Then, we include novel view visual feature extraction in Section <ref>, details of our main contribution: global-local acoustic field generation in Section <ref>, and the direction-aware attention mechanism in Section <ref>. At last, we present the learning objective in Section <ref>. §.§ Overview Acoustic Mask. We adopt the acoustic mask-based synthesis function introduced in AV-NeRF <cit.> for binaural audio prediction. Specifically, the acoustic mask consists of m_m, m^l_d, m^r_d ∈ℝ^F × W, where F represents the frequency bins and W is the number of time frames. m_m captures changes in audio magnitude at the receiver position p_rc relative to the sound source position p_sr, m^l_d and m^r_d characterise the changes for left and right channels of the binaural audio. Given the Short-Time Fourier Transform (STFT) of the input audio clip a^*_s, defined as s^*_s = STFT(a^*_s) and predicted acoustic masks m_m, m^l_d, m^r_d, we can synthesise the changed magnitude of the binaural audio as s_m^* = s^*_s ⊙ m_m, s_l^* = s_m^* + s_m^* ⊙ m^l_d, s_r^* = s_m^* + s_m^* ⊙ m^r_d where ⊙ denotes element-wise multiplication operation, s_l^* and s_r^* represent the magnitude of the synthesised left and right channel of the audio, respectively. Finally, we can obtain the binaural audio as a_t^* = [ISTFT( s_l^*), ISTFT( s_r^*)] where ISTFT denotes the inverse STFT, s_l^* and s_r^* are for left and right channel, respectively. An overview of our work is shown in Figure <ref>. Our framework consists of a NeRF for geometry and novel view synthesis to extract visual and local geometric features, then builds the global acoustic field and local acoustic field for obtaining the audio feature for masks prediction. We provide details below. §.§ Novel-view Visual Feature Extraction Similar to AV-NeRF <cit.>, we learn a NeRF from the input image sequence. In particular, we adopt SDFStudio <cit.> which parameterised the radiance field by a signed distance function (SDF), enabling us to obtain scene geometry of better quality than AV-NeRF <cit.> and the occlusions in the scene. We can render a single image and depth map at the receiver location and the novel view, capturing the visual information of the novel view. We extract the feature F_vis from the rendered image and depth using a simple encoder locally which is then used for mask prediction. §.§ Global and Local Acoustic Field Global Acoustic Field describes sound waves radiating from a central sound source, considering (i) distance-aware energy attenuation and (ii) absorption by occlusion. To generate these waves, we place the sound source at the centre of a sphere and uniformly sample K points on the sphere's surface using the Fibonacci Sphere sampling <cit.>. We then emit rays from the centre of the sphere through these points to obtain K rays. 
After that, we uniformly sample N points along each ray to have sampled points p_i, where i ∈{1, 2,…, N}. Each point p_i corresponds to sound energy E_i at that location. We refer to the sound energy distribution of sampled points generated by the sound source as the global acoustic field. Distance-aware. The sound energy at each sampled point along the ray decreases with increasing distance. The room acoustic rendering proposed by Siltanen et al. <cit.> analyses the time-dependent sound transport in a path tracing framework. It accounts for energy absorption and time delay due to propagation media and distance, introducing M( p_rc, p_sr, t) to quantify energy attenuation. We define d( p_rc, p_sr)= p_rc - p_sr_2. The process can be defined as M( p_rc, p_sr, t) = e^-σ d( p_rc, p_sr)δ( t - d( p_rc, p_sr)/c), where c is the speed of sound, σ is the absorption factor, and δ(t) is the Dirac delta function. We reformulated the equation by ignoring the time-delay component for now. The distance-aware part of sound energy absorption M( p_i, p_sr) at point p_i can be written as M( p_i, p_sr) = e^-σ d( p_i, p_sr). Figure <ref>.A describes the variation of M with increasing distance between p_i and p_sr. Occlusion-aware. In neural volume rendering <cit.>, visual transmittance handles occlusion. When rendering the colour of the pixel from the sampled points along the ray, points closer to the camera with high density α contribute more to the colour than farther ones. Drawing on this principle, we adapt this technique to address occlusion in sound propagation. Despite the fundamental differences between sound, a mechanical wave that can penetrate obstacles and light, an electromagnetic wave that cannot, our approach innovates upon the established methodology to enhance sound propagation modelling. Given the trained NeRF, it allows us to determine the density α_i of each point p_i in the 3D space. In order to allow sound waves to penetrate obstacles, we propose acoustic transmittance T - we multiply visual transmittance by the attenuation coefficient γ, which depends on the scene geometry. Combined with the distance-aware energy attenuation, the final sound energy E_i at each point p_i can be defined as: α_i = α_i ×γ, T_i = ∏_j=1^i(1 - α_j), E_i = M( p_i, p_sr) × T_i. Figure <ref>.B depicts sound waves from a source traversing seven surfaces along a target ray. Figure <ref>.A quantitatively captures sound energy, accounting for both propagation distance and obstructions. Figure <ref>.CD demonstrates the efficacy of our method in managing occlusions in sound propagation. Local Acoustic Field depicts the distribution of sound energy around the receiver. Inspired by the design of spherical microphone arrays <cit.>, we generate a Fibonacci Sphere around the receiver to collect the sound energy in the global acoustic field. Specifically, a Fibonacci Sphere centre is the centre of the local coordinate system with G points on the surface. Rays are emitted from the centre, passing through the sphere's surface, in the direction d_Fib∈ℝ^3 × G. H points are uniformly sampled along these rays within the range r_min to r_max. The sampled points are then transformed from the local to the world coordinate system by the receiver's pose. We use the nearest interpolation to extract sound energy for each sampled point from the global acoustic field. The local acoustic field is a G × H feature vector (with G rays, each having H sampled points containing interpolated wave energy). 
We compute a distance-weighted sum of wave energy along each ray, resulting in F'_ac∈ℝ^G, which is then input into the acoustic encoder to obtain F_ac. As shown in Figure <ref>, combining the feature F_ac predicted from the local acoustic field, F_vis and feature of the receiver location p_rc as F_agg, we can estimate the sound attenuation mask m_m at the receiver location from F_agg. §.§ Direction-aware Attention Mechanism Given the Local Acoustic Field, we propose a direction-aware attention mechanism to distinguish the left and right channel sound features to generate the binaural audio. Specifically, we calculate the similarity between the left or right channel directions d_l, d_r∈ℝ^3 with d_Fib to obtain the attention Atten_l, Atten_r∈ℝ^G for each channel. This attention is then combined with the local acoustic field to obtain binaural features. The process can be defined as: Atten_l = d_l^T d_Fib, F'_l = Atten_l^T ⊙ F'_ac, Atten_r = d_r^T d_Fib, F'_r = Atten_r^T ⊙ F'_ac, where ⊙ denotes element-wise multiplication. After further transformation of F'_l and F'_r to F_l and F_r, respectively, to align their dimension with F_agg, we combine F_agg with F_l or F_r separately to estimate m^l_d or m^r_d. Figure <ref> compares the local acoustic fields and binaural channel features for two receivers at different positions. The patterns of the two different receivers indicate their distinct directions, and the colour bar shows the differing sound intensities of their left and right channels. The directions and sound intensities of binaural channels are considered comprehensively in scene occlusion-aware sound propagation. §.§ Learning Objective Acoustic Mask. In the RWAVS dataset, we predict the m_m and m^l_d, m^r_d and obtain the predicted magnitudes s^*_m, s^*_l, s^*_r via Equation <ref>. Following the approach in <cit.>, we optimise the network with the following loss function: ℒ_A = ‖ s_m - s^*_m ‖^2 + ‖ s_l - s^*_l ‖^2 + ‖ s_r - s^*_r ‖^2, where s_m, s_l and s_r denote the ground-truth magnitudes, corresponding to the mixture, left, and right channels, respectively. The mixture s_m is defined as the average of s_l and s_r. The first term of ℒ_A encourages the network to predict the masks reflecting spatial effects caused by distance and geometry-occlusion. The second and third terms encourage the network to generate masks that capture differences between the binaural channels. § EXPERIMENTS §.§ Datasets, Baselines & Metrics Datasets. We evaluate our method on the real-world RWAVS and the synthetic SoundSpaces datasets. RWAVS dataset. The Real-World Audio-Visual Scene (RWAVS) dataset is collected by the authors of AV-NeRF <cit.> from diverse real-world scenarios, divided into four categories: office, house, apartment, and outdoor environments. Specifically, the indoor scenes have single-room layouts in the office category, while multi-room layouts are present in the house and apartment categories. To capture various acoustic and visual signals along different camera trajectories, the data collector moved randomly through the environment while holding the recording device. For each scene, RWAVS contains multimodal data including source audio, collected high-quality binaural audio, video, and camera poses, ranging from 10 to 25 minutes. Camera positions are densely distributed throughout the scene, and camera directions are sufficiently diverse. 
For a fair comparison, we maintain the same training/test split as <cit.>, which contains 9,850 samples for training and 2,469 samples for testing, respectively. SoundSpaces dataset. SoundSpaces <cit.> is a synthetic dataset simulated based on hybrid sound propagation methods <cit.> that simulates fine-grained acoustic properties by simultaneously considering the effects of room geometry and surface materials on sound propagation in a 3D environment. Following the approach in <cit.>, we validate our method on the same six representative indoor scenes, including two single rooms with rectangular walls, two single rooms with non-rectangular walls, and two multi-room layouts. For each scene, SoundSpaces provides binaural impulse responses for extensive emitter and receiver pairs sampled within the room at a fixed height from four different head orientations (0, 90, 180, and 270). To validate the effectiveness of our approach on this dataset, we modify our model to estimate binaural impulse responses instead of acoustic masks while keeping all other components unchanged. We keep the same training/test split as previous works <cit.> by using 90% data for training and 10% data for testing. Baselines. We compare our approach with state-of-the-art methods <cit.> that also learn a neural acoustic field with implicit representation. Among these methods, NAF <cit.> learns audio signals with a trainable local feature grid while INRAS <cit.> disentangles scene-dependent features from audio signals and reuses them for all emitter-listener pairs. ViGAS <cit.> and AV-NeRF <cit.> are multimodal approaches that leverage the visual feature of a single image for audio generation. For the RWAVS dataset, we include three additional baselines for reference: Mono-Mono, Mono-Energy, and Stereo-Energy. Mono-Mono simply repeats the source audio twice to achieve a binaural effect. Mono-Energy scales the energy of the source audio to match the average energy of the ground truth target audio then duplicates it to obtain a binaural audio. Stereo-Energy first duplicates the source audio and then scales the two channels separately to match the energy of each channel of the ground truth target audio. For the SoundSpaces dataset, we also compare our model with the linear and nearest neighbour interpolation results of two widely used audio coding methods: Advanced Audio Coding (AAC) <cit.> and Xiph Opus <cit.>. All these methods are evaluated with the same train/test split for each dataset. Metrics. Following AV-NeRF <cit.>, we utilise the magnitude distance (MAG) <cit.> and envelope distance (ENV) <cit.> as evaluation metrics on the RWAVS dataset. MAG measures the audio quality of the generated sound in the time-frequency domain after applying the Short-Time Fourier Transform (STFT), while ENV measures it in the time domain. On the SoundSpaces dataset, we follow NAF <cit.> to evaluate our method with a) the spectral loss, which is the magnitude distance between the generated and the ground truth log-spectrogram, and b) the T60 error, which describes the percentage error between the time it takes for the synthesised RIR to decay by 60 dB in the time domain with the ground truth T60 reverberation time. For all of these metrics, lower is better. The detailed definitions of these metrics are provided in the supplementary materials. §.§ Results & Ablation study We present the quantitative experimental results on the RWAVS dataset in Table <ref>. Our model consistently outperforms all baselines across all environments. 
Specifically, we achieve an overall 6.2% and 22.9% reduction in the MAG metric compared to previous state-of-the-art audio-visual methods AV-NeRF <cit.> and ViGAS <cit.>, respectively. This demonstrates that our approach can extract more comprehensive environmental information from visual inputs and efficiently integrate it into the neural acoustic field modelling. Table <ref> provides the quantitative results on the Soundsapces dataset. Compared to all previous methods, our approach achieves the best performance in both spectral loss (Spec) and T60 percentage error for all scenes, especially for large indoor scenes with multi-room layouts. In particular, we obtain an overall 20.7% reduction in T60 error across all scenes, and an average 26.5% reduction on multi-room scenes Large 1 and Large 2. This greater improvement in multi-room scenes further validates the effectiveness of our global-local acoustic field in modelling sound propagation in complex scene layouts with occlusions. An example of a visual comparison of rendered audio is shown in Figure <ref>. More implementation details are in the supplementary materials. Please refer to the https://huiyu-gao.github.io/SOAF/project page for visualization. Ablation Study of Proposed Components. We conducted an ablation study based on AV-NeRF on the RWAVS dataset. In Table <ref>, “w/o geo, dir” uses AV-NeRF's default input (visual features, sound source, receiver location, and their relative direction). “w/o dir” only adds our global-local acoustic field (geo), showing improvements in all scenarios, especially in occluded multi-room settings, demonstrating the effectiveness of our spatial geometric prior. “Ours - w/o dir” uses AV-NeRF's relative direction information and geo, while “Ours - full” incorporates all proposed modules, achieving the best performance by considering both the relative direction and occlusions. Robustness to Scene Reconstruction Quality. Figure <ref> illustrates the impact of scene geometry on audio synthesis. Even with worsening reconstructions (Acc from 0.01 to 6.5), performance remains stable, indicating robustness to geometric errors. Yet, significant degradation (Acc from 6.5 to 14), as shown on the Figure <ref>.B.c, reveals that inaccuracies in fundamental structures like walls can affect wave propagation, with incorrect layouts resulting in a noisy acoustic field. § DISCUSSION Limitation and Future work. (i) This method still relies on knowing the sound source position. (ii) Our proposed method does not model sound reflection and reverberation. (iii) Exploring sound sources with specific directional attributes could be a focus for future research. Societal impact. Realistic reconstructions of audio-visual scenes enhance immersive experience in AR/VR games, but it may encourage players to spend more time on games. In addition, the spatial audio generated by our method might be susceptible to misuse for undesirable spoofing events. Conclusion. In this work, we introduced a global-local acoustic field that significantly enhances audio-visual scene synthesis by incorporating room geometry and occlusions into sound propagation modelling. The proposed direction-aware attention mechanism delivers more accurate and realistic binaural audio in complex environments. Tested on both the RWAVS and SoundSpaces datasets, our approach enhances immersive experiences in augmented and virtual reality applications and shows promise for further advancements across various settings. 
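As a concrete illustration of the global-local acoustic field and the direction-aware attention described in the Method section, a minimal NumPy sketch is given below. It is not the authors' implementation: the density query density_fn, the ray length t_max, the inverse-distance weighting used for F'_ac, and all function names are assumptions of this sketch; the defaults for the absorption factor σ and attenuation coefficient γ follow the RWAVS values reported in the supplementary implementation details.

import numpy as np

def fibonacci_sphere(k):
    """k approximately uniform unit directions on a sphere (Fibonacci sampling)."""
    i = np.arange(k)
    golden = np.pi * (3.0 - np.sqrt(5.0))              # golden angle
    y = 1.0 - 2.0 * (i + 0.5) / k
    r = np.sqrt(np.maximum(0.0, 1.0 - y * y))
    theta = golden * i
    return np.stack([r * np.cos(theta), y, r * np.sin(theta)], axis=1)   # (k, 3)

def global_acoustic_field(p_src, density_fn, k=1024, n=64, t_max=10.0,
                          sigma=2.5, gamma=0.01):
    """Sound energy E_i at n samples along k rays emitted from the source.

    density_fn maps (m, 3) points to per-point opacities alpha in [0, 1];
    it is assumed to be queryable from the trained geometry network.
    """
    dirs = fibonacci_sphere(k)                                           # (k, 3)
    t = np.linspace(0.0, t_max, n)                                       # ray depths
    pts = p_src[None, None, :] + t[None, :, None] * dirs[:, None, :]     # (k, n, 3)
    d = np.linalg.norm(pts - p_src, axis=-1)                             # (k, n)
    m_att = np.exp(-sigma * d)                # distance-aware attenuation M(p_i, p_sr)
    alpha = density_fn(pts.reshape(-1, 3)).reshape(k, n)
    trans = np.cumprod(1.0 - gamma * alpha, axis=1)    # acoustic transmittance T_i
    return pts, m_att * trans                          # sampled points and energies E_i

def direction_aware_features(e_local, ray_dirs, d_left, d_right, sample_dists):
    """Per-ray feature F'_ac and direction-aware binaural features F'_l, F'_r.

    e_local      : (G, H) energy interpolated from the global field around the receiver
    ray_dirs     : (G, 3) unit Fibonacci-sphere directions d_Fib in world coordinates
    d_left/right : (3,) unit directions of the left / right channels
    sample_dists : (H,) sample radii in [r_min, r_max]; the inverse-distance weights
                   below are an assumption, the text only states a distance-weighted sum.
    """
    w = 1.0 / (sample_dists + 1e-6)
    w = w / w.sum()
    f_ac = e_local @ w                  # (G,)  F'_ac
    att_l = ray_dirs @ d_left           # (G,)  Atten_l = d_l^T d_Fib
    att_r = ray_dirs @ d_right
    return f_ac, att_l * f_ac, att_r * f_ac      # F'_ac, F'_l, F'_r

The returned F'_ac, F'_l and F'_r would then be encoded and combined with the visual feature and the receiver position to predict the masks m_m, m^l_d and m^r_d, as described in the Method section.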
§ PREDICT ROOM IMPULSE RESPONSE AND LEARNING OBJECTIVE Given an accurate model of the impulse-response for the left and right channel, 𝐫𝐢𝐫_l( p_sr, p_rc^*) and 𝐫𝐢𝐫_r( p_sr, p_rc^*), we may model audio reverberation of a_s emitted at p_sr by computing the response, namely, the binaural audio, (a_tl, a_tr), at the receiver location p_rc^* by querying the continuous field and using temporal convolution: a_tl^* = a_s ⊗ rir_l( p_sr, p_rc^*), a_tr^* = a_s ⊗ rir_r( p_sr, p_rc^*) The synthesis dataset Soundspace <cit.>, along with concurrent real-world dataset <cit.> are proposed to collect the Room Impulse Response for the scene. Learning Objective. In the SoundSpace dataset, we predict Room Impulse Response across spectrogram time and frequency coordinates. We optimize the network via the spectral loss: ℒ_R = Spec(𝐦_prd, 𝐦_gt) 𝐦_gt = STFT( rir( p_sr, p_rc^*)), 𝐦_prd = STFT( rir( p_sr, p_rc^*)), where Spec(𝐦_prd, 𝐦_gt) is the spectral loss function <ref>, rir( p_sr, p_rc^*) is the ground truth RIR, r̂îr̂( p_sr, p_rc^*) is the predicted RIR and p_sr, p_rc^* are sound source position and synthesized receiver position respectively. § IMPLEMENTATION DETAILS Our model is implemented using PyTorch <cit.> and optimized using the Adam <cit.> optimizer, with hyperparameters β_1 = 0.9 and β_2 = 0.999. The initial learning rate is set to 5 × 10^-4 and is exponentially decreased to 5 × 10^-6. The training process spans 200 epochs, with a batch size of 32. In the RWAVS dataset, the absorption factor is σ = 2.5 and the attenuation coefficient is γ = 0.01. In the SoundSpace dataset, these values are σ = 0.166 and γ = 0.66. When building the Fibonacci Sphere, we set G=1024 (uniformly generating 1024 rays around the sphere center) and H=10 (uniformly sampling 10 points along each ray). In the RWAVS dataset, r_min = 0.001 and r_max = 0.03 and in the SoundSpace dataset, r_min = 0.1 and r_max = 0.8. The parameters, including the absorption factor σ, attenuation coefficient γ, and the characteristics of the Fibonacci Sphere, are determined based on the range and specific features of the scene. All experiments are conducted on an RTX 4090 GPU. For the RWAVS dataset, each scene is trained for 20 minutes, whereas for the SoundSpace dataset, each scene is trained for 25 hours. § EVALUATION METRICS RWAVS dataset. We follow AV-NeRF <cit.> to select the magnitude distance (MAG) <cit.> and envelope distance (ENV) <cit.> as evaluation metrics on the RWAVS dataset. The MAG metric is defined as MAG(𝐦_prd, 𝐦_gt) = ||𝐦_prd - 𝐦_gt||^2 , where 𝐦_prd and 𝐦_gt are the predicted and ground truth magnitude after applying the Short-Time Fourier Transform (STFT), respectively. The ENV metric is defined as ENV(a_prd, a_gt) = ||hilbert(a_prd) - hilbert(a_gt)||^2 , where a_prd is the predicted audio, a_gt is the ground truth audio, and hilbert is the Hilbert transformation function <cit.>. SoundSpaces dataset. We follow NAF <cit.> to use the spectral loss and T60 percentage error as evaluation metrics on the SoundSpaces dataset. The spectral loss measures the log-magnitude distance by Spec(𝐦_prd, 𝐦_gt) = |log(𝐦_prd) - log(𝐦_gt)| , where 𝐦_prd and 𝐦_gt are the predicted and ground truth magnitude after applying STFT, respectively. The T60 reverberation time is the time it takes for a sound to decay by 60 dB. We calculate the T60 percentage error by T60(a_prd, a_gt) = |T60(a_prd) - T60(a_gt)|/T60(a_gt), where a_prd is the predicted impulse response and a_gt is the ground truth impulse response. 
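For reference, the evaluation metrics defined above can be sketched as follows. The STFT parameters, the mean (rather than sum) reduction, and the Schroeder backward-integration estimator used for T60 are assumptions of this sketch; the exact implementations behind the reported numbers may differ.

import numpy as np
import librosa
from scipy.signal import hilbert

def mag_distance(a_prd, a_gt, n_fft=512, hop=128):
    """MAG: distance between STFT magnitudes (mean reduction used here)."""
    m_prd = np.abs(librosa.stft(a_prd, n_fft=n_fft, hop_length=hop))
    m_gt = np.abs(librosa.stft(a_gt, n_fft=n_fft, hop_length=hop))
    return float(np.mean((m_prd - m_gt) ** 2))

def env_distance(a_prd, a_gt):
    """ENV: distance between the Hilbert envelopes of the two waveforms."""
    return float(np.mean((np.abs(hilbert(a_prd)) - np.abs(hilbert(a_gt))) ** 2))

def t60_percentage_error(rir_prd, rir_gt, sr=22050, decay_db=60.0):
    """T60 percentage error via Schroeder backward integration (a standard estimator)."""
    def t60(h):
        edc = np.cumsum((h ** 2)[::-1])[::-1]                  # energy decay curve
        edc_db = 10.0 * np.log10(edc / (edc[0] + 1e-12) + 1e-12)
        below = np.where(edc_db <= -decay_db)[0]
        return (below[0] if len(below) else len(h)) / sr       # time to decay by 60 dB
    t_prd, t_gt = t60(rir_prd), t60(rir_gt)
    return abs(t_prd - t_gt) / (t_gt + 1e-12)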
§ NEURIPS PAPER CHECKLIST * Claims Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? Answer: Justification: Yes, we accurately claim our paper's contributions and scope in the abstract and introduction section. These claims are demonstrated in our methodology and experiments sections. Guidelines: * The answer NA means that the abstract and introduction do not include the claims made in the paper. * The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers. * The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings. * It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper. * Limitations Question: Does the paper discuss the limitations of the work performed by the authors? Answer: Justification: We introduce the limitations of this work in the discussion section. Guidelines: * The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper. * The authors are encouraged to create a separate "Limitations" section in their paper. * The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be. * The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated. * The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon. * The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size. * If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness. * While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations. * Theory Assumptions and Proofs Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof? Answer: Justification: This paper does not include theoretical results and only includes experimental results. Guidelines: * The answer NA means that the paper does not include theoretical results. 
* All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced. * All assumptions should be clearly stated or referenced in the statement of any theorems. * The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition. * Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material. * Theorems and Lemmas that the proof relies upon should be properly referenced. * Experimental Result Reproducibility Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)? Answer: Justification: We include all the information needed to reproduce the main experimental results in the method section, experiment section, and supplementary material. Guidelines: * The answer NA means that the paper does not include experiments. * If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not. * If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable. * Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general. releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed. * While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example * If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm. * If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully. * If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset). * We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results. 
* Open access to data and code Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material? Answer: Justification: We will release all data and code after the paper is accepted. Guidelines: * The answer NA means that paper does not include experiments requiring code. * Please see the NeurIPS code and data submission guidelines (<https://nips.cc/public/guides/CodeSubmissionPolicy>) for more details. * While we encourage the release of code and data, we understand that this might not be possible, so “No” is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark). * The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines (<https://nips.cc/public/guides/CodeSubmissionPolicy>) for more details. * The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc. * The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why. * At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable). * Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted. * Experimental Setting/Details Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results? Answer: Justification: We include the experimental details in the main paper and the supplementary material. Guidelines: * The answer NA means that the paper does not include experiments. * The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them. * The full details can be provided either with the code, in appendix, or as supplemental material. * Experiment Statistical Significance Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments? Answer: Justification: This paper does not report error bars. Guidelines: * The answer NA means that the paper does not include experiments. * The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper. * The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions). * The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.) * The assumptions made should be given (e.g., Normally distributed errors). * It should be clear whether the error bar is the standard deviation or the standard error of the mean. * It is OK to report 1-sigma error bars, but one should state it. 
The authors should preferably report a 2-sigma error bar than state that they have a 96% CI, if the hypothesis of Normality of errors is not verified. * For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates). * If error bars are reported in tables or plots, The authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text. * Experiments Compute Resources Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments? Answer: Justification: We include the computer resources used for this work in the supplementary material. Guidelines: * The answer NA means that the paper does not include experiments. * The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage. * The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute. * The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper). * Code Of Ethics Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics <https://neurips.cc/public/EthicsGuidelines>? Answer: Justification: This work conforms with the NeurIPS Code of Ethics. Guidelines: * The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics. * If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics. * The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction). * Broader Impacts Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed? Answer: Justification: We present the potential positive and negative societal impacts in the discussion section. Guidelines: * The answer NA means that there is no societal impact of the work performed. * If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact. * Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations. * The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster. 
* The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology. * If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML). * Safeguards Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)? Answer: Justification: This paper does not have a high risk for misuse. Guidelines: * The answer NA means that the paper poses no such risks. * Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters. * Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images. * We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort. * Licenses for existing assets Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected? Answer: Justification: We have cited the original owners of all assets that used in this paper properly. Guidelines: * The answer NA means that the paper does not use existing assets. * The authors should cite the original paper that produced the code package or dataset. * The authors should state which version of the asset is used and, if possible, include a URL. * The name of the license (e.g., CC-BY 4.0) should be included for each asset. * For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided. * If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, <paperswithcode.com/datasets> has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset. * For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided. * If this information is not available online, the authors are encouraged to reach out to the asset's creators. * New Assets Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets? Answer: Justification: This paper does not release new assets. Guidelines: * The answer NA means that the paper does not release new assets. * Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc. * The paper should discuss whether and how consent was obtained from people whose asset is used. 
* At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file. * Crowdsourcing and Research with Human Subjects Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)? Answer: Justification: This paper does not involve crowdsourcing and human subjects. Guidelines: * The answer NA means that the paper does not involve crowdsourcing nor research with human subjects. * Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper. * According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector. * Institutional Review Board (IRB) Approvals or Equivalent for Research with Human Subjects Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained? Answer: Justification: This paper does not involve crowdsourcing and human subjects. Guidelines: * The answer NA means that the paper does not involve crowdsourcing nor research with human subjects. * Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper. * We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution. * For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.
http://arxiv.org/abs/2407.02863v1
20240703072221
Fast maneuver recovery from aerial observation: trajectory clustering and outliers rejection
[ "Nelson de Moura", "Augustin Gervreau-Mercier", "Fernando Garrido", "Fawzi Nashashibi" ]
cs.AI
[ "cs.AI", "cs.CV", "cs.RO" ]
Light fermion masses in partially deconstructed models Maki Takeuchi July 8, 2024 ====================================================== empty empty § ABSTRACT The implementation of road user models that realistically reproduce a credible behavior in a multi-agent simulation is still an open problem. A data-driven approach is proposed here to infer behaviors that may exist in real situation to obtain different types of trajectories from a large set of observations. The data, and its classification, could then be used to train models capable to extrapolate such behavior. Cars and two different types of Vulnerable Road Users (VRU) will be considered by the trajectory clustering methods proposed: pedestrians and cyclists. The results reported here evaluate methods to extract well-defined trajectory classes from raw data without the use of map information while also separating "eccentric" or incomplete trajectories from the ones that are complete and representative in any scenario. Two environments will serve as test for the methods develop, three different intersections and one roundabout. The resulting clusters of trajectories can then be used for prediction or learning tasks or discarded if it is composed by outliers. § INTRODUCTION Simulation is a indispensable tool to prove the efficacy and viability of any framework or system capable to drive an Automated Vehicle (AV) before integration with a prototype. In most cases these simulations reproduce the behaviors of other road users based on real recordings, which in this case the behavior of all road users is fixed ahead of time, or it relies on hybrid approaches, mixing real information with some a priori hypothesis, modeling and/or knowledge about the agents being represented. The main goal of this work is to produce classifications of trajectories to feed these hybrid methods with reliable and diverse set of trajectories, for different situations and different road users while being fast to execute and reliable enough to sift through outliers at input. Trajectory clustering has been a long research topic in the AV area. Many articles deal specially with the analysis of vehicle trajectory as a way to retrieve the possible trajectories in an environment, to study the traffic flow intersections <cit.>, to discover possible longitudinal behaviors of vehicles <cit.>, to execute some learning task <cit.> or even to examine the scenarios that might happen during driving <cit.>. Thus, the goal of this paper is to propose a fast and robust way to recover sets of trajectories for vehicles and vulnerable road users (VRU), delivering sets of trajectory samples as input to all the aforementioned tasks in a simple manner. Approaches based on clustering with dynamic time warping (DTW) are the norm in the literature. In <cit.> three different threshold comparisons were made using the DTW distance metric to establish a similarity relationship between scenarios involving multiple road users. K-means and fuzzy c-means were used for <cit.> with longest common subsequence (LCSS) to cluster trajectories in intersections so to derive insights about the traffic flow in multiple lane cross-intersections. Changing from the urban to aerial traffic, <cit.> proposed a method to combine k-means with outlier removal based on information theory by the minimization of the holoentropy and achieving good results clustering flight trajectories. 
With a different motivation, <cit.> implemented a fast k-means clustering method only for vehicles and bypassing the outliers problem. And on a totally different scale <cit.> applied the same idea of trajectory clustering but on a city scale, learning an embedding to simplify the trajectory representation and then clustering the projected data to find vehicles with similar behavior. The contribution of this paper is two-fold: First, to propose a fast clustering method for maneuver retrieval from real observations that do not need any map information and that is compatible with vehicles and VRUs. Second, to adapt this model to deal with trajectories that can be considered as outliers, separating these "eccentric"[Definition on section <ref>] and/or erroneous instances without disturbing the clustering process. A preview of the results obtained can be seen in Figure <ref>. Given the simplicity of the approach, the method presented here can be a valuable addition to the plethora that already exists. All data used originate from a microscopic observation of road environments by a drone <cit.>, which retain a good amount of information about the behaviors of each road users, in comparison with <cit.> that make observations on a larger scope and well structured environment (4-lane road with signals), and of <cit.> that focus more on the interaction size with short observation periods (4s) and 10Hz of acquisition frequency (in comparison with 25Hz of the data used here). § SEPARATING TRAJECTORIES OF INTEREST Differently from the vehicle trajectories studied in <cit.>, VRU trajectories are less constraint by its environment and are also more prone to acquisition error as well (shadows, changes in direction, multiple users close by). Also, when a scenario for observation is defined some of the less intuitive trajectories become superfluous, considering the interest of keeping only those which represent behaviors that can be transposed in other scenarios. Take, for example, the trajectories displayed in Figure <ref> (white cross represents the beginning of the trajectory and black cross the end): these three different sets may represent a real-life situation, like getting out of a store and entering in a car but they are not of interest since they are scenario-specific. These types of trajectories will be qualified as eccentric from now on. The main focus is to sift through entire datasets and isolate the eccentric trajectories (like the ones in Figure <ref>) and erroneous ones (like vehicles starting their trajectory in the middle of an intersection) in specific clusters and trajectories of interest Figure <ref> in their own clusters. The final result can then be visually inspected to discard some and retain others. Both figures were produced using the data available in the InD Dataset <cit.>. Given the difficulties that methods of the similar inspiration of k-means have with outliers, other clustering algorithms were considered to deal with pedestrian and cyclist trajectories. All of them using a pre-calculated dissimilarity matrix (Equation (<ref>)) where each element is the result of the Dynamic Time Warping (DTW, subsection <ref>) distance measure of two trajectories. 𝒟_DTW = [ d_0,0^DTW = 0 ⋯ d_0, n^DTW; d_1,0^DTW ⋯ d_1, n^DTW; ⋮ ⋱ ⋮; d_n,0^DTW ⋯ d_n, n^DTW = 0 ] §.§ Dynamic time warp (DTW) DTW was first introduced in the speech processing domain as a way to compare two time series that have different phases. 
Even though the trajectories studied here were sampled at the same frequency, they may have different lengths, even when corresponding to the same maneuver in an intersection. Consider two discrete time series, represented by (<ref>) and (<ref>), with different sizes n and m, where K = {k_0, k_1, …, k_n, …, k_m, …} represents the sampled periods: R[K] = r[k_0], r[k_1], …, r[k_n] S[K] = s[k_0], s[k_1], …, s[k_m] The goal of the DTW is to calculate the optimal sequence of pairs of point indexes, one from each time series. This is done by minimizing the Euclidean distance between the points indicated by the index pair, from (r[k_0], s[k_0]) to (r[k_n], s[k_m]), using a certain set of increments to walk from the former to the latter. In the standard implementation (equation <ref>) three steps are tested: +1 on index 1, +1 on index 2, or +1 on both. Equation (<ref>) defines the DTW of R and S as the accumulated sum of distances, which is determined by the recursion in Equation (<ref>), for 0 ≤ i ≤ k_n and 0 ≤ j ≤ k_m. DTW(R,S) = γ(k_n,k_m) γ(i,j) = d(r[k_i], s[k_j]) + min[γ(i-1,j), γ(i,j-1), γ(i-1,j-1)] There are multiple DTW variants, some changing the walk used in the recursion (<ref>) (constrained DTW <cit.>) or adopting restrictions on the elements to be considered by Equation (<ref>) (Sakoe-Chiba band <cit.>; Itakura parallelogram <cit.>). Usually, when the Euclidean metric is used, the centroid of a set of series can be calculated simply by summing all the elements and dividing by the number of series in the set. One can do the same with series of different lengths using Dynamic Barycenter Averaging (DBA) <cit.>; however, the result of this algorithm is usually a non-differentiable array, always with the same size as the longest array in the set. When a time series is needed to represent the ensemble of a cluster, the medoid will be chosen, according to Equation (<ref>), where 𝒳 is the set being considered and d in our case is the DTW distance. x_med = argmin_x∈𝒳∑_i=0^N d_DTW(x, x_i) §.§ Clustering methods Three main methods were used to cluster the trajectories using the DTW distance: hierarchical clustering, partition around medoids (or k-medoids) and dissimilarity matrix clustering. §.§.§ Hierarchical clustering The hierarchical clustering used was based on an agglomerative process, i.e., it starts with each sample being a cluster and at each step merges the two most similar clusters, continuing this process until the desired number of clusters is achieved <cit.>. The metric used to measure the similarity of two clusters, and thus to decide which clusters should be merged at a given iteration, was the average linkage, Equation (<ref>): d_𝒞_i, 𝒞_j = 1/N· M∑_x_i∈𝒞_i^N∑_x_j∈𝒞_j^M d(x_i, x_j) where 𝒞_i and 𝒞_j are the two clusters being evaluated and N and M are the numbers of elements inside each respective cluster. The distance measure used is the DTW (from (<ref>)). §.§.§ Partition around medoids (or k-medoids) It uses the same sequence of calculation as the k-means algorithm - allocation of elements to clusters followed by center recalculation - but using the medoid element as the cluster center (equation <ref>), not a synthetic average of elements <cit.>. Such an adaptation is common in cases where it is difficult to calculate the average of the elements being clustered, for example when they do not have the same length.
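To make the preceding definitions concrete, the following minimal Python sketch shows how the DTW distance of Equation (<ref>), the dissimilarity matrix 𝒟_DTW, and the medoid of a cluster could be computed. It assumes trajectories are given as NumPy arrays of 2-D points; function and variable names are illustrative and not taken from the authors' implementation.

```python
import numpy as np

def dtw_distance(traj_a, traj_b):
    """Standard DTW between two 2-D trajectories of shapes (n, 2) and (m, 2)."""
    n, m = len(traj_a), len(traj_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(traj_a[i - 1] - traj_b[j - 1])  # Euclidean point distance
            # standard steps: (i-1, j), (i, j-1), (i-1, j-1)
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def dissimilarity_matrix(trajectories):
    """Symmetric matrix of pairwise DTW distances."""
    n = len(trajectories)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = dtw_distance(trajectories[i], trajectories[j])
    return D

def medoid_index(D, members):
    """Index (within `members`) minimising the summed DTW distance to the other members."""
    sub = D[np.ix_(members, members)]
    return members[int(np.argmin(sub.sum(axis=1)))]
```

The precomputed matrix can then be handed, for instance, to an off-the-shelf average-linkage hierarchical clustering (e.g. `scipy.cluster.hierarchy.linkage` on its condensed form) to reproduce the agglomerative variant described above.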
§.§.§ Dissimilarity matrix clustering This algorithm was proposed in <cit.> to accelerate the clustering of vehicle trajectories in comparison with the k-means algorithm. To simplify the k-means calculation, it is applied to each row of the dissimilarity matrix of the entire dataset, looking for the smallest cluster center over all rows (which will be the one with the smallest sum of distances to the element represented by the row). Then, all the elements assigned to this minimal cluster are removed from the matrix and the process is executed again, until the desired number of clusters is achieved. If any elements are left, they are assigned to the cluster whose medoid they are closest to. § METHODS EVALUATED Independently of which clustering method from the previous section is used, some classification errors might still appear, even if a higher number of clusters is used. Clustering using the DTW distance considers only the shape of the trajectory, which might mix trajectories that have small but important differences at their origin or terminus but that share an important part of their path. Hence, to correct these errors, the initial and final points are used in a separate clustering procedure that splits the elements based on these points. Then, it is necessary to check if any of the just-obtained sub-clusters should be fused back together, if they really belong to the same maneuver, or even if they should be merged with other sub-clusters, to determine the final result. Algorithm <ref> shows how the entire clustering process with this post-processing operation works. The interval [nk_min, nk_max] refers to the minimal and maximal number of clusters to be evaluated. §.§ Reorganization using initial and final points Two different approaches were taken to evaluate the best solution to further divide the clusters according to their initial and final points: * Cluster both the initial and final points in the same array, establishing the sub-cluster groupings automatically * Cluster the initial points, then the final points, and list the groupings created by comparing both results. Given that a search over the number of clusters is already being executed, the mean-shift algorithm was used for these two post-processing options, avoiding a nested search. Afterwards, the merge process takes place, fusing clusters with fairly similar characteristics, differing only by a few meters from each other while retaining their significant part, for example a left turn, a street crossing, etc. Evaluating whether two sub-clusters should be merged is done using the medoid of each cluster, together with the spread of elements in each sub-cluster, Equation (<ref>). The variable m_i represents the medoid of the cluster 𝒞_i and N_i its number of trajectories. To merge one sub-cluster with another, it is necessary that the distance between both medoids be smaller than the sum of the spreads of the respective clusters (line <ref> of algorithm <ref>). To discard small differences, the medoid of one cluster is projected onto the other (line <ref> of algorithm <ref>), so that the similarity disregards any tracking errors or small differences at the start or terminus of the trajectory. If this condition is true, there is another to be fulfilled: the calculated projection should be equal to or larger than a certain percentage of the original trajectory (which is defined in Table <ref> for all cases examined here).
spr_𝒞_i = 1/N_i∑_x_i∈𝒞_id_DTW(m_i, x_i) Equations (<ref>) and (<ref>) show how one trajectory is projected onto another. For two generic trajectories tr_a = (p_a0, p_a1, ⋯, p_a_n) and tr_b = (p_b0, p_b1, ⋯, p_b_m), two loops are executed: one for the initial point of tr_a and another for its final point. Inside these loops the index j is increased from 0 (or decreased from N_b for the terminus) to find the interval of points onto which p_a_0 (or p_a_n) is perpendicularly projected. The cutting point is obtained when λ_b_j∈ [0,1], meaning that the projection of p_a0 (or p_an) is between points j and j+1 of tr_b. If no cutting point is detected, the whole trajectory is used in the comparison. v_p_a_0→ p_b_j = p_b_j - p_a_0 λ_b_j = [v_p_a_0→ p_b_j] ·v̂_p_b_j, j+1/v̂_p_b_j, j+1 §.§ Evaluating cluster partition quality Finally, it is necessary to evaluate the clusters according to the similarity of elements inside each cluster and with respect to other clusters as well. Three metrics will be used for this: the Davies-Bouldin index, the silhouette score and the spread on cluster, proposed here. The DB index will be slightly modified to better represent the distribution quality for a certain number of clusters, while the silhouette score will be used as defined in <cit.>. §.§.§ Davies-Bouldin index (DB) The Davies-Bouldin index (DB) is originally defined as the average of the maximal value of R_ij, as defined in Equations (<ref>) and (<ref>). Equation (<ref>) is used to calculate the spread s_i. Since it is the maximal value of R_ij that is used to calculate the final score, it creates a dependency on the number of clusters, i.e. a decrease in the score is connected to a higher number of clusters and not necessarily to a better distribution. R_ij = (s_i+s_j)/d_ij DB_n_c = 1/n_c∑_i=0^n_c[max_j=1,…,n_c, i≠ jR_ij] Thus, a small modification was made: the average of R_ij is used instead of its maximal value, as can be seen in Equation (<ref>). This is less biased by the number of clusters, since it always considers the whole distribution of elements being evaluated. The n_c - 1 discounts the distance of the medoid to itself, which is zero. DB_n_c = 1/n_c·1/(n_c - 1)·∑_i=0^n_c∑_j=0^n_cR_ij §.§.§ Silhouette score (Slh.) Another metric to evaluate the clustering quality is the silhouette score, proposed in <cit.>. Differently from the DB score, it is calculated for each element being clustered, with a(i) being the average dissimilarity (DTW distance in this case) of element i to all other elements in its cluster. The other value necessary to calculate the silhouette is the minimum average dissimilarity between the element in question and the other clusters, Equation (<ref>). s_𝒞_i(i) = (b(i) - a(i))/max(a(i), b(i)) b(i) = min_𝒞_j ≠𝒞_i d_DTW(i, 𝒞_j) With the s(i) for each trajectory, the silhouette score for the clustering is the average score over all elements. This score is contained in [-1, 1], with a score close to 1 being excellent (distances inside the cluster are much smaller than distances between clusters). Like the DB score, it compares an intra-cluster spread measure with inter-cluster distances, but at an individual level. As will be seen, in some situations where many different clusters co-exist close to each other (subsection <ref>), it will not be a representative measure of cluster quality. §.§.§ Spread on cluster (Spr.) This measure is somewhat similar to Equation (<ref>) but was changed to capture the biggest difference between two members of the same cluster.
In Equation (<ref>), the average of all the spreads, each divided by the number of members in the respective cluster, defines the metric. θ_𝒞 = 1/|𝒞|·∑_i=0^|𝒞|max_j, k∈𝒞_i[d_DTW(x_j, x_k)]/|𝒞_i| This metric will be especially important for the pedestrian case, where the silhouette score is not representative given the close proximity of multiple clusters. § RESULTS §.§ Methodology Two different sources of data will be used to test the algorithms proposed here: the inD dataset <cit.> and the rounD dataset <cit.>. Both are obtained using an unmanned aerial vehicle (UAV), the former containing four different intersections and the latter three roundabouts, all in Germany. The information about the number of trajectories per scenario and per road user can be found in table <ref>; the number of files refers to the data-batch file division for each scenario: in the inD case only the first three intersections were used for the VRU (for the vehicle case all intersections were tested), and for rounD there are two files with two different roundabouts that are not used (only a few observations); the other scenarios are obtained from the observation of a third one (files 02 to 23). This data was split into three scenarios, given the number of trajectories (the clustering results of the three scenarios could be merged using the algorithm proposed in <cit.>). For the clustering execution all three road user trajectory types from inD were used (in blue in Table <ref>), while for rounD only the vehicles' trajectories could be used, due to the low number of observations for pedestrians and cyclists (in red in Table <ref>). Table <ref> shows the parameters used for the tests that will be presented next. The number of clusters establishes the interval of cluster counts tested by the methods, while the Min. trace refers to the minimal percentage that the projected medoid must have to be merged with another cluster (subsection <ref>, algorithm <ref>). The choice of these specific datasets was motivated by the fact that both are captured by a drone, not by an automated vehicle in the environment (which could modify the observed behaviors), and also because the metadata present in the datasets allows the trajectories to be plotted over a realistic background image. However, the method presented here could be used on any other ensemble of trajectories. Also, all the data is used as-is: no trajectory in the dataset is discarded beforehand; only a normalization is applied to each dimension of the trajectory before calculating the dissimilarity matrix (Equation <ref>). All the methods were implemented in Python. §.§ Pedestrians There are three intersection scenarios for pedestrians, all in the inD dataset. The most important problem with pedestrians is the high variability of maneuvers, because they have an almost constraint-free environment in which to evolve, and also due to detection and tracking errors during the dataset acquisition and post-processing. It is a real challenge for the clustering process to treat all these problems and produce a compact cluster set. The results for scenarios 0 and 2 can be seen in table <ref> and table <ref>. The abbreviation Agglo refers to the pure agglomerative clustering, A2MS to the agglomerative followed by two mean-shifts, on the initial and final points separately, and A1MS to one mean-shift on the initial and final points in the same array. For all tables, <ref> through <ref>, the time indicated is the average clustering time per calculated cluster.
In the column best n_k, the first value is the nominal number of clusters used and, in parentheses, the final number of clusters; the last column gives the percentage of the original trajectories present in the final classification (in parentheses, the number of rejected trajectories). Both scenarios are two extremes for the clustering process: one has few trajectories and not many options for a pedestrian to evolve in the environment, and the other has much more of both. It is exactly this multitude of possible destinations and the possibility of using the same space in both directions that makes the silhouette measure fail when choosing the best method. Since the optimal number of clusters for each method is different, the spread on cluster, defined in <ref>, is the best criterion to select the best method overall, and in the current case it indicates that the agglomerative clustering with two mean-shift applications (A2MS) is the best option for scenarios 0 and 2 (for scenario 1 as well; due to space limitations it is not shown here). For both scenarios the same observation can be made: the A2MS method has the lowest DB score and spread, while the pure agglomerative method has the biggest silhouette score. This is because, with the latter method, some clusters that should be separated end up together, while with the former they can usually be separated by the second clustering step. In both cases, clusters with a single element are discarded from the final distribution (and not accounted for during the calculation of the scores presented in each table). The exact same observations can be made for scenario 2[For lack of time to process the data, the results for the PAM and dissi. methods are not available at submission time; this does not change the final conclusion. The values will be updated in the final submission.]. Figure <ref> shows an example of the effect that clustering with initial and final points, and then merging the most similar trajectories, can have. In Figure <ref> the result of the pure agglomerative clustering can be seen for the separated cluster 2, where it mixes in an outlier (the white cross in the middle of the trajectory). The A2MS method (A1MS produced the same result) was not only able to separate the outliers (figure <ref>) but also merged another pertinent cluster with it. Differences between the pure agglomerative method and both mean-shift post-processing variants are clearly visible, but in scenario 2 the results are more similar. This is probably due to the high number of samples, which helped the pure agglomerative method to sift through the outliers, but, as can be seen in Figure <ref>, not enough; the efficacy of the A2MS method in removing mixed clusters can also be seen. Concerning the comparison with the PAM and dissimilarity methods, one can see that they are inefficient on both fronts evaluated here: they take more time to compute and do not produce tight clusters, especially because of the outliers present in the scene. In <cit.>, these methods were used to discover the different maneuvers of vehicles, but it must be highlighted that cars' behaviors are much more constrained than pedestrians' (which are thus more prone to outliers) and that the few outliers observed there were removed before execution. §.§ Cyclists The cyclist data was acquired at the same intersections as the pedestrians.
From the set of trajectories given by the dataset, the approximate cyclist behavior can be considered as somewhere between cars and pedestrians: the movement is constrained, but cyclists are still able to access multiple parts of the road environment. For the scenario 1 results (Table <ref>), the main distinction that can be made between the pure agglo. method and its modifications is the opposite of what was observed with pedestrians. Some of the trajectories were split between multiple clusters by the pure agglomerative method, while for the A*MS (meaning both A1MS and A2MS) methods these clusters could be merged together, especially in scenario 2. There are other instances of this same behavior in different clusters as well. Both A*MS methods can attribute their superior scores to the ability to remove outliers from clusters, as illustrated in Figure <ref>. §.§ Vehicles For vehicles the volume of data increases, with the addition of inD dataset scenario 3 and the entire rounD dataset. Since the movements of vehicles are very constrained, there is almost no eccentric behavior; hence the goal here is to eliminate all the erroneous samples, for example trajectories that end in the middle of the intersection. In some cases this was possible, notably in scenarios 1 and 2 of the inD dataset; however, in scenario 0 one maneuver got separated as a result of the mean-shift and merge mechanism of the A2MS method (Figure <ref>). Beyond that, all other maneuvers for scenario 0 were correctly determined. For scenario 1 the agglo., A2MS and A1MS were spot on, with the sole difference that two clusters detected by the agglo and the A1MS are actually outliers and were rejected by A2MS before they formed clusters. As for the PAM and dissi methods, they split different maneuvers into different clusters: Figure <ref> represents the cluster that was divided into Figures <ref>, <ref>, <ref>, which explains the huge disparity shown in Table <ref>. Table <ref> marks the first time that the A1MS method had a better DB index than its analogue, due to the splitting of a curve maneuver that contained many samples. This is why the DB score is not used to define the best method, even when the number of clusters is the same: in some situations, splitting a cluster that should be a unit might be beneficial because of how the spread (Equation (<ref>)) is calculated. Besides that, again the pure agglomerative clustering is not capable of splitting maneuvers that share most of their length (Figures <ref> and <ref>). For scenario 3 the A1MS is actually the better option, but the difference amounts to only two samples classified differently. Since the trajectories in the rounD dataset are fairly different in length and direction, the DTW distance measure is able to clearly distinguish trajectories from different maneuvers, as can be seen in table <ref>. However, there is something that is not as salient in the short turns at the inD intersections but is in this case. Given the size of the roundabout, the position in which the vehicles execute the trajectory becomes a discriminating parameter, i.e. the clusters also account for whether trajectories are on the inside or the outside (Figures <ref> and <ref> compared with <ref>), together with the lane in which the vehicle ends or starts its trajectory. This difference mostly impacted the merge step, given that there is now a lateral distance through the curve that is bigger than the spread of both clusters. Whether this division is appropriate or not is in the eye of the beholder.
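Since the spread on cluster score of Equation (<ref>) drives most of the comparisons in these results, a minimal Python sketch of how it could be computed from the precomputed dissimilarity matrix and the final cluster labels is given below; the function name and the handling of singleton clusters are illustrative assumptions, not taken from the authors' code.

```python
import numpy as np

def spread_on_cluster(D, labels):
    """Average over clusters of (largest intra-cluster DTW distance / cluster size).

    D      -- precomputed square DTW dissimilarity matrix
    labels -- final cluster label per trajectory
    """
    scores = []
    for c in np.unique(labels):
        members = np.where(labels == c)[0]
        if len(members) < 2:
            continue  # singleton clusters are discarded upstream and have no intra-cluster spread
        intra = D[np.ix_(members, members)]
        scores.append(intra.max() / len(members))
    return float(np.mean(scores)) if scores else 0.0
```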
As a general comment, for the current use-case the A2MS method proved vastly better than any other, which was made clear by the spread on cluster score defined here. It captures the tightness of each cluster much better than the DB index, which translates into clusters with few to no outliers in their midst. § CONCLUSION A new method to cluster trajectories, A2MS, together with a metric defined for the trajectory clustering case, the spread on cluster, were proposed and tested on the inD and rounD datasets. Using hierarchical clustering combined with the DTW distance measure and the spread on cluster as a cluster distribution measure, A2MS proved to be the most efficient of the tested methods at producing tight, concentrated clusters with a minimal number of outliers. The immediate next step is to use this clustering method in conjunction with the longitudinal approach proposed by <cit.> to extract drivers' behaviors from real data. More broadly, the method proposed here has multiple uses, from preparing data for learning tasks in planning, decision-making or prediction, to the study of traffic flow in a predetermined zone. Ultimately, this method will make it possible to collect data to train a representation of trajectories, so that comparisons can be made with trajectories from different road configurations.
http://arxiv.org/abs/2407.03066v1
20240703124000
Effects of Multi-Parton Interactions in Jet Quenching in Heavy-Ion Collisions
[ "Andrecia Ramnath", "Korinna Zapp" ]
hep-ph
[ "hep-ph" ]
andrecia.ramnath@fysik.lu.se Department of Physics, Lund University, Box 118, SE 22100 Lund, Sweden korinna.zapp@fysik.lu.se Department of Physics, Lund University, Box 118, SE 22100 Lund, Sweden § ABSTRACT We perform the first systematic study of the effects of multi-parton interactions (MPI's) in the context of jet quenching in heavy-ion collisions with the jet quenching model Jewel. We use the simple MPI model of Pythia 6, on which Jewel is based. We find negligible effects on all observables except jet–hadron and Z–hadron correlations, which show a moderate enhancement at large distances. More detailed analysis at parton level reveals that, in heavy-ion collisions, the MPI contribution to jets is suppressed by quenching effects. Effects of Multi-Parton Interactions in Jet Quenching in Heavy-Ion Collisions Korinna Zapp July 8, 2024 ============================================================================= § INTRODUCTION The substructure of quenched jets is the subject of intense research both theoretically <cit.> and experimentally <cit.>. Medium-induced radiation <cit.>, color coherence <cit.> and medium response <cit.> are expected to leave imprints on the internal structure of jets. The ultimate goal is thus to decode this information about the microscopic workings of parton–medium interactions. However, understanding the sub-structure of quenched jets is challenging both from a theoretical and an experimental perspective. In an attempt to get possibly confounding factors under control, the effect of initial-state radiation was studied in <cit.>. Here, we continue that effort by investigating the effect of multi-parton interactions (MPI's) in the context of jet quenching. MPI's arise when, at high centre-of-mass energies, the probability of having more than one parton–parton scattering in a proton–(anti-)proton collision becomes sizable. Formally, this effect is beyond standard factorization theorems and therefore phenomenological modeling is needed. In Monte Carlo event generators, MPI's are simulated as secondary 2→2 partonic scatterings in QCD, as they are expected to be perturbatively hard. MPI's give rise to semi-hard hadronic activity that is largely uncorrelated with the hard scattering and is observed in the form of the underlying event. MPI's have been studied extensively in proton–proton collisions <cit.>, but have not been a main focus in heavy-ion physics. An exception is <cit.>, where it was observed that MPI activity on the Z boson side of Z+jet events can obscure signs of the diffusion wake. § MODELING JET EVOLUTION AND THE UNDERLYING EVENT IN JEWEL Jewel <cit.> relies heavily on Pythia 6.4 <cit.> for the event generation. In particular, the hard scattering matrix elements, initial state parton showers including PDF handling and hadronization are provided by Pythia 6.4. We therefore let Pythia 6.4 also generate the additional MPI scatterings. There are two versions of the MPI model in Pythia 6.4: the so-called `old model' <cit.> and the `new model' <cit.> in which generation of the ordered sequence of MPI scatterings is interleaved with the parton shower evolution. Since Jewel has its own parton shower, the interleaved model does not work with Jewel and we instead use the `old model'. While this is a simpler model than the more sophisticated one developed later, it can still inform us whether sizable effects from MPI's can be expected in the context of jet quenching studies.
Here, we will summarize the main features of the model; for a more detailed discussion the reader is referred to <cit.>. The jet cross section above some minimal transverse momentum p_⊥^min is given by σ_hard = ∫_(p_⊥^min)^2^s/4 dσ/dp_⊥^2 dp_⊥^2 . This cross section diverges for p_⊥^min→ 0 and saturates the non-diffractive proton–proton cross section σ_nd for perturbatively high values of p_⊥^min at sufficiently high collider energies. Since σ_hard is a partonic cross section, this is interpreted as a sign for several partonic scatterings taking place in one proton–proton collision. These are postulated to be independent of each other, so that the number of parton–parton scatterings follows a Poisson distribution with mean n̅ = σ_hard/σ_nd . The scatterings are generated as a sequence with falling p_⊥. In each step, the PDF's are rescaled to take into account the energy already taken out by the previous scatterings. This way the hardest scattering is guaranteed to be unmodified by subsequent scatterings. The color treatment of the MPI scatterings is simplified in order to avoid too complicated color topologies. There are different options for modeling the matter distribution inside the proton, the default being a double Gaussian with a narrow core representing the valence quarks surrounded by a broader distribution of gluons and sea quarks. The mean number of MPI scatterings is then dependent on the impact parameter, i.e. the transverse distance between the cores of the colliding protons. In practice, the mean number of scatterings at a given impact parameter is taken to be proportional to the matter overlap in such a way that, averaging over impact parameters, the original relation eq. <ref> is recovered. In the Pythia implementation of the model, MPI scatterings have no parton showers. When running with Jewel, there is an option to supplement the MPI scatterings with final-state parton showers. The partons from MPI scatterings are showered pairwise such that the recoil is transferred only between partons coming from the same scattering. For the starting scale for the parton shower, we use <cit.> Q_max = p_⊥ e^0.3 Δ y/2/2 , where Δ y is the rapidity difference between the two outgoing partons of an MPI scattering (for the hardest scattering the starting scale is still just the p_⊥ of the hard scattering). The (semi-)hard partons produced by secondary MPI scatterings interact in the dense background in the same way as the partons coming from the hardest scattering. In Jewel the small differences in the production points of the partons coming from different MPI scatterings are neglected, i.e. all partons coming out of hard scatterings are placed at the same production point. The partons then propagate in the background medium and undergo elastic scattering. If a scattering is hard enough it should have radiative corrections generated by a parton shower. In practice, a trial parton shower with starting scale given by the hardness of the scattering is started. If the first emission of the trial parton shower has a formation time that is shorter than that of the next emission from the original parton shower, the trial parton shower becomes the new parton shower and the old one is stopped. Otherwise, the trial parton shower is rejected and the old one continues. Scattering in the background medium can thus enlarge the phase space for radiation, which leads to more radiation than in vacuum.
When several scatterings occur during the formation time of an emission, they are added coherently in a way that reproduces the non-Abelian Landau–Pomerantchuk–Migdal (LPM) effect <cit.>. After all scattering and radiation processes have terminated, the event is hadronized using the Lund string fragmentation model implemented in Pythia. In Jewel, hard partons scatter off quasi-particles in the medium. There is no complete simulation of the background. Instead, a background parton is generated when one is needed for a scattering with a hard parton. This background parton then takes the recoil from the scattering. Since background partons are typically softer than the hard partons, the result of a scattering is usually that the hard parton loses energy and the background parton gains energy. There is an option in Jewel to keep the recoiling background parton in the event to get an estimate of where the energy lost by hard partons is going <cit.>. This is only approximate, since the recoiling background partons do not interact in the medium themselves but simply free-stream. When the recoil partons are included in the event, their momentum prior to the scattering has to be removed from the final jets because uncorrelated background is subtracted from the jets. We here use the constituent subtraction scheme for removal of thermal momenta <cit.>. As far as we are aware, this is the first systematic study of the effect of MPI's in jet quenching. § RESULTS Z+jet and di-jet samples of 500,000 events each are generated for Pb+Pb and p+p collisions at √(s_NN) = 5.02 TeV using PDF sets provided by Lhapdf 6 <cit.>. Hadron-level results are generated using the Epps16nlo nuclear PDF set <cit.> and parton-level results are generated with Ct14nlo PDF's <cit.>. In this way, nuclear PDF's are not used, to isolate the effects of MPI's. All events are analyzed with Rivet <cit.> and the FastJet package <cit.>. In addition to the effect of MPI's, final-state radiation (FSR) off the MPI's is also considered. In all plots, the red lines correspond to collisions with no MPI's. The blue lines correspond to collisions with MPI's but no final-state radiation off the MPI's. The green lines correspond to collisions with both MPI's and final-state radiation off the MPI's. In all results, medium response with constituent subtraction at event level, i.e. before jet clustering, is included. For the background medium, we use Jewel's standard simplified background model with initial temperature T_i = 590 MeV at τ_i = 0.4 fm <cit.>. §.§ Z+jet events Events in which charged hadrons are produced in a parton shower from the same hard scattering as a leptonically decaying Z boson are analysed. The results are compared to measurements by CMS <cit.>. The samples are generated with 0-30 % centrality, where the largest modifications due to medium effects are expected. The analysis procedure matches as far as possible the experimental one. Specifically, jets are reconstructed using the anti-k_⊥ algorithm <cit.> and a jet radius of R=0.4 is chosen. Z bosons with invariant mass 60 GeV < M_Z < 120 GeV and transverse momentum p_⊥^Z > 30 GeV are considered. We only consider the decay of the Z to muons, where in <cit.>, the Z →μ^+μ^- trigger has cutoffs of p_⊥ > 12 GeV and |η| < 2.4 on one muon. Figures <ref> and <ref> show the distributions of the angular separation Δϕ := |ϕ_trk - ϕ_Z| in Pb+Pb and p+p collisions, respectively. Here, ϕ_Z and ϕ_trk are the azimuthal angles of the Z boson and of other charged tracks in the event, respectively.
The distributions dN_trk,Z/dΔϕ_trk,Z are normalised by N_Z, the number of Z bosons. There is an enhancement of the distribution on the boson side due to MPI's in both Pb+Pb and p+p collisions. This is in agreement with the findings in <cit.>, where a coupled linear Boltzmann transport and hydro model is used to study the enhancement of soft hadrons in the direction of both the Z boson and the jet. Figure <ref> shows that, in the Pb+Pb case, including FSR appears to enhance the spectrum even further. These findings raise the question of whether MPI contributions are visible also in other jet observables that are sensitive to soft or semi-hard particles. We thus move on to examine a selection of such quantities. Figures <ref> and <ref> show the normalized distributions 1/N_Z dN_trk,Z/dξ_⊥^trk,Z for the jet fragmentation variable ξ_⊥^trk,Z in Pb+Pb and p+p collisions, respectively. This variable is the longitudinal momentum distribution of tracks on the jet side with Δϕ_trk,Z > 7π/8 and is defined as ξ_⊥^trk,Z := ln( - |p⃗_⊥^Z|^2/p⃗_⊥^ trk·p⃗_⊥^Z). Here, p⃗_⊥^Z and p⃗_⊥^trk are the transverse momentum vectors (with respect to the beam direction) of the Z boson and charged-particle track, respectively. The results show a very slight increase at the high ξ_⊥^trk,Z (low track p_⊥) end of the distribution due to MPI's. The effect is small since particles from MPI's are distributed uniformly in azimuthal angle relative to the Z. The region Δϕ_trk,Z > 7π/8 is dominated by the fragmentation of the jet and the MPI contribution is relatively very small. §.§ Di-jet hadron-level results The 0-10 % centrality interval is chosen for the di-jet samples. The hadron-level results shown here correspond to a jet radius of R=0.4. Figure <ref> shows the nuclear modification factor R_AA for |η| < 2.8 as measured by Atlas <cit.>. No statistically significant modification due to MPI's is observed. MPI effects partially cancel in the ratio between p+p and Pb+Pb, and the jet is dominated by particles from the hardest scattering. An observable that is much more sensitive to soft particles at the periphery of the jet is the jet mass. Figure <ref> shows the distribution for the charged-jet mass M_ch jet, as measured by Alice <cit.>. Charged jets are clustered using only charged particles. Interestingly, no modification of the jet mass distribution is found. This is also true for the other jet p_⊥ bins, which are not shown here. Jet–hadron correlations can be used to characterise the hadron distribution further away from the jet axis and thus in regions that are less dominated by the jet fragments. Figures <ref> and <ref> show the charged-particle track yields Y as a function of the distance Δ r = √((Δη)^2 + (Δϕ)^2) from the jet axis for Pb+Pb and p+p collisions, respectively. They are compared to measurements by CMS <cit.>, where events are selected with at least one jet with p_⊥ > 80 GeV. As expected, there is a slight enhancement at larger angles from the jet axis due to MPI's, in both the Pb+Pb and p+p spectra. As a last observable we show the jet fragmentation function, which characterises how the jet momentum is shared among the hadrons that make up the jet. Figures <ref> and <ref> show the jet distribution D as a function of the charged-particle transverse momentum p_⊥^ch in Pb+Pb and p+p collisions, respectively, as measured by Atlas <cit.>. It is defined as D(p_⊥^ch) := 1/N_jet dN_ch(p_⊥^ch)/dp_⊥^ch, where N_ch is the number of charged particles associated with a jet.
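As an illustration of the normalisation convention in this definition, the short sketch below builds the per-jet fragmentation function from the charged-track transverse momenta associated with each jet; it is a schematic Python example with illustrative names and is not the Rivet analysis code used for the figures.

```python
import numpy as np

def fragmentation_function(track_pts_per_jet, bin_edges):
    """D(pT^ch) = (1/N_jet) dN_ch/dpT^ch, from the charged-track pT's associated with each jet."""
    n_jet = len(track_pts_per_jet)
    all_pts = np.concatenate(track_pts_per_jet)  # pT of every charged particle in any selected jet
    counts, edges = np.histogram(all_pts, bins=bin_edges)
    return counts / (n_jet * np.diff(edges))  # per-jet yield, per unit pT
```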
In p+p collisions, MPI's give rise to a very slight increase at low p_⊥, but no such modification is visible in Pb+Pb collisions. As discussed in the next section, this is probably due to quenching of the MPI partons. §.§ Di-jet parton-level results To gain a better understanding of MPI contributions, we also analyse di-jet events at parton level, where individual partons can be unambiguously assigned to either the hardest scattering or an MPI (which is impossible at hadron level). For the analysis at parton level, jets are reconstructed with a radius R=0.6 to make MPI contributions more visible without going to too large radii that cannot be measured experimentally. All parton-level jets have |η| < 3 and p_⊥ > 100 GeV. The distributions for the fraction p_⊥^frac of the total transverse momentum of the jet p_⊥^jet that is carried by the MPI partons in Pb+Pb and p+p collisions are shown in figures <ref> and <ref>, respectively. In both cases, the distribution peaks at very small values. This means that the jet is carried almost exclusively by partons coming from the hardest scattering and MPI's do not produce additional jets with p_⊥ > 100 GeV. Interestingly, the MPI contribution to the jet is significantly larger in p+p than in Pb+Pb. MPI's are, by construction, softer than the hardest scattering and MPI partons thus get quenched more in heavy-ion collisions. Quenching effects distribute the energy of MPI partons broadly in phase space and make it less likely than in p+p collisions that there is enough energy in the form of MPI partons within the jet cone to give a sizable contribution to the jet p_⊥. As seen in figure <ref>, this effect is more pronounced when the MPI partons have final-state parton showers that distribute the energy among more partons and amplify the quenching effect. Figures <ref> and <ref> show the jet profile ρ (r) for Pb+Pb and p+p collisions, respectively. The jet profile is the fraction of the jet's transverse momentum contained in an annulus of size δ r located at a distance r from the jet axis. It is defined as ρ (r) := 1/p_⊥^jet∑_k with Δ R_kJ∈ [r, r + δ r] p_⊥^(k), where p_⊥^jet and p_⊥^(k) are the transverse momenta of the jet and particle k, respectively. The sum is taken over all particles in the event, not only over the jet constituents. Δ R_kJ := √((Δϕ_kJ)^2 + (Δη_kJ)^2) is the angular separation between particle k and the jet axis. In p+p collisions (figure <ref>) the MPI contributions show up as a small enhancement at large distances from the jet axis. In Pb+Pb collisions, however, the sample with MPI but without FSR off MPI's shows the opposite behaviour and falls below the results without MPI at large r. This is an effect of medium response and the corresponding subtraction, because this behaviour is not observed when medium response is turned off. When FSR is included for MPI's, the jet profile increases at large r and at the same time, the statistical uncertainty increases significantly. Again, the effect is not seen without medium response and also not in smaller radius jets with medium response. What is happening here is that with MPI's, FSR off MPI's and related medium response, there are so many partons distributed broadly in the event that it is very rare that individual jets can increase their p_⊥ significantly by incorporating many of these uncorrelated partons. The sample then contains a handful of jets that have a far too large weight for their p_⊥ and that dominate the distributions and inflate the statistical uncertainties. This effect is more pronounced at larger jet radii.
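The jet profile defined above lends itself to a compact implementation. The Python sketch below evaluates ρ(r) for a single jet in one event from arrays of (p_⊥, η, ϕ) of all event particles and the kinematics of that jet; it is only meant to illustrate the definition (names and binning are illustrative), not the actual analysis code.

```python
import numpy as np

def jet_profile(particles, jet_axis, jet_pt, r_edges):
    """rho(r): fraction of the jet pT carried by event particles in annuli around the jet axis.

    particles -- array of (pt, eta, phi) for *all* particles in the event
    jet_axis  -- (eta, phi) of the jet axis
    jet_pt    -- transverse momentum of the jet
    r_edges   -- annulus edges, e.g. np.arange(0.0, 0.65, 0.05)
    """
    pt, eta, phi = particles[:, 0], particles[:, 1], particles[:, 2]
    deta = eta - jet_axis[0]
    dphi = np.mod(phi - jet_axis[1] + np.pi, 2 * np.pi) - np.pi  # wrap to (-pi, pi]
    dr = np.hypot(deta, dphi)
    summed, _ = np.histogram(dr, bins=r_edges, weights=pt)
    return summed / jet_pt  # one value of rho per annulus [r, r + delta r)
```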
In order to gain a more detailed look at the sub-structure of the jets, SoftDrop tagging <cit.> is used. The SoftDrop procedure is as follows. First, an anti-k_⊥ jet is re-clustered with the Cambridge/Aachen algorithm <cit.>. In an iterative procedure, the clustering is undone, thereby splitting the jet into two sub-jets. The softer of the two sub-jets is dropped at each step until a configuration is reached that satisfies z_g := min(p_⊥^(1), p_⊥^(2))/(p_⊥^(1) + p_⊥^(2)) > z_cut( Δ R_12/R)^β, where Δ R_12 is the angular separation between the two sub-jets and p_⊥^(i) are their transverse momenta. The z_g distributions for Pb+Pb and p+p collisions are shown in figures <ref> and <ref>, respectively. Figures <ref> and <ref> show the distributions for the opening angle θ_g between the two sub-jets in the SoftDrop algorithm for Pb+Pb and p+p collisions, respectively. The general observations are similar to the jet profile: in p+p collisions and Pb+Pb collisions without medium response (not shown here), there is no modification of the distributions due to MPI's. In the sample with MPI's but without FSR off MPI's there is a moderate modification that is caused by the interplay of MPI's, medium response and subtraction. The MPI + FSR sample shows a similar behaviour but has one bin with a very large bin value and error bars. This is probably caused by a single jet in the sample that happened to gain a sizable amount of p_⊥ from uncorrelated partons. Because these distributions are normalised, the large value of this one bin pushes the other bins down. § CONCLUSIONS The effect of MPI's on various jet observables in Z+jet and di-jet events has been studied here. In many cases, the MPI's do not make a significant change in the distributions. However, in the Z+jet case, an enhancement is clearly seen in the angular separation distributions, both in the Pb+Pb and the p+p case. This is in agreement with the results found in <cit.>. Jet–hadron correlations show a small increase at large distances from the jet axis, but no sizable modification is observed in jet R_AA, jet fragmentation distributions or jet mass. At the partonic level, it is seen that quenching effects tend to suppress the MPI contribution compared to p+p collisions. Jets with larger radii have erratic behavior due to their large size. This is because they can gain a sizable amount of p_⊥ by sweeping up uncorrelated partons from MPI's and the corresponding parton showers and medium response, and then have a too large weight for their p_⊥. This is very rare but introduces huge fluctuations that hinder the interpretation of the results. There are also indications that the interplay of MPI's, medium response and subtraction introduces artefacts. This was not observed at hadron level and would require further dedicated studies. However, given that the MPI contributions were generally found to be very small, it is questionable whether such a study is worthwhile. This is the first systematic investigation of MPI's in jet quenching to our knowledge. We have used the rather simple old Pythia 6 model for the MPI's. There is clearly room for improvements of the modeling, but as a first indication this should be sufficient to show what types of effects one can expect from MPI's. Since the conclusion of this study is that MPI effects in quenched jets are generally negligible, even in jet sub-structure and jet shape observables, investing in a better MPI model hardly seems worthwhile.
This study is part of a project that has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (Grant agreement No. 803183, collectiveQCD).
http://arxiv.org/abs/2407.01922v1
20240702033948
The backscattering problem for time-dependent potentials
[ "Medet Nursultanov", "Lauri Oksanen", "Plamen Stefanov" ]
math.AP
[ "math.AP", "35P25, 35R30" ]
http://arxiv.org/abs/2407.02058v1
20240702084341
Isoperimetry in product graphs
[ "Sahar Diskin", "Wojciech Samotij" ]
math.CO
[ "math.CO" ]
§ ABSTRACT In this short note, we establish an edge-isoperimetric inequality for arbitrary product graphs. Our inequality is sharp for subsets of many different sizes in every product graph. In particular, it implies that the 2^d-element sets with smallest edge-boundary in the hypercube are subcubes and is only marginally weaker than the Bollobás–Leader edge-isoperimetric inequalities for grids and tori. Additionally, it improves two edge-isoperimetric inequalities for products of regular graphs proved by Erde, Kang, Krivelevich, and the first author and answers two questions about edge-isoperimetry in powers of regular graphs raised in their work. Unconventional p-wave and finite-momentum superconductivity induced by altermagnetism through the formation of Bogoliubov Fermi surface Kyoung-Min Kim July 8, 2024 ======================================================================================================================================= § INTRODUCTION Given a graph G with vertex set V, a key part of the (edge-)isoperimetric problem is to determine, for every k∈ℕ, the quantity i_k(G)min{e_G(A,A^c)/|A| : A ⊆ V ∧ |A| = k}, where e_G(A, A^c) is the number of edges of G with exactly one endpoint in A. For more details about discrete isoperimetric problems, we refer the interested reader to the surveys <cit.>. In this note, we will consider the isoperimetric problem for product graphs. Instances of this problem have been studied in depth for several well-known product graphs, such as hypercubes <cit.>, Hamming graphs <cit.>, grids, and tori <cit.>. Here, we will investigate the isoperimetric problem for arbitrary product graphs. The motivation for considering this problem in such generality comes partially from the results of (and the questions posed in) the recent work <cit.>, where isoperimetric estimates played a crucial role in studying bond percolation on product graphs. Given a positive integer n and an arbitrary sequence of finite graphs G_1, …, G_n, the product graph G_1□⋯□ G_n is the graph whose vertex set is V(G_1)×…× V(G_n) and whose edges are all pairs {u,v} for which there is an index j∈n such that u_jv_j ∈ E(G_j) and u_m=v_m for all m ≠ j. In order to state our main result, we require the following definition. Given an m-vertex graph G, let ψ_G [0, log m] → [0, ∞) be the convex minorant of the function {log k : k ∈m}∋ x↦ i_e^x(G); in other words, ψ_G is the largest convex function satisfying ψ_G(log k) ≤ i_k(G) for all k∈m.[Here and throughout the paper, log denotes the natural logarithm.] Observe that ψ_G is piecewise linear and that the only points where its derivative is not continuous are of the form log k for some integer k∈m. Further, ψ_G is decreasing, as i_k(G)≥ 0=i_m(G) for all k∈m and thus the left derivative of ψ_G at log m is nonpositive. Let n be a positive integer, let G_1,…, G_n be an arbitrary sequence of finite graphs, and let G_1□⋯□ G_n. For every ∅≠ A⊆ V(), e_(A,A^c)≥ |A|·min{∑_i=1^nψ_G_i(h_i) : 0≤ h_i≤log|V(G_i)|∧∑_i=1^nh_i=log|A|}. In particular, if G_1=⋯=G_n=G, then e_(A,A^c)≥ |A|· n·ψ_G((log |A|)/n). Let us note that <Ref> gives a sharp bound for every n, every sequence G_1,…, G_n, and sets A of many different sizes. To see this, assume for simplicity that G_1=⋯=G_n=G for some graph G with m vertices, so that =G_1□⋯□ G_n G^n. Consider arbitrary integers k_1, k_2 ∈m with k_1 < k_2 such that ψ_G(log k_i) = i_k_i(G) for both i ∈2 and ψ_G is linear on [log k_1, log k_2]. 
Further, let A_1, A_2 ⊆ V(G) be sets witnessing |A_i| = k_i and e_G(A_i, A_i^c) = i_k_i(G) · k_i for both i ∈2. Then, for all nonnegative integers n_1 and n_2 satisfying n_1+n_2=n, the set A A_1^n_1× A_2^n_2⊆ V(G)^n satisfies e_G^n(A, A^c)/|A| = n_1 · i_k_1(G) + n_2 · i_k_2(G) = n_1 ·ψ_G(log k_1) + n_2 ·ψ_G(log k_2) = n ·ψ_G(n_1/n·log k_1 + n_2/n·log k_2) = n ·ψ_G((log |A|) / n). The above argument extends to product graphs that are not necessarily powers of a single graph. In this general case, the lower bound on e_(A, A^c) is achieved by sets A of the form A_1 ×…× A_n, where A_i ⊆ V(G_i) satisfies e_G_i(A_i, A_i^c) / |A_i|= i_|A_i|(G_i) = ψ_G_i(log |A_i|) and, further, there is a real number r such that, for each i ∈n, the left derivative of ψ_G_i at log |A_i| is at most r while the right derivative of ψ_G_i at log |A_i| is at least r (where we assume that the left derivative of ψ_i at 0 is -∞ and its right derivative at log |V(G_i)| is zero). §.§ Acknowledgement We thank Joshua Erde, Mihyun Kang, and Michael Krivelevich for their helpful comments and suggestions. §.§ Organisation In <Ref>, we present the (short) proof of <Ref>, and in <Ref>, we discuss several applications of Theorem <ref> and compare them with known results in the literature. § PROOF OF THEOREM <REF> Our argument builds on the beautiful entropy-based proof of an optimal edge-isoperimetric inequality for the hypercube presented by Boucheron, Lugosi, and Massart in <cit.>. The entropy of a discrete random variable X taking values in a countable set 𝒳 is the quantity H(X) defined by H(X) -∑_x∈𝒳(X = x) log(X = x). In particular, if 𝒳 is finite and X is uniform on 𝒳, then H(X) = log|𝒳|. Further, given random variables X and Y taking values in countable sets 𝒳 and 𝒴, respectively, we define the conditional entropy of X given Y, denoted H(X | Y), to be the average entropy of the random variable X conditioned on the outcome of Y; in other words, H(X | Y) - ∑_y ∈𝒴(Y = y) ∑_x ∈𝒳(X = x | Y = y) log(X = x | Y = y). Let n be a positive integer and suppose that = G_1 □…□ G_n for some arbitrary sequence G_1, …, G_n of finite graphs. Consider an arbitrary nonempty set A ⊆ V() = V(G_1) ×…× V(G_n) and let X = (X_1, …, X_n) be a uniformly chosen random vertex of A. For every v∈ V() and each i ∈n, denote by v_(i) the projection of v along the ith coordinate, that is, v_(i)=(v_1,…, v_i-1, v_i+1, …, v_n). Further, given an x ∈ A, let A_i(x) ⊆ V(G_i) denote the support of X_i conditioned on X_(i) = x_(i). Our first key observation is that e_(A,A^c) = ∑_x ∈ A∑_i=1^n e_G_i(A_i(x), A_i(x)^c)/|A_i(x)|≥∑_x ∈ A∑_i=1^n i_|A_i(x)|(G_i). Denoting by k_i the (random) size of |A_i(X)|, we may rewrite the above inequality as e_(A, A^c) ≥ |A| ·∑_i=1^n [i_k_i(G_i)]. By the definition of ψ_G_i and by Jensen's inequality, we have, for every i ∈n, [i_k_i(G_i)] ≥[ψ_G_i(log k_i)] ≥ψ_G_i([log k_i]). Our second key observation is that [log k_i] is precisely the conditional entropy H(X_i | X_(i)). Substituting the above inequality into (<ref>), we conclude that e_(A, A^c) ≥ |A| ·∑_i=1^n ψ_G_i(H(X_i | X_(i))). The main assertion of the theorem now follows as, for each i ∈n, the function ψ_G_i is decreasing, 0 ≤ H(X_i | X_(i)) ≤ H(X_i) ≤log |V(G_i)| for each i, and ∑_i=1^n H(X_i | X_(i)) ≤ H(X) = log |A|, by Han's inequality <cit.> (see <cit.> for a compact statement). 
Finally, if G_1 = … = G_n = G, then we may use the convexity of ψ_G again to deduce that, for all sequences (h_i)_i=1^n that sum to log |A|, ∑_i=1^n ψ_G(h_i) ≥ n ·ψ_G(∑_i=1^n h_i/n) = n ·ψ_G(log|A|/n). as claimed. § APPLICATIONS §.§ Hamming graphs and the hypercube Let K_m be the complete graph on m vertices, so that K_m^n is the Hamming graph H(n,m). Since i_k(K_m) = m-k ≥ (m-1) · (1 - log_m k) for all k ∈m, where the inequality follows from the convexity of x ↦log x, we have ψ_K_m(x) ≥ (m-1) · (1-x/log m) for all x∈ [0,log m]. Therefore, by <Ref>, for all nonempty A ⊆ V(H(n,m)), we have e_H(n,m)(A,A^c)≥ |A|· (m-1)(n-log_m|A|). Observe that (<ref>) is sharp whenever A induces a copy of H(t,m) for some t ∈n. In this sense, one may view it as a weak version of the edge-isoperimetric inequality for Hamming graphs due to Lindsey <cit.>.[Lindsey's inequality is the stronger statement that each initial interval in the lexicographic ordering of m^n has the smallest edge-boundary among all sets of the same size.] In particular, the case m=2, may be viewed as a weak version of the edge-isoperimetric inequality for the hypercube <cit.>. §.§ The grid Let P_m be the path with m ≥ 3 vertices, so that P_m^n is the n-dimensional m ×…× m grid. Note that i_k(P_m)=1/k for every k ∈m-1 and that i_m(P_m) = 0. For every z ∈ [0, log m), let ℓ_z be the line passing through the points (z, e^-z) and (log m, 0), that is, the line y = e^-z· (log m - x)/(log m - z). Since the points {(log k, 1/k) : k ∈m-1} lie on the graph of the convex function x ↦ e^-x and ℓ_log m -1 has the largest (that is, least negative) slope among all our lines ℓ_z, we may deduce that ψ_P_m(x) ≥ e^-x if 0 ≤ x ≤log m - 1, e/m · (log m - x) if log m - 1 ≤ x ≤log m. In fact, ψ_P_m is the piecewise linear function defined by the points (0,1), …, (log k^*, 1/k^*), and (log m,0), where k^* ∈m-1 is the index k for which ℓ_log k has the largest slope. It is not hard to see that k^*∈{⌊ m/e ⌋, ⌈ m/e⌉}, but whether it is the floor or the ceiling of m/e depends on the value of m. For example, k^*=⌊ 3/e⌋ =1 when m=3, whereas k^*=⌈ 5/e⌉=2 when m=5. With the lower bound (<ref>) in place, we can now use <Ref> to derive edge-isoperimetric inequalities for . When |A|≤ (m/e)^n, we have e_(A,A^c)≥ |A|· n · e^-(log|A|) / n = n · |A|^1-1/n and when (m/e)^n ≤ |A|≤ m^n/2, we have e_(A,A^c)≥ |A|· n ·e/m(log m-(log|A|)/n) = |A|/m· elogm^n/|A|. For comparison, Bollobás and Leader <cit.> showed that, for all A ⊆ V() with |A|≤ m^n/2, e_(A,A^c) ≥|A|/m·min{r ·(m^n/|A|)^1/r : r∈n}. Since the minimum above is achieved at r=n whenever |A| ≤ (m/e)^n, our bound matches that of Bollobás and Leader in this range. In the complementary range (m/e)^n ≤ |A| ≤ m^n/2, the ratio between the two bounds does not exceed min{⌈log x⌉· x^1/⌈log x ⌉/e log x : 2 ≤ x ≤ e^n}≤max{e^y-1/y : log 2 ≤ y ≤ 1} = 2/e log 2≤ 1.062. §.§ The torus Let C_m be the cycle with m vertices, so that C_m^n is the n-dimensional discrete torus with side length m. Since i_k(C_m) = 2i_k(P_m) for all k ∈m, we have ψ_C_m = 2ψ_P_m. Thus, <Ref> and the estimate (<ref>) yield e_(A,A^c)≥ 2n · |A|^1-1/n if |A|≤ (m/e)^n, |A|/m · 2elog(m^n/|A|) if |A| ≥ (m/e)^n. For comparison, Bollobás and Leader <cit.> showed that, for all A ⊆ V() with |A|≤ m^n/2, e_(A,A^c) ≥|A|/m·min{2r (m^n/|A|)^1/r : r∈n}, and hence, as in the case of grid graphs, our bound matches theirs whenever |A| ≤ (m/e)^n and is off by a multiplicative factor of at most 2/(elog 2) in the complementary range. 
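Although the note is purely mathematical, the quantities involved are easy to evaluate numerically for very small graphs. The following Python sketch (not part of the paper; names are illustrative and the brute-force enumeration is only feasible for tiny graphs) computes the isoperimetric profile i_k(G), its convex minorant ψ_G as the lower convex hull of the points (log k, i_k(G)), and the resulting lower bound |A|·n·ψ_G((log|A|)/n) from Theorem <ref> for powers of G.

```python
import itertools
import math
import numpy as np
import networkx as nx

def isoperimetric_profile(G):
    """i_k(G) for k = 1..|V|, by brute force over all k-subsets (tiny graphs only)."""
    nodes = list(G.nodes)
    m = len(nodes)
    return [min(nx.cut_size(G, A) / k for A in itertools.combinations(nodes, k))
            for k in range(1, m + 1)]

def psi_vertices(G):
    """Vertices of the convex minorant psi_G: lower convex hull of (log k, i_k(G))."""
    pts = [(math.log(k), v) for k, v in enumerate(isoperimetric_profile(G), start=1)]
    hull = []  # points are already sorted by x, so a single lower-hull pass suffices
    for p in pts:
        while len(hull) >= 2 and (hull[-1][1] - hull[-2][1]) * (p[0] - hull[-2][0]) >= \
                                 (p[1] - hull[-2][1]) * (hull[-1][0] - hull[-2][0]):
            hull.pop()
        hull.append(p)
    return hull

def theorem_lower_bound(G, n, size_A):
    """|A| * n * psi_G(log|A| / n), the bound of Theorem 1 for a set of size |A| in G^n."""
    xs, ys = zip(*psi_vertices(G))
    return size_A * n * float(np.interp(math.log(size_A) / n, xs, ys))

# e.g. G = nx.path_graph(5); theorem_lower_bound(G, n=3, size_A=8)
```

Running it on `nx.path_graph(5)`, for instance, recovers a single interior breakpoint of ψ_{P_5} at log k^* with k^* = 2 = ⌈5/e⌉, consistent with the discussion above.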
§.§ Products of regular graphs For every i∈n, let G_i be a d_i-regular graph on m_i vertices, let G_1□⋯□ G_n, and note that is also regular of degree d d_1 + … + d_n. Since G_i is d_i-regular, we have i_k(G_i) ≥ d_i - k + 1 = i_k(K_d_i+1) for all k ∈d_i+1. Consequently, ψ_G_i(x)≥ d_i · (1-log_d_i+1x) for all x ∈ [0, log m_i], see (<ref>). Thus, by <Ref>, e_(A,A^c) ≥ |A|·(d-max{∑_i=1^nd_i· h_i/log(d_i+1) : 0≤ h_i≤log m_i∧∑_i=1^nh_i=log|A|}) ≥ |A|·(d-max_i ∈nd_i/log(d_i+1)·log|A|) = |A|·(d- D ·log_D+1|A|), where D max_i ∈n d_i. This substantially improves <cit.>. Assume further that each G_i is connected, so that i_k(G_i) ≥ i_k(P_m_i) for all k ∈m_i. It follows from (<ref>) that ψ_G_i(x) ≥ e/m_i· (log m_i-x) for all x ∈ [0, log m_i]. Therefore, by <Ref>, e_(A,A^c) ≥ |A|·min{∑_i=1^ne/m_i· (log m_i - h_i) : 0≤ h_i≤log m_i∧∑_i=1^n h_i=log|A|} ≥ |A|·min{∑_i=1^n e g_i/m_i : g_i ≥ 0 ∧∑_i=1^n g_i = log|V()|/|A|} = |A| ·e/M·log|V()|/|A|, where M max_i ∈n m_i. When M ≥ 3, this improves the respective lower bound on e_(A, A^c) given by <cit.> by a multiplicative factor of e(1-1/M)log M. §.§ Powers of regular graphs Let G be a connected m-vertex, d-regular graph and let G^n. For every k∈m-1, let ℓ_k be the line passing through (log k, i_k(G)) and (log m, 0), that is, the line y=i_k(G)·(log m-x)/(log m-log k). Let k^* be the smallest index k such that ℓ_k has the least negative slope among all our lines and note that, for all x∈ [0,log m], ψ_G(x)≥ i_k^*(G) ·log m-x/log m-log k^* Let y_G be the y-intercept of ℓ_k^*. Note that y_G ≤ d (as i_1(G) = d) and that y_G=d if and only if k^*=1. Further, observe that y_G = i_k^*(G) ·log m / (log m - log k^*). Hence, by <Ref>, e_(A,A^c)≥ |A|·i_k^*(G)/log m - log k^*·logm^n/|A| = |A|· y_G· (n-log_m|A|). Since (<ref>) holds with equality for all x ∈ [log k^*, log m], inequality (<ref>) is tight for sets A with many different sizes, see the construction described below the statement of <Ref>. We now address two questions posed in <cit.>. First, <cit.> asked whether there are constants c_G, C_G such that i_a()= c_G ·log(m^n/a)+C_G for all a∈m^n. In other words, <cit.> asks whether i_a() is essentially linear in log a. The construction presented below the statement <Ref> shows that the lower bound on i_a() implied by the theorem is sharp whenever log a = (n_1/n) ·log k_1 + (n_2/n) ·log k_2 for some n_1, n_2 satisfying n_1 + n_2 = n and k_1, k_2 ∈m^2 such that [log k_1, log k_2] supports one of the linear pieces of ψ_G. This fact implies that i_a() in not linear in log a whenever ψ_G itself is not linear. Since there are regular graphs G for which ψ_G has more than one linear piece (for example, when G=C_m for m≥ 5), the answer to <cit.> is negative. Further, <cit.> asked for a characterisation of m-vertex d-regular graphs G for which sets of the form B_t {u}^t× V(G)^n-t have the smallest edge-boundary among all m^n-t-element sets of vertices of , for all t ∈n. We note that this is closely related to the classical problem of finding sufficient conditions for a graph to admit a nested sequence of sets that achieve the smallest edge-boundary (among all sets of a given size), see <cit.> and references therein. Since e_(B_t, B_t^c) = |B_t| · t · d = |B_t| ·d/log m·logm^n/|B_t|, it follows from (<ref>) that a sufficient condition is y_G=d. We will show below that, for large enough n, this is also a necessary condition. 
Suppose that G is an m-vertex, d-regular graph with y_G < d, let k^* ∈{2, …, m-1} be the index defined above, and let S ⊆ V(G) be a k^*-element set witnessing e_G(S, S^c) = |S| · i_k^*(G). Fix a small positive ε. By Dirichlet's approximation theorem, there exist positive integers s and t such that | s log m - t log (m/k^*) | ≤ε/2, which implies that (1-ε)m^t ≤ (k^*)^t · m^s ≤ (1+ε)m^t. Consider the graph G^{s+t} and the sets of vertices A := S^t × V(G)^s and B := {u}^s × V(G)^t. Note that, by (<ref>), e_{G^{s+t}}(A, A^c) = |A| · (y_G/log m)· t log(m/k^*) ≤ |A| · (s y_G + ε) ≤ m^t · (1+ε)(sy_G+ε). Let C be a set of size exactly m^t that is obtained by adding to / removing from A at most ε m^t vertices in an arbitrary manner. Since Δ(G^{s+t}) = (s+t)d, we clearly have e_{G^{s+t}}(C, C^c) - e_{G^{s+t}}(A, A^c) ≤ε m^t · (s+t)d ≤ε m^t s d (1 + (log m + ε/2)/log(m/k^*)), where the second inequality follows from (<ref>). Since we assumed that y_G < d, it is clear that choosing ε sufficiently small (as a function of m and d-y_G only) gives e_{G^{s+t}}(C, C^c) < m^t sd = e_{G^{s+t}}(B, B^c). This means that the set B does not have the smallest edge boundary among all sets of m^t vertices of G^{s+t}.
http://arxiv.org/abs/2407.02559v1
20240702180001
A Max-Flow approach to Random Tensor Networks
[ "Khurshed Fitter", "Faedi Loulidi", "Ion Nechita" ]
quant-ph
[ "quant-ph", "hep-th", "math-ph", "math.MP", "math.PR" ]
khurshed.fitter@epfl.ch Ecole Polytechnique Fédérale de Lausanne, Switzerland faedi-loulidi@oist.jp Okinawa Institute of Science and Technology, Okinawa, Japan nechita@irsamc.ups-tlse.fr Laboratoire de Physique Théorique, Université de Toulouse, CNRS, UPS, France § ABSTRACT We study the entanglement entropy of a random tensor network (RTN) using tools from free probability theory. Random tensor networks are simple toy models that help in understanding the entanglement behavior of a boundary region in the ADS/CFT context. One can think of random tensor networks as specific probabilistic models for tensors having some particular geometry dictated by a graph (or network) structure. We first introduce our model of RTN, obtained by contracting maximally entangled states (corresponding to the edges of the graph) on the tensor product of Gaussian tensors (corresponding to the vertices of the graph). We study the entanglement spectrum of the resulting random state along a given bipartition of the local Hilbert spaces. We provide the limiting eigenvalue distribution of the reduced density operator of the RTN state, in the limit of large local dimension. The limit value is described via a maximum flow optimization problem in a new graph corresponding to the geometry of the RTN and the given bipartition. In the case of series-parallel graphs, we provide an explicit formula for the limiting eigenvalue distribution using classical and free multiplicative convolutions. We discuss the physical implications of our results, allowing us to go beyond the semiclassical regime without any cut assumption, specifically in terms of finite corrections to the average entanglement entropy of the RTN. A Max-Flow approach to Random Tensor Networks Ion Nechita July 8, 2024 ============================================= § INTRODUCTION The ADS/CFT correspondence consists in describing a quantum theory (more precisely, a conformal field theory) as lying on the boundary of an anti de Sitter space-time geometry <cit.>. Many features of this correspondence remain mysterious, in particular its link with quantum information theory and entanglement. It was shown in <cit.> that, for a fixed time slice, the entanglement entropy of a given region of the boundary quantum theory is proportional to the area of the minimal bulk hypersurface homologous to the region of interest; this is the Ryu-Takayanagi formula for the entanglement entropy. In the context of ADS/CFT, the Ryu-Takayanagi formula establishes a crucial link between the entanglement behaviour of an intrinsic quantum theory and the bulk gravitational field. These results open a new route to understanding quantum gravity in the ADS/CFT framework from the perspective of entanglement and quantum information theory. We refer to <cit.> and the references therein for a complete introduction. The difficulty of computing the entanglement properties of boundary quantum theories has led to the development of tractable simple models, in particular the tensor network and random tensor network frameworks. The tensor network framework initially emerged as a family of “good” models approximating ground states in condensed matter physics: tensor networks represent ground states of a class of gapped Hamiltonians <cit.>. Moreover, tensor networks have paved the way to understanding various physical properties, such as the classification of topological phases of matter. We refer to <cit.> for an extensive review of the different applications.
Recently, other extensions of tensor networks to random tensor networks, used to study random matrix product states or random projected entangled pair states, were introduced in <cit.>. The random tensor network (or simply RTN) framework was initiated in <cit.> as a family of toy models reproducing the key properties of the entanglement behaviour in the ADS/CFT context <cit.>. Moreover, the random tensor network framework appears in different active areas of condensed matter physics, such as the framework of random quantum circuits and measurements <cit.>. In general, a random tensor network consists of a random quantum state defined from a given fixed graph structure, as we shall describe below. The main problem consists of computing, as D→∞, the average entanglement behaviour of the state associated with a given fixed subregion of the graph, where D plays the role of the dimension of the Hilbert spaces of the model. Different results have been established that further the understanding of the entanglement entropy of RTN models as toy models mimicking the entanglement behaviour in quantum gravity. In particular, it has been shown in the literature that, as D→∞, the entanglement entropy scales as the minimal number of edges that need to be cut to separate the region of interest from the rest of the graph, times log D <cit.>. Moreover, one should mention that several directions have been explored to go beyond the toy model picture of the (random) tensor network <cit.>. In this work, we study a general random tensor network through a maximal flow approach. The maximal flow approach was already explored in <cit.> to compute the entanglement negativity and in <cit.> to derive the Ryu-Takayanagi entanglement entropy in the continuum setting. As described in the previous paragraph, the model consists of a random quantum state defined from a given fixed graph structure. In our model, we shall consider a graph with edges (bulk edges) and half edges (boundary edges); the role of bulk and boundary edges will become clear from the definition of the model. We associate to each half edge a finite dimensional Hilbert space ^D and to each edge a Hilbert space (^D)^⊗ 2. The edge Hilbert spaces generate a local Hilbert space associated to each vertex of the graph. In order to define an RTN, one associates a random quantum state to each component of the graph: we generate a random Gaussian state for each vertex and associate a maximally entangled state to each edge. The random tensor network is then defined by projecting all the maximally entangled states associated with the (bulk) edges of the graph onto the tensor product of the random states generated at the vertices. The obtained random tensor network lies in the full boundary Hilbert space. The main goal of this work is to consider a sub-boundary region A of the graph and to evaluate the entanglement behaviour of the associated reduced state as D→∞. The first step is to compute the moments of the state associated to the region A as D→∞. With the help of the maximal flow approach, which we develop in this work in full detail, we are able to estimate the moments without any cut assumption and to show that they converge to the moments of a graph-dependent measure. We will show that if the obtained partial order is series-parallel, then, with the use of free probability theory, we can explicitly construct the measure associated to the graph, again without any cut assumption.
Moreover, we will show the existence of higher order correction terms of the entanglement entropy, given by a graph-dependent measure which can be described explicitly whenever the partial order is series-parallel. We will show in different examples how one can compute explicitly the measures associated to the initial graph when the obtained partial order is series-parallel. The link between quantum information theory, free probability and random tensor networks was already explored in <cit.>, with the use of a general link state representing the effect of bulk matter fields in the ADS/CFT context, which allows one to go beyond the semiclassical regime with correction terms to the entanglement structure. However, the results obtained in <cit.> assume the existence of two disjoint minimal cuts separating the region A from the rest. In this work, we only work with maximally entangled states on the bulk edges of the model. Without any cut assumption, we still obtain higher order correction terms, which we may interpret as intrinsic fluctuations. In the context of ADS/CFT, these are intrinsic to the quantum spacetime nature of the bulk gravitational field, without any bulk matter field. This work is organised as follows. In Section <ref>, we give a summary of our work by presenting all the main results. In Section <ref>, we introduce our random tensor network framework. In Section <ref>, we give the moment computation of the state ρ_A associated to a given sub-boundary region A of the graph. In Section <ref>, with the help of the maximal flow approach, we compute the asymptotic scaling of the moments and show the convergence to a graph-dependent measure. In Section <ref>, we introduce the notion of series-parallel partial orders and, with the help of free probability, we show explicitly how one can construct the graph-dependent measure using free multiplicative convolutions and classical products of measures. In Section <ref>, we give various examples of random tensor networks and compute explicitly the associated measure in the cases where the obtained partial order is series-parallel. In Section <ref>, we give the main technical results: with the help of concentration inequalities, we show that the higher-order entanglement correction terms, obtained without any cut assumption, are graph-dependent; moreover, the graph-dependent measure can be explicitly constructed if the partial order is series-parallel. § MAIN RESULTS In this section, we introduce the main definitions and results obtained in this work. This work consists of computing the entanglement entropy of a given random tensor network. We consider the most general framework of random tensor networks and study the entanglement structure of the random tensor with respect to a fixed bipartition of the total Hilbert space. By addressing the problem using a network flow approach, we can compute the leading term of the entanglement entropy, plus higher order correction terms which are graph dependent. The higher order correction terms play a crucial role in different areas, particularly in the context of ADS/CFT <cit.>, as we shall comment after we give the main results. One can informally summarize the key results of this work as follows: In the limit of large local Hilbert space dimension D, the average Rényi entanglement entropy of a RTN G, across a given bipartition (A|B), has: * a dominating term of the form maxflow(G_A|B) ·log D * a finite correction term which is graph dependent.
In the case where G_A|B is a series-parallel graph, we can compute the distribution of the entanglement spectrum (and hence the finite entropy correction) as an iterative classical and free convolution of Marc̆henko-Pastur distributions. A random tensor network has a corresponding random quantum state |ψ_G⟩ that encodes the structure of a graph G. For that, we shall introduce the graph G and some terminology. We refer to Section <ref> for more technical details and definitions of the model. Let G=(V, E) be a connected undirected finite graph with (full) edges and half edges; the former encode the internal entanglement structure of the quantum state |ψ_G⟩, while the latter represents the physical systems (Hilbert spaces) on which |ψ_G⟩ lives. We shall denote by E_b and E_∂ the set of edges (bulk edges) and half edges (boundary edges) respectively. Formally the set of edges and half edges are respectively given by E_b:={e_x,y | e_x,y=(x,y): x,y∈ V} and E_∂:={e_x=(x,·): x∈ V} where E:=E_b⊔ E_∂. Then, the corresponding random tensor |ψ_G⟩ is defined as |ψ_G⟩:=⟨⊗_e∈ E_bΩ_e | ⊗_x∈ V g_x ⟩∈(^D)^⊗|E_∂|, where |g_x⟩ are random Gaussian states defined in the local Hilbert space of each vertex x. Moreover, for each (full) edge e∈ E_b, we associate a maximally entangled state |Ω_e⟩∈^D ⊗^D that is used to contract the internal degrees of freedom of the tensor network. For a representation of a random tensor network see Figure <ref>, which we will treat in great detail for an illustration of our different main results of this work. We refer to Definition <ref> for more details. As was mentioned earlier, this work aims to evaluate the entanglement entropy of the random quantum state |ψ_G⟩, along a bi-partition A|B of the boundary edges E_∂ = A ⊔ B. We shall do so in the limit of large local Hilbert space dimension D →∞. To evaluate the entanglement entropy of the pure state |ψ_G⟩, we shall compute its asymptotic entanglement spectrum along the bi-partition A|B, that is the limiting spectrum of the density matrix ρ_A=_B |ψ_G⟩⟨ψ_G|. From this spectral information, we can deduce the average Rényi entanglement and von Neumann entropies for the approximate normalised state ρ̃_A respectively given by: ρ̃_A:=D^-|E_∂|ρ_A→lim_D→∞ S_n(ρ̃_A) with S_n(ρ):=1/1-nlog(ρ^n), lim_D→∞ S(ρ̃_A) with S(ρ):=-(ρlogρ). Above, the expectation is taken with respect to the Gaussian distribution of the independent random tensors |g_x⟩ present at each vertex of the graph. It will be clear from Section <ref> the use of approximate normalised state instead of a “true" normalised state ρ̃_A:=ρ_A/ρ_A. We first compute exactly the moments of the random matrix ρ_A and then we analyze the main contributing terms at large dimensions by relating the problem to a maximum flow question in a related graph. By the use of the maximal flow and tools from free probability theory, we will able to derive the leading and the fluctuating terms of the Rényi entropy and then deduce the behaviour of the von Neumann entanglement entropy. Moment computation We shall first consider the normalised state ρ̃_A:=ρ_A/ρ_A and compute the moments. For the first step, we use the graphical Wick formula from <cit.> to find [ (ρ_A^n)]=∑_α=(α_x)∈𝒮_n^|V| D^n|E_∂|-H_G^(n)(α), ∀ n∈ℕ where H_G^(n)(α) can be understood as the Hamiltonian of a classical “spin system”, where each spin variable takes a value from the permutation group 𝒮_n: H_G^(n)(α):=∑_(x,·) ∈ A|γ^-1_xα_x|+∑_(x,·)∈ B|𝕀^-1_x α_x|+∑_(x,y)∈ E_b|α_x^-1α_y|. 
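Since the Hamiltonian H_G^(n)(α) above is a finite sum of Cayley distances, it can be evaluated, and even minimised by brute force, for small n and small graphs. The sketch below is a minimal Python illustration of this "spin model" picture; the two-vertex toy graph (one bulk edge, one boundary half-edge in A and one in B), the encoding of permutations as tuples, and all function names are our own illustrative choices rather than objects from the text.

from itertools import permutations

def n_cycles(p):
    # number of cycles of a permutation p of {0, ..., n-1}, encoded as a tuple with p[i] the image of i
    seen, count = set(), 0
    for i in range(len(p)):
        if i not in seen:
            count += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = p[j]
    return count

def length(p):
    # Cayley distance to the identity: |p| = n - #cycles(p)
    return len(p) - n_cycles(p)

def compose(a, b):
    return tuple(a[b[i]] for i in range(len(a)))

def inverse(a):
    inv = [0] * len(a)
    for i, x in enumerate(a):
        inv[x] = i
    return tuple(inv)

def hamiltonian(alpha, A_vertices, B_vertices, bulk_edges, n):
    # H_G^(n)(alpha) as in the formula above; alpha maps each vertex to a permutation of {0, ..., n-1}
    gamma = tuple((i - 1) % n for i in range(n))   # the full cycle (n ... 2 1), 0-indexed
    value = sum(length(compose(inverse(gamma), alpha[x])) for x in A_vertices)
    value += sum(length(alpha[x]) for x in B_vertices)   # |id^{-1} alpha_x| = |alpha_x|
    value += sum(length(compose(inverse(alpha[x]), alpha[y])) for (x, y) in bulk_edges)
    return value

# toy graph: vertices 'x' and 'y' joined by one bulk edge; 'x' carries the A half-edge, 'y' the B one
n = 3
perms = list(permutations(range(n)))
minimum = min(hamiltonian({'x': ax, 'y': ay}, ['x'], ['y'], [('x', 'y')], n)
              for ax in perms for ay in perms)
print(minimum)   # 2, i.e. (n-1) times the maximum flow of this toy network (see below)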
Above, we associate to the region B, the identity permutation 𝕀_x ∈𝒮_p (corresponding to taking the partial trace over B), and to the region A the full-cycle permutation γ_x = (n n-1 ⋯ 2 1) (corresponding to the trace of the n-th power of ρ_A). We refer to Proposition <ref> in Section <ref> for a more precise statement and proof. One should also mention that the contribution of the normalisation term of ρ̃_A will be given by: [(ρ_A)^n]=∑_α=(α_x)∈𝒮_n^|V| D^n|E_∂|-h_G^(n)(α), ∀ n∈ℕ where h_G^(n)(α):=∑_(x,·) ∈ E_∂|𝕀^-1_xα_x|+∑_(x,y)∈ E_b|α_x^-1α_y|. Remark above that h_G^(n)(α) is simply H_G^(n)(α) with A=∅. See Proposition <ref> for more details. Note that in the particular case n=2, the authors of <cit.> gave an exact mapping to the partition function of a classical Ising model. Notice the frustrated boundary conditions of the Hamiltonian above: vertices connected to the region A prefer the configuration α_x = γ_x, while vertices connected to the region B prefer the low energy state α_x = 𝕀_x. Maximal flow. The (max)-flow approach will consist of identifying the leading terms from the moment formula above as D→∞. For that, we introduce a network G_A|B, derived from the original graph G, by connecting all the half-edges in A to an extra vertex γ (sink) and all the half-edges in B to 𝕀 (source). In G_A|B, the vertices are valued in the permutation group 𝒮_n and all the half edges are connected either to the source 𝕀 or to the sink γ. The flow approach will consist by looking at the different paths starting from the source 𝕀 to the sink γ. The different paths in the flow approach will induce an ordering structure more precisely a poset structure in the network G_A|B. Intuitively the maximal flow will consist of searching of the maximal number of such paths such that if on take them off the source and the sink will be not anymore connected. More precisely, by Menger's theorem, the maximum flow in this graph is equal to the number of edge-disjoint augmenting paths that start from the source 𝕀 and end in the sink γ. Figure <ref> represents the different paths achieving the maximal flow in the network G_A|B from the original graph G as represented in Figure <ref>. This procedure allows us to find a lower bound to the Hamiltonian H_G_A|B^(n)(α) that can be attained by some choice of the variables α_x. A For all n ≥ 1, we have min_α∈𝒮_n^|V| H_G_A|B^(n)(α)=(n-1)maxflow(G_A|B), where H_G_A|B^(n)(α) is the extended Hamiltonian in the network G_A|B. Once one takes out all the augmenting paths achieving the maximum flow in G_A|B, one is left with a clustered graph G_A|B^c that is obtained by clustering all the remaining connected components (see Figure <ref>). Importantly, it follows from the maximality of the flow that in this clustered graph, the cluster-vertices [𝕀] and [γ] are disjoint. We refer to Proposition <ref> for more details and the proof of the result above. As a direct consequence of the result above, one can deduce the moment convergence as D→∞, we refer to Theorem <ref> for more details of the following result. B In the limit D →∞, we have, for all n ≥ 1, lim_D→∞1/D^F(G_A|B)[((D^F(G_A|B)-|E_∂| ρ_A)^n)]=m_n where m_n are the moments of a probability measure μ_G_A|B and F(G_A|B)=maxflow(G_A|B). Moreover one can show the normalisation term converges to 1 as shown in Corollary <ref>. The previous maximum flow computation gives the first order in the formula for the average entanglement entropy of random tensor network states: 𝔼[S_n(ρ̃_A)] ≈maxflow(G_A|B) ·log D ∀ n ≥ 1. 
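The maximum flow entering the proposition above can be computed with any standard max-flow routine. The following short sketch uses the networkx library on the same two-vertex toy graph as in the previous sketch (the toy network and the vertex labels 'I' and 'gamma' are illustrative choices); its output, combined with the brute-force minimum found above, is consistent with min_α H^(n) = (n-1)·maxflow(G_A|B).

import networkx as nx

# network G_{A|B} for the toy graph: source 'I' attached to the B half-edge of 'y',
# sink 'gamma' attached to the A half-edge of 'x', plus the bulk edge (x, y)
H = nx.DiGraph()
for u, v in [('I', 'y'), ('y', 'x'), ('x', 'gamma')]:
    H.add_edge(u, v, capacity=1)
    H.add_edge(v, u, capacity=1)   # edges of the underlying undirected network

max_flow = nx.maximum_flow_value(H, 'I', 'gamma')
n = 3
print(max_flow, (n - 1) * max_flow)   # 1 and 2; the latter matches the brute-force minimum above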
Free probability theory and entanglement Our main contribution in this work is to show that one can find the second order (or the finite corrections) of the Rényi and von Neumann entanglement entropy by carefully analyzing the set of augmenting paths achieving the maximum flow in the graph G_A|B. Once the different paths achieve the maximal flow in the graph G_A|B, after the clustering operation we obtain an partial order G_A|B^o where the vertices are the different permutation clusters formed from the clustered graph G_A|B^c. See Figure <ref> of the obtained partial order from the original graph G in Figure <ref>. Our results are general, and they become explicit in the setting of the partial order G^o_A|B is series-parallel. With the help of free probability theory, we are able in this setting to deduce the second-order correction terms of each of the Rényi and von Neumann entropy. A graph G is called series-parallel if it can be constructed recursively using the following two operations: * Series concatenation: G=H_1 H_2 is obtained by identifying the sink of H_1 with the source of H_2. * Parallel concatenation: G=H_1 H_2 obtained by identifying the sources and the sinks of H_1 and H_2. To a series-parallel graph G we associate a probability measure μ_G, defined recursively as follows. * To the trivial graph G_triv = ({s,t}, {{s,t}}), we associate the Dirac mass at 1: μ_G_triv := δ_1. * Series concatenation: μ_G H := μ_G ⊠⊠μ_H.[d:=1/2π√(4t^-1-1) dt is the Marc̆henko-Pastur distribution and ⊠ is the free convolution product. We refer to Appendix <ref> for more details.] * Parallel concatenation: μ_G H := μ_G ×μ_H. C In the limit D →∞, the average Rényi entanglement entropy ∀ n ≥ 1 and von Neumann entropy of an approximate normalised state ρ̃_A:=D^-|E_∂|ρ_A behaves respectively as 𝔼[S_n(ρ̃_A)] = maxflow(G_A|B) ·log D - 1/n-1log∫ t^n dμ_G_A|B(t) + o(1) [S(ρ̃_A)] =maxflow(G_A|B^o) ·log D -∫ t log t dμ_G_A|B+o(1). We refer to Corollary <ref> for more details and the proof of the above statements. In particular if the obtained partial order G_A|B^o is series-parallel the measure μ_G_A|B=μ_G_A|B^o can be explicitly constructed, we refer to Theorem <ref> for more details. The use of the approximate normalised state instead of “the” normalised state ρ̃_A:=ρ_A/ρ_A will be justified from the concentration result of ρ_A in Subsection <ref>. It was previously argued in <cit.> if one wants to encode the quantum fluctuations one needs to use instead of a maximally entangled state a general “link state" |ϕ_e⟩ defined by: e∈ E_b→|ϕ_e⟩:=∑_i=1^D√(λ_e,i)|i_x,i_y⟩. It was recently shown in <cit.> that the non-flat spectra of the link state under the existence assumption of two non-disjoint cuts that one obtains the quantum fluctuations beyond the semiclassical regime in ADS/CFT. The use of a generic link state in the context of ADS/CFT represents the bulk matter field contribution. In this work with the maximal flow approach, we were able to show the existence of quantum fluctuations without any minimal cut assumption and with maximally entangled state as link state. The obtained higher order correction terms in our context can be interpreted as the “intrinsic" quantum fluctuations of spacetime geometry without any bulk matter field in the bulk represented by a general link state. 
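The recursive description of the measure attached to a series-parallel graph can also be explored numerically. The sketch below is a Monte Carlo illustration (not the construction used in the proofs): each measure is represented by a vector of eigenvalue samples, the parallel step pairs independent samples (classical multiplicative convolution), and the series step uses the standard random-matrix realisation of the free multiplicative convolution with the Marc̆henko-Pastur distribution (denoted MP in the comments), namely sandwiching by a Wishart matrix. The dimension DIM and all names are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
DIM = 400   # size of the random matrices used in the approximation

def trivial():
    # delta_1, the measure attached to the trivial network
    return np.ones(DIM)

def parallel(u, v):
    # classical multiplicative convolution: product of independently sampled values
    return u * rng.permutation(v)

def series(u, v):
    # approximates mu_u (boxtimes) MP (boxtimes) mu_v via A^{1/2} X* B X A^{1/2}, X a Ginibre matrix
    X = (rng.standard_normal((DIM, DIM)) + 1j * rng.standard_normal((DIM, DIM))) / np.sqrt(2 * DIM)
    M = X.conj().T @ np.diag(v) @ X
    M = np.sqrt(u)[:, None] * M * np.sqrt(u)[None, :]
    return np.clip(np.linalg.eigvalsh((M + M.conj().T) / 2), 0.0, None)

# the simplest series network gives the Marchenko-Pastur law: moments close to 1, 2, 5, 14
mp = series(trivial(), trivial())
print([round(float(np.mean(mp ** k)), 2) for k in range(1, 5)])

# finite entropy correction appearing above, here -int t log t dMP, which is about -0.5
print(round(float(-np.mean(mp * np.log(np.maximum(mp, 1e-12)))), 3))

For a more complicated series-parallel decomposition, such as the example discussed next, one simply composes these three functions following the decomposition of the graph.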
For example, in the case of the graph represented in Figure <ref>, the resulting partial order G_A|B^o is series-parallel (see Figure <ref>) where: G_A|B^o=G_1 G_2 G_3 with μ_G_A|B^o= μ_G_1⊠⊠μ_G_2⊠⊠μ_G_3 = μ_G_1⊠^⊠ 2, as represented in Figure <ref> the graph G_2 and G_3 are trivial hence μ_G_2 = μ_G_3 = δ_1. The graph G_1 can be factored as a parallel composition of two other graphs as represented in Figure <ref>: G_1=G_5 G_4 with μ_G_1 = μ_G_4×μ_G_5. The graph G_4 as represented in Figure <ref> factorises as: G_4 = ( G_6 G_7 ) G_8 with μ_G_4 = ( μ_G_6×μ_G_7) ⊠⊠μ_G_8=( ×) ⊠ where we have used the fact that G_6 and G_7 are series compositions of two trivial graphs, so μ_G_6 = μ_G_7 =, while μ_G_8 = δ_1. Moreover the graph G_5 as represented in Figure <ref> factorises as: G_5 = G_9 G_10( G_11 G_12) G_13 with the associated measure μ_G_5 = μ_G_9⊠⊠μ_G_10⊠⊠( μ_G_11×μ_G_12) ⊠⊠μ_G_13=^⊠ 3⊠( ^⊠ 2×), where we have used iteratively the series composition for G_11 and G_12 with their respective measure given by μ_G_11 = ^⊠ 2 and μ_G_12 =. In the case of random tensor network represented in Figure <ref> the partial order is series-parallel with the associated measure: G_A|B^o=G_1 G_2 G_3, with μ_G_A|B^o={[ ^⊠ 3⊠ (^⊠ 2×) ] ×[ (×) ⊠] }⊠^⊠ 2, which is obtained by combining all the results stated above. If one considers the minimal cuts associated with the network G_A|B (see Figure <ref>) as represented in Figure <ref> where we have considered four ways[We have only represented four cuts for simplicity. Remark in Figure <ref> we have more than four minimal cuts which may share a common edge.] achieving the minimal cuts crossing common edges, therefore intersects. § RANDOM TENSOR NETWORKS In this section, from a given graph with edges (bulk edges) and half edges (boundary edges), we will introduce random tensor network model. For that for each edge and half edge of the graph, we will associate a Hilbert space. The edge Hilbert space will induce a local Hilbert space for each vertex in the graph. We will associate to each of the vertices a random Gaussian state, and to each edge a maximally entangled state. The random tensor network is defined by projecting all the maximally entangled state associated to all edges of the graph over the vertex states given by the tensor product of all the random Gaussian vectors. This section aims to introduce the main definitions of the model and recall the different entanglement notions. In Subsection <ref>, we shall introduce our random tensor network model. In Subsection <ref>, we recall the different entanglement notions and their properties. §.§ Random tensor network In the following, we shall give the construction of the random tensor network model. Let G=(V, E) be a bulk connected undirected finite graph with edges and half edges. We shall denote by E_b and E_∂ the set of edges and half edges respectively. Formally the set of edges and half edges are defined as follows E_b :={e_x,y | e_x,y=(x,y): x,y∈ V}, E_∂ :={e_x=(x,·): x∈ V}, E :=E_b⊔ E_∂. For later discussion, the set of edges E_b and half edges E_∂ we shall call them the set of bulk and boundary edges. The bulk connectivity assumes that all the vertices in the bulk region of the graph are connected; this is the same notion as the “connected network” property from <cit.>. We denote by |E_b|, |E_∂| and |E|=|E_b|+|E_∂| the cardinality of the bulk, boundary and the total edge set. 
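The combinatorial data (V, E_b, E_∂) introduced above is conveniently stored as plain edge lists. The toy encoding below (our illustrative choice, not the graph of Figure <ref>) also checks the bulk-connectivity assumption using the networkx library.

import networkx as nx

# a small graph: bulk edges as vertex pairs, boundary half-edges as (vertex, None)
V = [1, 2, 3]
E_bulk = [(1, 2), (2, 3)]
E_boundary = [(1, None), (3, None)]

bulk = nx.Graph()
bulk.add_nodes_from(V)
bulk.add_edges_from(E_bulk)
print(nx.is_connected(bulk))                                  # the 'bulk connected' assumption
print(len(E_bulk), len(E_boundary), len(E_bulk) + len(E_boundary))   # |E_b|, |E_partial|, |E|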
For each half-edge on a given vertex in the graph, we shall associate a Hilbert space ^D, and for each bulk edge connecting two vertices, we associate ^D⊗^D for finite D known as the bond dimension. We will define a random Gaussian to each vertex of the graph state that lies in the local Hilbert space associated to each vertex. Moreover, on each edge of the graph, we associate a maximally entangled state. The random tensor network is a random quantum state constructed by projecting the total tensor product of the random Gaussian state for each vertex over all the maximally entangled state formed in bulk edges (see Definition <ref>). Formally, for each part of the graph G we shall associate to each part of the graph Hilbert spaces where: * For each half-edge defined on a vertex x, we associate a finite-dimensional Hilbert space ℋ_e_x: e_x∈ E_∂ →ℋ_e_x:=^D E_∂ →ℋ_∂:=⊗_e_x∈ E_∂ℋ_e_x, * For each edges e_x,y∈ E_b we shall associate Hilbert space ℋ_e_x,y: e_x,y∈ E_b→ℋ_e_x,y:=^D⊗^D, where ℋ_e_x,y denote the Hilbert space connecting the two vertices x and y. * For each vertex x∈ V, we define the local vertex Hilbert space ℋ_x where: x∈ V →ℋ_x:=⊗_E ∋ e → xℋ_e V →ℋ_V:=⊗_x∈ Vℋ_x = ⊗_x ∈ V⊗_E ∋ e → xℋ_e, where the Hilbert space ℋ_x represents the local Hilbert space associated with a vertex x defined as all the edges of Hilbert space that contribute locally. Having defined the general Hilbert space structure associated with a generic graph G, in the following, we shall define quantum states in the graph G which will allow us to introduce the random tensor network model. By construction let for each: * Vertex x a random quantum state |g_x⟩∈ℋ_x sampled from an i.i.d Gaussian distribution: x∈ V →|g_x⟩∈ℋ_x V →⊗_x∈ V|g_x⟩∈ℋ_V * Bulk edge e_x,y a maximally entangled state |Ω_e⟩ given by: e_x,y∈ E_b →ℋ_e_x,y e_x,y →|Ω_e⟩:=1/√(D)∑_i=1^D|i_x,i_y⟩, where we have used the notation |i_x⟩ and |i_y⟩ for the state associated to the vertex x sharing an edge with y. A random tensor network |ψ_G⟩ is defined as a projection of the vertex state over all the maximally entangled states |Ω_e⟩ for each e_x,y in E_b where: |ψ_G⟩:=⟨⊗_e∈ E_bΩ_e | ⊗_x∈ V g_x ⟩∈(^D)^⊗|E_∂|. One should mention that the following example will be used in all other parts of this work as an illustration of the different results obtained in each section. As an illustration of a random tensor, see Figure <ref>, where the boundary region of G are all the half edges E_∂:={e_1,e_15,e_7,e_8,e_9,e_10,e_11,e_14}. We shall mention that in Figure <ref>, the region A are the half edges in the vertices {9,10,11,14},i.e A:={e_9,e_10,e_11,e_14}. The complementary region B:=E_∂∖ A are half edges associated to the vertices {1,15,7,8}, where B:={e_1,e_15,e_7,e_8}. We shall also mention that our construction of the random tensor network, the edges, and the half edges generate the vertex Hilbert space ℋ_x. Other types of random tensor network models were already explored in the literature see <cit.> and the reference therein. In the models mentioned previously, at first, they define the bulk and boundary vertices while in our work the focus is on the edges and the half edges which generates the local Hilbert space for each vertex, and the bulk states are given by a maximally entangled state. 
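The contraction defining |ψ_G⟩ is a plain tensor contraction and can be written in a few lines. The sketch below builds |ψ_G⟩ for a toy graph with two vertices, one bulk edge and one boundary half-edge per vertex; the graph, the dimension D and the Gaussian normalisation are illustrative choices, not the example of Figure <ref>.

import numpy as np

rng = np.random.default_rng(1)
D = 4

def gaussian_tensor(shape):
    # i.i.d. complex Gaussian vertex tensor, one index per edge or half-edge incident to the vertex
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

# toy graph: vertices x and y, one bulk edge (x, y), one boundary half-edge at each vertex
g_x = gaussian_tensor((D, D))       # indices: (bulk leg towards y, boundary leg of x)
g_y = gaussian_tensor((D, D))       # indices: (bulk leg towards x, boundary leg of y)
omega = np.eye(D) / np.sqrt(D)      # maximally entangled state on the bulk edge

# |psi_G> = <Omega_e| g_x (tensor) g_y >: the two bulk legs are contracted through omega
psi = np.einsum('ij,ia,jb->ab', omega, g_x, g_y)    # a state in the boundary space (C^D tensor C^D)
rho_A = psi @ psi.conj().T                          # unnormalised reduced state on the half-edge of x
print(psi.shape, round(float(np.trace(rho_A).real), 2))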
The first initial work in the random tensor network was in <cit.> where the aim was to compute the entanglement entropy of subregion of the random tensor network which is proportional as the bond dimension tends to infinity to the minimal cuts of the graph reproducing the famous Ryu-Takayanagi entanglement entropy <cit.> in a discrete version. In a recent work <cit.>, the authors associate a state with a general “link" state connecting two bulk vertices, therefore generalizing the previous models where they allowed the existence of two non-crossing minimal cuts. This result allows the authors to compute higher-order correction terms of the entanglement entropy. The main goal of this work, with the maximal flow approach without any minimal cut assumption, we will be able to derive the higher order correction terms with a maximally entangled state connecting the bulk vertices. §.§ Entanglement In the following, we shall recall different entanglement notions used in quantum information theory in particular von Neumann entropy and Rényi entropy. The von Neumann entropie for a given normalised quantum state ρ defined as S(ρ):=-(ρlogρ). In general, in physical systems with an exponential number of degrees of freedom it is in general difficult to compute it. There exists a generalisation where we do not need to diagonalise the density matrix ρ. This definition is due to Renyi which is known as the Renyi entropy defined as: S_n(ρ):=1/1-nlog(ρ^n), where it is well known that as n→ 1 the Renyi entropy converges to von Neumann entropy. The definitions given above are for normalised quantum states, if the state is not normalised one should normalise it first and then compute the entropy. Now, we mention a bit about a subtlety regarding the upper bound on the rank of the reduced density matrix induced by the minimal cut. A minimal cut consists of finding the minimal number of edges in a graph that need to be removed to fully separate to a given fixed region of the graph. Although it is trivial to see that the rank of the reduced density matrix ρ_A is upper bounded by the local dimension D raised to the number of edges in the set A, that is, (ρ_A) ≤ D^| A |. However, there exists a subtlety. The rank of the reduced density matrix is, in fact, upper bounded by the minimum number of connecting edges or the bottleneck (min-cut) and not the number of edges: (ρ_A)≤ D^F_A, where, F_A is the min-cut or the number of edges in the “bottleneck". Now, we demonstrate this more clearly using an example. Consider a state |ψ_G⟩, which we can use to construct ρ_A as shown below. Now, consider the internal structure of |ψ_G⟩, where we divide the graph into two subgraphs denoted by L and R, connected by the “bottleneck" which is the set of all edges which when removed would disconnect the boundary sets A and B. Now, it is clear that (ρ_A) ≤ D^F_A, where, in this case F_A = 2, and consequently, S(ρ_A) ≤ F_A log D. Having established the natural intuition for the role of the min-cut (F_A) in upper-bounding the entropy, we now move on to establish our (maximal) flow approach for the random tensor network in the following sections. § MOMENT COMPUTATION From a given random tensor network, we want to understand the behaviour of entanglement of a given subregion of the tensor network with the rest. For that we shall adress at first the moment computation of quantum state ρ̃_A for a given subregion A⊆ E_∂. This first computation will allows us in the following sections to analyse the Renyi and the von Neumann entropy. 
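For later numerical checks, the two entropies recalled above can be computed directly from the spectrum of a normalised density matrix. The helper functions below are a minimal sketch; the small eigenvalue cut-off is an implementation choice.

import numpy as np

def renyi_entropy(rho, n):
    # S_n(rho) = log tr(rho^n) / (1 - n), for a normalised density matrix rho and n != 1
    eigs = np.linalg.eigvalsh(rho)
    eigs = eigs[eigs > 1e-12]
    return float(np.log(np.sum(eigs ** n)) / (1 - n))

def von_neumann_entropy(rho):
    # S(rho) = -tr(rho log rho)
    eigs = np.linalg.eigvalsh(rho)
    eigs = eigs[eigs > 1e-12]
    return float(-np.sum(eigs * np.log(eigs)))

# sanity check: for the maximally mixed state of dimension d, both entropies equal log d
d = 8
rho = np.eye(d) / d
print(von_neumann_entropy(rho), renyi_entropy(rho, 2), np.log(d))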
Let A⊆ E_∂ be a sub-boundary region of the graph G. We shall denote by B:=E_∂∖ A the complementary region of A. Let ℋ_A:=⊗_e_x∈ Aℋ_e_x and ℋ_B:=⊗_e_x∈ Bℋ_e_x respectively the Hilbert space associated to the boundary regions A and B. In this work, we will be interested in computing the average entanglement entropy at large bond dimension: ρ̃_A:=ρ_A/ρ_A→lim_D→∞ S_n(ρ̃_A) lim_D→∞ S(ρ̃_A), where ρ̃_A is the normalised quantum state obtained by tracing out the region B, i.e ρ_A=_B|ψ_G⟩⟨ψ_G| where the partial trace over the Hilbert space ℋ_B. In the expression above, the average is over all the random Gaussian states. The first computation that we shall adress here is the moment computation as described in the following proposition. This will allow us later, as analysed in detail in the following sections, to compute the average entanglement entropy (Rényi and von Neumann entropy) as D→∞. The result above has been previously obtained in a very similar setting by Hastings <cit.>. For any A⊆ E_∂, we have [ (ρ_A^n)]=∑_α=(α_x)∈𝒮_n^|V| D^n|E| - n|E_b|-H_G^(n)(α), ∀ n∈ℕ where H_G^(n)(α) can be understood as the Hamiltonian of a classical “spin system”, where each spin variable takes a value from the permutation group 𝒮_n: H_G^(n)(α):=∑_(x,·) ∈ A|γ_x^-1α_x|+∑_(x,·)∈ B|𝕀_x^-1α_x|+∑_(x,y)∈ E_b|α_x^-1α_y|. Before giving the proof of the proposition above, we shall recall some properties of the permutation group 𝒮_n and fix some notations. We denote by γ_x the total cycle in the permutation group 𝒮_n evaluated in (x,·)∈ A ∀ (x,·)∈ A, γ_x=(n… 1). We recall that one can define a notion of distance in 𝒮_n known as the Cayley distance given by 𝒮_n×𝒮_n →^+ d:(α_i,α_j) → d(α_i,α_j):=n-#(α_i^-1α_j), where #(α) stands for the number of cycles in α. The distance d(α_i,α_j) gives the minimum number of transpositions to turn α_i to α_j. In general the distance in 𝒮_n satifies the triangle inequality where: d(α_i,α_j)≤ d(α_i,σ)+d(σ,α_j). In particular, we say that σ is a geodesic between α_i and α_j in 𝒮_n if d(α_i,α_j)=d(α_i,σ)+d(σ,α_j). We shall adopt the following notation for the distance instead of d(·,·) where (α_i,α_j)∈𝒮_n×𝒮_n, d(α_i,α_j)=|α_i^-1α_j|. To prove the result announced in the proposition, one should remark first that we can write the trace on the left-hand side of equation (<ref>) with the well known replica trick as: (ρ_A^n)=(|ψ_G⟩⟨ψ_G|^⊗ nU_γ_A⊗𝕀_B), The trace in the left-hand is on ℋ_A that one rewrite as a full trace one n copy of the full Hilbert space, bulk and boundary Hilbert space, in the right-hand side of the equation above. Remark that we have used the notation U_γ_A=⊗_(x,·)∈ AU_γ_x the tensor product of unitary representation of the permutation γ_x=(n… 1)∈𝒮_n for each half edges (x,·)∈ A. By expanding and taking the average over random Gaussian states one obtains: (ρ_A^n) =([|ψ_G⟩⟨ψ_G|^⊗ n] U_γ_A⊗𝕀_B) =(⊗_e∈ E_b|Ω_e⟩⟨Ω_e|^⊗ n[⊗_x∈ V|g_x⟩⟨g_x|^⊗ n] U_γ_A⊗𝕀_B) =(⊗_e∈ E_b|Ω_e⟩⟨Ω_e|^⊗ n⊗_x∈ V[|g_x⟩⟨g_x|^⊗ n] U_γ_A), where in the last equation above, we have used the shorthand notation U_γ_A instead of U_γ_A⊗𝕀_B. We recall the following property of random Gaussian states see <cit.>: ∀ x∈ V, [|g_x⟩⟨g_x|^⊗ n]=∑_{α_x}∈𝒮_nU_α_x, with U_α_x the unitary representation of α_x∈𝒮_n. Each permutation α_x∈𝒮_n acts on each vertex Hilbert, hence implicitly on each edges associated to each vertex x∈ V. 
Therefore, the moments' formula becomes: (ρ_A^n) =∑_{α_x}∈𝒮_n[⊗_e∈ E_b|Ω_e⟩⟨Ω_e|^⊗ n⊗_x∈ VU_α_x U_γ_A] =D^-n|E_b|∑_{α_x}∈𝒮_n∏_(x,·)∈ AD^#(γ_x^-1α_x) ∏_(x,·)∈ BD^#(𝕀_x^-1α_x)∏_(x,y)∈ E_bD^#(α_x^-1α_y), where the formula above counts the number of loops obtained by contracting the maximally entangled states (edges) when one takes the trace. The factor of D^-n|E_b| appears due to the consequence of contracting the bulk edges, where each bulk edge contracted with itself, contributes a factor of D^-1. By using the relation between the Cayley distance and the number of loops, we obtain the result in the statement of the proposition. Graphically, one can understand the formula using Figure <ref> where we consider the case for n=3. Upon utilizing the graphical integration technique for Wick integrals as presented in <cit.>. We obtain loops and, consequently, Cayley distances of three kinds, (a) between 𝕀_x and elements directly connected to it, from the region B, (b) between γ_x and elements directly connected to it, from the region A and (c) elements neither directly connected to 𝕀 nor γ, from the bulk. Following this, we can rewrite the Hamiltonian in terms of Cayley distances as: H_G^(n)(α):=∑_(x,·) ∈ A|γ^-1_xα_x|+∑_(x,·)∈ B|𝕀^-1_x α_x|+∑_(x,y)∈ E_b|α_x^-1α_y|, where (x,·)∈ A represents half-edges in A, (x,·)∈ B, represents half-edges in B and (x,y)∈ E_b represents edges in the bulk of the tensor network. In the proposition above, we have addressed only the numerator term of the normalised quantum state ρ̃_A. However if one wants to compute the von Neumann and Rényi entropy (see equations (<ref>) and (<ref>)), one should normalise the state and compute the moment. The following proposition gives the moment computation of the normalisation term in ρ̃_A. For any A⊆ E_∂, we have [(ρ_A)^n]=∑_α=(α_x)∈𝒮_n^|V| D^n|E| - n|E_b|-h_G^(n)(α), ∀ n∈ℕ where the Hamiltonian h_G^(n)(α) is given by: h_G^(n)(α):=∑_(x,·) ∈ E_∂|𝕀_x^-1α_x|+∑_(x,y)∈ E_b|α_x^-1α_y|. The proof of this Proposition is a direct consequence of Proposition <ref> when one takes A=∅, hence we obtain h_G^(n)(α) in the particular case when A=∅ in H_G^(n)(α). § ASYMPTOTIC BEHAVIOUR OF MOMENTS This section will consist of describing the leading contributing terms as D→∞ of the moment by using the (maximal)-flow approach. We will first introduce the (maximal)-flow approach wich will allows us to estimate the leading terms of the moments as D→∞ we refer to Proposition <ref> for more details. This result will allow us to deduce the convergence of the moment as D→∞ to moments of a graph dependent measure μ_G_A|B we refer to Theorem <ref> for more details. We recall first the obtained results from the previous section. In Proposition <ref> we have shown that the moments are given by: (ρ_A^n)=∑_α=(α_x)∈𝒮^|V|_nD^n|E| - n|E_b| -H_G^(n)(α) where the spin valued Hamiltonian in the permutation group 𝒮_n is given by: H_G^(n)(α):=∑_(x,·) ∈ A|γ_x^-1α_x|+∑_(x,·)∈ B|𝕀_x^-1α_x|+∑_(x,y)∈ E_b|α_x^-1α_y|. In particular, the contribution of the normalisation term in ρ̃_A (see equation (<ref>)) is the extended Hamiltonian h_G^(n)(α) as shown in Proposition <ref> when one takes A=∅ in H_G^(n)(α). The main goal of this section, will consist on analysing the main contributed terms of the moment as D→∞. The leading terms will consist on solving the minimisation problem: min_α∈𝒮_n^|V|H_G^(n)(α). Particularly as a consequence, we will minimize h_G^(n)(α) which will give us the leading contributed term as D→∞ of the normalisation term of ρ̃_A. 
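In the simplest instance of the moment formula, a single vertex carrying one half-edge in A and one in B and no bulk edges, the proposition reduces to E[tr(ρ_A^n)] = ∑_{α∈𝒮_n} D^{#(γ^{-1}α)+#(α)}, with ρ_A = GG^† for a D×D matrix G of i.i.d. standard complex Gaussian entries. The sketch below compares this exact permutation sum with a Monte Carlo estimate; the parameters D, n and the sample size are illustrative assumptions.

import numpy as np
from itertools import permutations

def n_cycles(p):
    # same cycle-counting helper as in the earlier sketch
    seen, count = set(), 0
    for i in range(len(p)):
        if i not in seen:
            count += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = p[j]
    return count

D, n, samples = 3, 3, 20000
gamma_inv = tuple((i + 1) % n for i in range(n))     # inverse of the full cycle gamma

exact = sum(D ** (n_cycles(tuple(gamma_inv[a[i]] for i in range(n))) + n_cycles(a))
            for a in permutations(range(n)))

rng = np.random.default_rng(3)
acc = 0.0
for _ in range(samples):
    G = (rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))) / np.sqrt(2)
    W = G @ G.conj().T                                # rho_A for the single-vertex network
    acc += float(np.trace(np.linalg.matrix_power(W, n)).real)
print(exact, round(acc / samples, 1))                 # the two values agree up to sampling error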
The minimisation problem addressed above, will allow us to deduce the moment convergence as D→∞ to the moment of graph dependent measure μ_G_A|B in Theorem <ref>. The minimisation problem above will be addressed with the (maximal)-flow approach. This approach will consist first by constructing from the original graph G a network G_A|B. This network is constructed by adding first two extra vertices γ and 𝕀 to G in such a way that all the half edges associated to A are connected to the total cycle γ, and half edges in B are connected to 𝕀. The network G_A|B has the same bulk structure of G, with the difference that all the vertices in G_A|B are valued in the permutation group 𝒮_n. The flow approach will consist on searching of different augmenting paths in the network G_A|B that will start from 𝕀 and ends to γ. This different paths will induce an order structure in G_A|B. By taking off all the augmenting paths in G_A|B, we can find a lower bound of H_G_A|B^(n)(α) the extended Hamiltonian in the network G_A|B, we refer to Proposition <ref> for more details. Moreover, we will show that the minimum will be attained when the maximal flow starting from 𝕀 to γ is achieved, see Proposition <ref>. In particular we will show that the minimum of the extended Hamiltonian h_G_A|B^(n)(α) is zero, see Proposition <ref> for more details. Before we start with our flow approach, one should mention that the contributed terms of the moments at large dimension were analysed with the (minimal) cut approach in <cit.>. The authors assumed the existence of two disjoint minimal cut in the graph separating the region of interest and the rest of the graph that will contribute in large bond dimension. With the maximal flow approach, that we will introduce, we do not assume any (minimal) cut assumption. By identifying different augmenting paths achieving the maximal flow and uses the famous maximal-flow minimal-cut theorem (see e.g. <cit.>) one can deduce the different minimal cuts without any assumption. Let the network G_A|B=(Ṽ,Ẽ) defined from the initial graph G=(V,E) such that: Ṽ:=V⊔{𝕀,γ} and Ẽ:= E_Ã⊔ E_b ⊔ E_B̃ where the region E_Ã and E_B̃ are defined as: E_Ã :=_x∈ V_A(x,γ) E_B̃ :=_x∈ V_B(𝕀,x), where V_A and V_B denotes respectively all the vertices associated to the boundary region A and B. Moreover the vertices are valued in the permutation group 𝒮_n where: ∀ x∈Ṽ→α_x∈𝒮_n. Remark in the definition given above, the graph G_A|B is constructed in such a way all the half edges (x,·)∈ A are connected to γ=(n⋯ 1)∈𝒮_n and the half edges (x,·)∈ B are connected to 𝕀. Note also that in G_A|B there is no half edges, the bulk region in the network G_A|B remains the same as the one in the graph G. Let first consider the extended Hamiltonian H_G_A|B^(n)(α) of H_ G^(n)(α) in the network G_A|B given by: H_G_A|B^(n)(α):=∑_x ∈ V_A|γ^-1α_x|+∑_x∈ V_B|𝕀^-1α_x|+∑_(x,y)∈ V_b|α_x^-1α_y|, where each term in the new Hamiltonian is valued in the network G_A|B. Moreover, the sums in the above formula are over the vertices V_A, V_B and V_b are the vertices with the respective half edges in the region A, B and E_b. As was mentioned earlier, the flow approach will consist on analysing different paths that start from 𝕀 and ends in γ. This will induce a natural orientation of the network G_A|B, more precisely a poset structure. In the following, we will define the set of different paths in G_A|B and the edges' disjoint paths. 
Let 𝒫(G_A|B) be the set of all possible paths from the source to the sink in G_A|B, where the source and the sink in our case are the 𝕀 and γ respectively. Formally, the set of paths 𝒫(G_A|B) is defined as: 𝒫(G_A|B):={π_i: π_i: 𝕀→γ}, where {π_i}_i are all the paths connecting the 𝕀 to γ. Let 𝒫̃(G_A|B) the set of all disjoint paths in 𝒫(G_A|B), 𝒫̃(G_A|B):={π_i∈𝒫(G_A|B): {π_i}_i are edges disjoint } It is clear from the definition that 𝒫̃(G_A|B)⊆𝒫(G_A|B). Searching for different paths that starts from the 𝕀 and ends to γ will induce an ordering, more precisely a poset structure in the network G_A|B. First, we shall give in the following definition of a poset structure that will allow us later to use it in our maximal flow approach to minimize H_G_A|B^(n)(α). The poset structure 𝒫_o(G_A|B) is a homogeneous relation denoted by ≤ satisfying the following conditions: * Reflexivity: α_x≤α_x. * Antisymmetry: α_x≤α_y and α_y≤α_x implies α_x=α_y. * Transitivity: α_x≤α_y and α_y≤α_z implies α_x≤α_z. for all α_x,α_y,α_z∈Ṽ. Define the natural ordering as: 𝕀≤α_1≤α_2≤⋯≤α_n≤γ, for a path π_i∈𝒫(G_A|B) given by π_i:𝕀→α_1→α_2→⋯→α_n→γ. Another notion useful in our (maximal) flow analysis, is the permutation cluster. We define a permutation cluster of a given permutation α_x as all the edge-connected permutations to α_x. A permutation cluster [α_x] is defined as all the edge-connected permutations to α_x ∈𝒮_n. With the poset structure in G_A|B, we have a naturally induced ordering in the cluster structures for each connected permutations to the permutation elemnents {α_i}_i∈{x,y,z} where all the properties of the above definition can be extended to the cluster [α_i] of a given permutation α_i. More precisely the following holds: α_x≤α_y [α_x]≤ [α_y]. α_x=α_y [α_x]=[α_y]. α_x≤α_y≤α_z [α_x]≤ [α_y]≤ [α_z]. The maxflow in G_A|B is the maximum of all the edges disjoint paths in 𝒫̃(G_A|B): maxflow(G_A|B):=max{| 𝒫̃(G_A|B)| : s.t. the paths in 𝒫̃(G_A|B) are edge-disjoint}. In the following proposition, we will give a lower bound of the extended Hamiltonian H_G_A|B(α) which will be saturated when the maximal flow in G_A|B is achieved as shown in Proposition <ref>. Hastings uses similar ideas in <cit.> to lower bound the moments of a random tensor network map. Let α∈𝒮_n^|V| and 𝒫̃(G_A|B) be an arbitrary set of edge-disjoint paths in G_A|B, and set k:=|𝒫̃(G_A|B)|, the following inequalities holds: H_G_A|B^(n)(α)≥ k(n-1)+H_G_A|B∖_i∈[k]π_i^(n)(α)≥ k(n-1), where H_G_A|B∖_i∈[k]π_i^(n)(α) defined as: H_G_A|B∖_i∈[k]π_i^(n)(α):=∑_x∈ V_A∖_i∈[k]π_i|γ^-1α_x|+∑_x∈ V_B∖_i∈[k]π_i|𝕀^-1α_x|+∑_x∼ y∈ V_b∖_i∈[k]π_i|α_y^-1α_x|. One should mention that in the proposition above the sums are over β∖_i∈[k]π_i for β∈{V_A,V_B,V_b} which are the set of the different boundary and bulk regions when one takes off all the different edges and vertices that will contribute in different paths π_i∈𝒫̃(G_A|B) in G_A|B. Let the set of edge disjoint paths {π_i}_i∈[k]∈𝒫̃(G_A|B). Fix a path π_i for a given i∈[k] where: π_i:𝕀→α_x_1→α_x_2→⋯→α_x_n→γ, is a path that starts from 𝕀 and explores {x_i}_i∈[n] vertices and ends in γ. By using equation (<ref>), and using the path defined above one obtains: H_G_A|B^(n)(α)=|α_x_1|+∑_i=1^n-1|α_x_i^-1α_x_i+1|+|α_x_n^-1γ|+H_G_A|B∖π_i^(n)(α)≥ n-1+ H_G_A|B∖π_i^(n)(α), where we have used the triangle inequality of the Cayley distance and |γ|=n-1. The Hamiltonian H_G_A|B∖π_i^(n)(α) is the contribution when the path π_i from G_A|B is used. 
By iteration over all the edges disjoint paths {π_i}_i∈[k]∈𝒫̃(G_A|B) one obtains the desired result. The second inequality is obtained by observing that H_G_A|B∖π_i^(n)(α)≥ 0, ending the proof of the proposition. Given a graph G, there exist a tuple of permutations α such that H_G_A|B^(n)(α) = maxflow(G_A|B). By the celebrated max-flow min-cut theorem, the maximum flow in the network is equal to its minimal cut. Recall that a cut of a network is a partition of its set of vertices into two subsets S ∋ s and T ∋ t, and the size of the cut is the number of S-T edges. In our setting, the max-flow min-cut theorem (see e.g. <cit.>) implies that there exists a partition of the vertex set Ṽ of G_A|B (see <ref> into two subsets, Ṽ = S ⊔ T, with 𝕀∈ S and γ∈ T, such that maxflow(G_A|B) = | { (x,y) ∈Ẽ : x ∈ S and y ∈ T} |. Define, for x ∈ V, α_x = 𝕀 if x ∈ S γ if x ∈ T. Since 𝕀∈ S and γ∈ T, we have: H_G_A|B^(n)(α) = ∑_x ∈ V_A|γ^-1α_x|+∑_x∈ V_B|𝕀^-1α_x|+∑_(x,y)∈ V_b|α_x^-1α_y| = ∑_x∈ V_A x ∈ S|γ^-1α_x|+∑_x∈ V_B x ∈ T|𝕀^-1α_x|+∑_(x,y)∈ V_b x ∈ S and y ∈ T| α_x^-1α_y| = (n-1) [ |{ (x,·) ∈ A : x ∈ S}| + |{ (x,·) ∈ B : x ∈ T}| + |{ (x,y) ∈ E_b : x ∈ S and y ∈ T}| ] = (n-1)maxflow(G_A|B), where we have used in the last claim the fact that there are no edges between 𝕀 and γ in Ẽ, see <ref>. For all n ≥ 1, we have min_α∈𝒮_n^|V| H_G_A|B^(n)(α)=(n-1)maxflow(G_A|B). This follows from the two previous propositions. Once we identify and remove all the augmenting paths in the network G_A|B achieving the maximal flow, we obtain a clustered graph G_A|B^c by identifying different remaining connected permutations. The following example gives an illustration of the different steps described above to analyse the maxflow problem in the case of the tensor network represented in Figure <ref>. Figure <ref> represents the network G_A|B associated with the random tensor network from Figure <ref>. The vertices in the network are valued in the permutation group 𝒮_n. The network is constructed by adding two extra vertices γ and 𝕀 by connecting all the half edges in A to γ and the half edges in B to 𝕀. The flow approach induces a flow from 𝕀 to γ where the maximum flow in Figure <ref> is 4 where the augmenting paths achieving it are colored. By removing the four edge-disjoint augmenting paths we obtain the clustered graph G_A|B^c in Figure <ref> by identifying the remaining connected edges as a single permutation cluster, i.e 𝕀 with α_15 to form the cluster [𝕀,15]. In the limit D →∞, we have, for all n ≥ 1, lim_D→∞1/D^F(G_A|B)[((D^F(G_A|B)-|E_∂| ρ_A)^n)]=m_n, where m_n is the number of permutations achieving the minimum of the network Hamiltonian G_A|B^(n). These numbers are the moments of a probability measure μ_G_A|B which is compactly supported on [0, +∞). For fixed n, the convergence to m_n, the number of minimizers of the Hamiltonian G_A|B^(n), follows from <ref> and <ref>. The claim that the numbers (m_n)_n are the moments of a compactly supported probability measure follows basically from Prokhorov's theorem <cit.> (see also <cit.>). Indeed, note that, at fixed D, the quantity 1/D^F(G_A|B)[((D^F(G_A|B)-|E_∂| ρ_A)^n)] is the n-th moment of the empirical eigenvalue distribution of the random matrix D^F(G_A|B)-|E_∂| ρ_A, restricted to a subspace of dimension D^F(G_A|B) containing its support (this follows from the fact that D^F(G_A|B) is an upper bound on the rank of ρ_A, see <ref>). These measures have finite second moment, so the sequence (index by D) is tight. 
The limiting moments satisfy Carleman's condition since m_n ≤Cat_n^|V|, proving that the limit measure μ_G_A|B has compact support; recall that Cat_n ≤ 4^n is the n-th Catalan number, see <ref>. Since the matrix ρ_A is positive semidefinite, μ_G_A|B must be supported on [0,+∞). The obtained moments are given by a graph dependent measure. We will show in the following sections that such measures can be explicitly constructed if the partial order G_A|B^o is series-parallel (see Section <ref> and Theorem <ref> for more details). In all that we have described above, the contribution terms at large bond dimension D→∞ of (ρ_A^n) are the ones that minimise H_G_A|B^(n)(α). As we have shown in Proposition <ref> H_G_A|B^(n)(α) is minimized when the maximal flow is attained in G_A|B. For later purposes, if one wants to analyse the moment of ρ̃_A, one should also consider the contribution of the normalisation term of ρ̃_A at large bond dimension. We recall from Proposition <ref> the contribution of the normalisation term is given by: [(ρ_A)^n]=∑_α=(α_x)∈𝒮_n^|V| D^n|E| - n|E_b|-h_G^(n)(α), ∀ n∈ℕ where h_G^(n)(α):=∑_(x,·) ∈ E_∂|𝕀^-1_xα_x|+∑_(x,y)∈ E_b|α_x^-1α_y|. At large dimension D→∞, the contributed terms are given by the one that will minimize the extended Hamiltonian h_G_A|B^(n)(α) in G_A|B: h_G_A|B^(n)(α):=∑_x ∈ V_∂|𝕀^-1α_x|+∑_(x,y)∈ V_b|α_x^-1α_y|, where the first some is over all the vertices V_∂ with boundary edges, and V_b are the bulk vertices. Let h_G_A|B^(n)(α) the extended Hamiltonian in G_A|B. For all n≥ 1, we have: min_α∈𝒮_n^|Ṽ|h_G_A|B^(n)(α)=0, achieved by identifying all the permutations with 𝕀. To minimize the Hamiltonian h_G_A|B^(n)(α) we shall follow the same recipe where we connect all half edges A to γ and half edges B to 𝕀. However in h_G_A|B^(n)(α), all the boundary terms will be connected to 𝕀, hence no path starts from 𝕀 that ends in γ. By the bulk connectivity of G, the minimum is achieved by identifying all the permutations to 𝕀, therefore by Proposition <ref> we obtain the desired result. In Proposition <ref>, the Hamiltonian h_G_A|B^(n)(α) is obtained by tacking A=∅ in H_G^(n)(α) (see equation (<ref>)). One should mention if B=E_∂∖ A=∅ we will have the same form of the Hamiltonian h_G_A|B^(n)(α) where instead of all the half edges connected to 𝕀 they will be all connected to γ. Therefore one deduce that there is no paths that starts from 𝕀 and ends to γ, hence the minimum is 0 achieved by identifying all the permutations with γ. For any A⊆ E_∂ moments of the normalisation term converges to 1, more precisely: lim_D→∞((D^-|E_∂|ρ_A))^n=1. By tacking the average as was shown in Proposition <ref> one obtains the Hamiltonian h_G^(n)(α). By the maximal flow the Hamiltonian is minimised by identifying all the permutations to the 𝕀, therefore F(G_A|B)=0 as was shown in Proposition <ref>. Therefore by removing all the augmenting paths achieving the maximal flow the obtained residual graph is trivial with only two disjoint vertices γ and the identity cluster [𝕀]. Hence by Theorem <ref> one obtains the desired result. § MOMENT FOR ORDERED SERIES-PARALLEL NETWORK In this section, we will introduce the notion of a series-parallel graph. This notion will allow us to compute the moment as an explicit graph-dependent measure explicitly. More precisely we will show with the help of free probability in the case of the obtained partial order G_A|B^o is series-parallel the obtained graph-dependent measure is explicitly constructed. 
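The appearance of Catalan numbers in the bound m_n ≤ Cat_n^|V| reflects the fact that, for a single cluster, the permutations α satisfying the geodesic condition |α| + |α^{-1}γ| = |γ| = n-1 are in bijection with non-crossing partitions of [n]. The sketch below verifies this count by brute force for small n; the encoding of permutations is the same illustrative one used in the earlier sketches.

from itertools import permutations
from math import comb

def n_cycles(p):
    seen, count = set(), 0
    for i in range(len(p)):
        if i not in seen:
            count += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = p[j]
    return count

def length(p):
    # Cayley distance to the identity
    return len(p) - n_cycles(p)

for n in range(2, 7):
    gamma = tuple((i - 1) % n for i in range(n))          # the full cycle
    count = 0
    for a in permutations(range(n)):
        a_inv = [0] * n
        for i, x in enumerate(a):
            a_inv[x] = i
        comp = tuple(a_inv[gamma[i]] for i in range(n))   # alpha^{-1} gamma
        if length(a) + length(comp) == n - 1:             # alpha lies on the geodesic from id to gamma
            count += 1
    print(n, count, comb(2 * n, n) // (n + 1))            # the count equals the Catalan number Cat_n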
In Subsection <ref> we will introduce the notion of the series-parallel graph and the associated measures. In Subsection <ref> we will show in the case of a series-parallel partial order G_A|B^o the moments converge to moments of a graph-dependent measure. §.§ Series-Parallel graph In this subsection, we introduce the notion of the series-parallel partial orders which will allow us in the following subsection to explicitly compute the moments as graph dependent measures. We shall start first by recalling first the notion of series-parallel partial order <cit.> and giving some crucial definitions that will play an important role in all the rest of this section. Given two partial orders (P_i, ≤_i), i=1,2, one defines their series, resp. parallel, composition as follows. The base set is P:= P_1 ⊔ P_2 and the order relation is: x ≤ y if * x,y ∈ P_i and x ≤_i y or x ∈ P_1 and y ∈ P_2 in the series case; * x,y ∈ P_i and x ≤_i y in the parallel case. It is more convenient for us to represent partial orders by their covering graphs, where to a partial order (P, ≤) we associate an oriented graph G(V,E), with V=P and x → y ∈ E iff x <y and ∄ z s.t. x <z<y. We recall that we write x < y to denote x ≤ y and x ≠ y. The series and parallel composition for partial orders have an elegant interpretation in terms of directed graphs (or networks in this case). In what follows, we shall interchangeably use the terms partial order or partial order graph. <cit.> Let H_1 and H_2 two directed graph with there respective source s_i and sink t_i for i∈{1,2}. A series-parallel network is a directed graph G=(V,E) containing two distinct vertices s ≠ t ∈ V, called the source and the sink that can be obtained recursively from the trivial network G_triv = ({s,t}, {{s,t}}) using the following two operations: * Series concatenation: G=H_1 H_2 is obtained by identifying the sink of H_1 with the source of H_2, i.e t_1=s_2. * Parallel concatenation: G=H_1 H_2 obtained by identifying the source and the sink of H_1 and H_2, i.e. s_1=s_2 and t_1=t_2. Note that the parallel concatenation is a commutative operation, while the series concatenation is not, in general, commutative: ∀ G, H G H = H G in general G H ≠ H G. We shall associate from a given series-parallel network different probability distributions constructed from the paralllel and the series concatenation introduced in Definition <ref>. To a series-parallel network G we associate a probability measure μ_G, defined recursively as follows: * To the trivial network G_triv = ({s,t}, {{s,t}}), we associate the Dirac mass at 1: μ_G_triv := δ_1 * Series concatenation corresponds to the free multiplicative convolution of the parts, along with the measure : μ_G H := μ_G ⊠⊠μ_H * Parallel concatenation corresponds to the classical multiplicative convolution of the parts: μ_G H := μ_G ×μ_H. In the definition above we have used the free product convolution ⊠ and the Marc̆henko-Pastur distribution . We refer to the Appendix <ref> for a self-contained introduction to free probability theory. §.§ Moment as graph dependent measure In this subsection, with the help of the series-parallel notion introduced in the previous subsection, we will show the moments m_n in Theorem <ref> are explicitly constructed from a graph-dependent measure in the case of the obtained partial order G_A|B^o is series-parallel. Before we give the results of this subsection we recall first the different results obtained from the previous sections. 
From a given random tensor network as represented for an example in Figure <ref>, we have computed in Section <ref> the moment for a normalised quantum state to a given subregion A⊆ E_∂ of the graph (see Propositions <ref> and <ref>). We approached the evaluation of the moment as D→∞ by the maximal flow approach as analysed in Section <ref>. We have constructed from the graph G the network G_A|B by connecting each of the regions A and B respectively to γ and 𝕀. The flow consists of analysing the different paths starting from 𝕀 and ending in γ. By taking off all the different augmenting paths achieving the maximal flow a clustered graph G_A|B^c remains by identifying different edge-connected permutations. As represented in Figure <ref> for the clustered graph associated with the network G_A|B in Figure <ref>. With the maximal flow, we were able in Proposition <ref> which allows us to show the convergence of moments given by a graph dependent measure μ_G_A|B as shown in Theorem <ref>. Moreover from Proposition <ref> one deduce in Corollary <ref> that the normalisation terms converge to 1. From the clustered graph G_A|B^c, we will construct an partial order G_A|B^o where the vertices in G_A|B^o are the different permutation clusters. See Figure <ref> for the obtained partial order G_A|B^o to the network G_A|B in Figure <ref>. If the partial order G_A|B^o is series-parallel (see Definition <ref>), then we will explicitly show, in the following subsections, that we have a convergence in moments of ρ̃_A to an explicit partial order measure μ_G_A|B^o. The following theorem shows the convergence to a moment-dependent measure μ_G_A|B^o in case of obtained partial order G_A|B^o is series-parallel. For any A⊆ E_∂, and assuming the partial order G_A|B^o is series-parallel, then the limit measure from <ref> can be explicitly constructed from the partial order: μ_G_A|B = μ_G^o_A|B. In particular, the moments of the reduced tensor network matrix are given by: lim_D→∞1/D^F(G_A|B)((D^F(G_A|B)-|E_∂|ρ_A)^n) =∫ t^n dμ_G_A|B^o. All we need to show is that the numbers m_n,G_A|B^o are the moments of the probability measure μ_G_A|B^o. We shall prove this using the recursive structure of the series-parallel networks (see Definition <ref>) and that of the probability measure μ_G_A|B^o (see Definition <ref>). If the partial order G_A|B^o is trivial, it consists only of two connected components, that of the identity (the source) [𝕀] and that of the sink, [γ]. Hence, all the permutations associated to the connected components are fixed to be either 𝕀 or γ. We have thus m_n, G_A|B^o = 1 for all n ≥ 1, which are the moments of the measure μ_G_A|B^o = δ_1. This shows that the claim holds for the initial case of a trivial network. If the partial order G_A|B^o is the parallel concatenation of two networks G_A|B^o = H_1 H_2 having the same source and sink as G_A|B^o, the geodesic equalities for G_A|B^o are the disjoint union of the geodesic equalities for the vertices in H_1 and those for the vertices of H_2. This implies in turn that, for all n ≥ 1, m_n, G_A|B^o = m_n,H_1· m_n,H_2, since there is no geodesic inequality mixing vertices from H_1 with vertices in H_2. Hence, by the induction hypothesis, we have m_n, G_A|B^o = ∫ t^n dμ_H_1·∫ t^n dμ_H_2 = ∫ t^n d[μ_H_1μ_H_2] = ∫ t^n dμ_G_A|B^o, proving the claim for the parallel concatenation of networks. Finally, let us consider the case where the network is the series concatenation of two networks G_A|B^o = H_1 H_2. 
This means that there is a connected component, call it [β] which is common of the two networks, being the sink of H_1 and the source of H_2. All geodesic equality conditions for the H_1 are of the form 𝕀→α^(1)_1 →⋯→α^(1)_k_1→β, while those of H_2 are of the form β→α^(2)_1 →⋯→α^(2)_k_2→γ. In particular, the geodesic equality conditions for G_A|B^o = H_1 H_2 are of the form 𝕀→α^(1)_1 →⋯→α^(1)_k_1→β→α^(2)_1 →⋯→α^(2)_k_2→γ. The variable β is a non-constrained non-crossing partition of [n], and summing over it corresponds to taking the free multiplicative convolution with respect to the Marc̆henko-Pastur distribution: m_n,G_A|B^o = ∑_β∈(n) α^(1)_i ≤β≤α^(2)_j 1 = ∫ t^n dμ_H_1⊠⊠μ_H_2 = ∫ t^n dμ_H_1 H_2 = ∫ t^n dμ_G_A|B^o, proving the final claim and concluding the proof. As an example, let us consider the graph illustrated in <ref>. As we have described in the previous sections, the dominant terms of moments in Proposition <ref> are obtained by analyzing the maximum flow in G_A|B, given in <ref> where maxflow(G_A|B)=4. The partial order G_A|B^o, obtained by removing from G_A|B the edges that participate in the maximum flow is depicted in <ref>. Using the 4 augmenting paths (displayed in colors in <ref>), we construct the partial order on the connected components of the partial order, that we depict in <ref>. This process is fundamental in our approach, we give the details for one of these geodesics next. For example, consider the augmenting path 𝕀→ 1 → 2 → 3 → 4 → 13 → 6 → 10 →γ depicted in red in <ref>. Since in the clustered graph from <ref> the respective pair of points (𝕀, 15), and 14, γ are in the same connected components (clusters), this augmenting path gives rise to the following list of partial order relations: [𝕀, 15] ≼ [1] ≼ [2] ≼ [3] ≼ [4] ≼ [13] ≼ [6] ≼ [10] ≼ [14,γ]. The other three augmenting paths, depicted respectively in blue, green, and orange in <ref>, give rise to the following list of inequalities: [𝕀, 15] ≼ [1] ≼ [2] ≼ [5,12] ≼ [6] ≼ [13] ≼ [10] ≼ [11] ≼ [14,γ] [𝕀, 15] ≼ [7] ≼ [9,17] ≼ [10] ≼ [11] ≼ [14,γ] [𝕀, 15] ≼ [8,16] ≼ [9,17] ≼ [14,γ]. The partial order depicted in <ref> is compiled from the set of inequalities coming from the (fixed) list of augmenting paths yielding the maximum flow (here 4). Note that, importantly, some connected components (clusters) can be identified in this partial order, due to the anti-symmetry property x ≼ y and y ≼ x x=y; this happened in this example for the clusters [6] and [13]. As an application of Theorem <ref>, one can give explicit moments of the measure μ_G_A|B^o,A: lim_D→∞ D^-4[(D^-6ρ_A)^n]=m_n,G_A|B^o=∫ x^n dμ_G_A|B^o. The powers of D in the normalization follow from |E_∂| = 10 (see the boundary edges in <ref>) and from maxflow(G_A|B) = 4. The resulting probability measure μ_G_A|B^o associated to the partial order from <ref> is given by: μ_G_A|B^o = {[ ^⊠ 3⊠ (^⊠ 2×) ] ×[ (×) ⊠] }⊠^⊠ 2. The measure given above is obtained by the iterative procedure from <ref> as follows. First, observe that the graph in <ref> can be decomposed as a series composition of three graphs G_1 G_2 G3: hence, using μ_G_2 = μ_G_3 = δ_1, we have μ_G_A|B^o = μ_G_1⊠⊠μ_G_2⊠⊠μ_G_3 = μ_G_1⊠^⊠ 2. Observe now that G_1 is a parallel composition of two other graphs hence μ_G_1 = μ_G_4×μ_G_5. Let us now analyze separately G_4 and G_5. Firstly, G_4 can be decomposed as a series composition between the parallel composition of G_6 and G_7, and G_8: that is G_4 = ( G_6 G_7 ) G_8 μ_G_4 = ( μ_G_6×μ_G_7) ⊠⊠μ_G_8. 
Now, G_6 and G_7 are series compositions of two trivial graphs, so μ_G_6 = μ_G_7 =, while μ_G_8 = δ_1. We have thus μ_G_4 = ( ×) ⊠. Let us now turn to G_5, which can be decomposed as follows: that is G_5 = G_9 G_10( G_11 G_12) G_13. In terms of the associated probability measures, we have μ_G_5 = μ_G_9⊠⊠μ_G_10⊠⊠( μ_G_11×μ_G_12) ⊠⊠μ_G_13. Using iteratively series compositions, we have μ_G_11 = ^⊠ 2 and μ_G_12 = . We obtain μ_G_5 = ^⊠ 3⊠( ^⊠ 2×). Putting all these ingredients together, we obtained the announced formula for μ_G_A|B^o. In the example of the tensor network represented in Figure <ref> we were able to compute the moments from the factorised series-parallel thought the flow approach. One should mention if one take the minimal cut approach to the problem, there exist minimal cuts in the network represented in Figure <ref> do intersect, see <ref>. Therefore we can compute the correction terms of the entropy as the moment of a given measure without any minimal cut assumption considered in previous work. The obtained measure μ_G_A|B^o for a given ordered series-parallel graph G_A|B^o has a compact support where it combines the Marc̆henko-Pastur distribution with classical product measure and free product convolution constructed from the structure of G_A|B^o. § EXAMPLES OF SERIES-PARALLEL NETWORKS In this section we apply the results obtained previously for various random tensor networks having an induced series-parallel order. We start from simple cases and work our way towards more physically relevant cases. §.§ Single vertex network We start with the simplest possible case: a tensor network having only one vertex, no bulk edges, and two boundary half-edges, see <ref>. For this network, the associated random tensor Ψ_G ∈ℂ^D ⊗ℂ^D has i.i.d. standard complex Gaussian entries. The two boundary half-edges are partitioned into two one-element sets A and B = A̅. From this tensor, we construct the reduced matrix ρ_A := _B |Ψ_G⟩⟨Ψ_G| obtained by partial tracing the half-edge B. Note that in this very simple case, the matrix ρ_A can also be seen as a product of the matricization of the tensor Ψ_G with its hermitian adjoint, hence ρ_A is a Wishart random matrix (see <ref> for the definition and basic properties of Wishart matrices). In order to analyze the large D spectral properties of ρ_A, we first construct the network G_A|B, obtained by connecting all the half-edges in G that belong to A to a new vertex γ and those in B to a new vertex 𝕀. The flow analysis of this network is trivial: there is a unique path from 𝕀 to γ, hence the maximum flow is 1 and the residual network is empty (both edges in the network have been used for the construction of the unique maximum flow). Since there is a unique path achieving maximum flow and a single vertex in the network, the partial order induced by the path is very simple: 𝕀 - α_1 - γ. Hence, the only condition on the permutation α_1 ∈𝒮_n is that it should lie on the geodesic between the identity permutation 𝕀 and the full cycle permutation γ∈𝒮_n. We have thus a series network, see <ref> bottom right diagram. The limit moment distribution is , the Marc̆henko-Pastur distribution (of parameter 1). This matches previously obtained results about the induced measure of mixed quantum states (density matrices) <cit.>. Indeed, the matrix ρ_A can be interpreted in quantum information theory as the partial trace of the rank-one matrix in the direction of a random Gaussian vector Ψ_G ∈^d ⊗^d. 
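As a quick numerical check of this single-vertex statement, the following sketch (assuming NumPy; the dimension D and the seed are arbitrary choices) samples the Gaussian tensor, forms the rescaled reduced matrix, and compares its empirical spectral moments with the Catalan numbers FC_n,1, i.e., the moments of the Marc̆henko-Pastur distribution of parameter 1.

```python
import numpy as np
from math import comb

D = 2000
rng = np.random.default_rng(0)

# Matricization of the random tensor Psi_G of the single-vertex network:
# a D x D matrix with i.i.d. standard complex Gaussian entries.
G = (rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))) / np.sqrt(2)

# Reduced matrix rho_A = Tr_B |Psi><Psi| = G G^dagger, rescaled by 1/D so that
# its empirical spectral distribution has a non-trivial limit (Wishart matrix).
sigma_A = (G @ G.conj().T) / D
eigs = np.linalg.eigvalsh(sigma_A)

# Moments of the limiting Marchenko-Pastur law of parameter 1 are the Catalan numbers.
for n in range(1, 5):
    catalan = comb(2 * n, n) // (n + 1)
    print(n, round(float(np.mean(eigs ** n)), 3), catalan)
```

At D = 2000 the first few empirical moments typically agree with 1, 2, 5, 14 to within a few percent.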
Up to normalization, this random density matrix belongs to the ensemble of induced density matrices. The fact that the two factors of the tensor product have equal dimensions corresponds to taking the uniform measure on the (convex, compact) set of density matrices <cit.>. The statistics of the eigenvalues of such random matrices have been extensively studied in the literature. In particular, the asymptotic von Neumann entropy has been studied by Page <cit.>, who conjectured that S(ρ_A)= ∑_i=D+1^2D1/i - D-1/2D∼log D - 1/2 as D →∞. We refer to <ref> for a derivation of such statistics in the context of our work. §.§ Series network Let us now consider a tensor network consisting of s vertices arranged in a path graph, with two half-edges at the end points. We depict this network, as well as the various steps needed to compute the limiting spectrum distribution of the reduced matrix. The network associated to the graph (where the partition of the half-edges is clear) has a single path from the source to the sink, so the maximum flow is unity. The residual graph, obtained by removing the edges from the unique path achieving maximum flow, is empty. Hence, the partial order on the vertices is again a total order: 𝕀≼α_1 ≼⋯≼α_s ≼γ. We have thus a series network, and the final measure can be obtained by applying s times the series concatenation procedure from <ref> to obtain μ_G_A|B^o = ⊠⊠⋯⊠_s times = ^⊠ s. Let us note that very similar results were previously obtained by Cécilia Lancien <cit.>, see also <cit.>. This measure is commonly know as the Fuss-Catalan distribution of order s <cit.>, see also <ref>. Its moments are known in combinatorics as the Fuss-Catalan numbers: ∫ t^n d^⊠ s(t) = 1/sn + 1sn + nn and its entropy is <cit.> ∫ - t log t d^⊠ s(t) = ∑_i=2^s+11/i. Such tensor network states have already been considered in quantum information theory <cit.> §.§ 2D lattice We now discuss a physically relevant network: a rectangle that is part of a 2D lattice (part of ℤ^2). We have thus two integer parameters, the length L and the height H of the rectangle, and H · L vertices. The vertices are connected by the edges inherited from the ℤ^2 lattice, see <ref>. The left-most (resp. right-most) columns of vertices have half-edges that belong to the class B (resp. A) of the half-edge partition defining the two regions. The flow network corresponding to the graph and the partition A|B is depicted in <ref>, top diagram. The maximum flow in this network is H: one can consider H parallel horizontal paths which go from 𝕀 to γ. Note that the set of H edge-disjoint paths in the network achieving the maximum flow is unique. The residual network is non-empty in this case, with H clusters of the form 𝒞_j := {[i,j] : i =1, …, L}. The order relation on the clusters is again a total order on L points, see <ref>, bottom diagram. We are thus recovering again the Fuss-Catalan distribution: μ_G_A|B^o = ^⊠ L. § RESULTS FOR NORMALIZED TENSOR NETWORK STATES In this section, we will give our main technical contribution. With the help of all the results obtained from the previous sections, we will be able in this section to compute the Rényi and von Neumann entropy for a given approximated normalised state ρ̃_A:=D^-|E_∂|ρ_A associated to a given boundary subregion A⊆ E_∂. The main results of this section consist first on showing the weak convergence of moments associated to an approximated reduced state ρ̃_A associated with a given boundary region A in Theorem <ref>. 
Moreover we will show in Corollary <ref> the existence of correction terms as moments of a graph-dependent measure which can be explicitly computed in the case of an obtained series-parallel partial order G_A|B^o. In Subsection <ref> we will show different concentration inequalities, which will allows us in Subsection <ref> to give the main results of this section. §.§ Concentration In this subsection, we will give different concentration results that will allows us in the following subsection to give our main technical contribution. First, we recall the following theorem that estimates the deviation probability of polynomials in Gaussian random variables. This theorem will be relevant for different concentration results that we will proof in the rest of this subsection. Let g be a polynomial in m variables of degree q. Then, if G_1,⋯,G_m are independent centered Gaussian variables, ∀ t>0, ℙ(|g(G_1,⋯,G_m)- g|>t(Var(g))^1/2)≤exp(-c_q t^2/q), where V(g) is the variance of g(G_1,⋯,G_m) and c_q is a constant which depends only on q. Let G a bulk connected graph and let A⊆ E_∂ then: ℙ(|ρ̃_A-1|>ϵ)≤exp(-c_|E|ϵ^1/|E|D^|E_b|/2|E|), where ρ̃_A:=D^-|E_∂|ρ_A. First remark that ρ_A is a 2|E| polynomial in |g_x⟩∈ℋ_x, moreover we recall that for random Gaussian vector |g_x⟩∈ℋ_x one have: ∀ x∈ V, [|g_x⟩⟨g_x|]=𝕀_x and [|g_x⟩⟨g_x|^⊗ 2]=𝕀_x+F_x, where the 𝕀_x and F_x acts in all the edges of Hilbert space generating the local Hilbert space for each vertex x. Moreover, it is implicitly assumed that 𝕀_x≡𝕀_x^⊗ 2 and the Swap operator F_x is a unitary representation of permutation element in 𝒮_2. It is easy to check the variance Var(ρ̃_A) gives: Var(ρ̃_A)=[(ρ̃_A)^2]-([(ρ̃_A)])^2=O(D^-|E_b|), where we have used that: [(ρ̃_A)^2] =(⊗_e∈ E_b|Ω_e⟩⟨Ω_e|^⊗ 2⊗_x∈ V[|g_x⟩⟨g_x|^⊗ 2]) =1+D^|E_∂|∏_e∈ E_b(|Ω_e⟩⟨Ω_e|^⊗ 2 F_e)∏_e∈ E_∂(F_e) =1+O(D^-|E_b|), where in the last equality the bulk contribution is of D^-|E_b| while the boundary edges contribute with D^|E_∂|. The second term of the variance is : [(ρ̃_A)]=(⊗_e∈ E_b|Ω_e⟩⟨Ω_e|⊗_x∈ V[|g_x⟩⟨g_x|])=1. By combining the variance Var((ρ̃_A)) with Proposition <ref> one have: ℙ(|ρ̃_A-ρ̃_A|>ϵ)≤exp(-c_|E|ϵ^1/|E|D^|E_b|/2|E|), where we have defined ϵ:=t(D^-|E_b|/2) with c_|E|> 0 is a constant depending only in the total number of edges |E|. Let G a bulk connected graph and let A⊆ E_∂ we have: ∀ n>1, ℙ(|1/D^F(G_A|B)(σ_A^n)-1/D^F(G_A|B)[(σ_A^n)]|>ϵ)≤exp(-c_2n|E|D^1/2n|E|ϵ^1/n|E|), where σ_A:=D^F(G_A|B)ρ̃_A. The proof of this proposition follows the same proof spirit of the proposition above. Remark that σ_A^n is a 2n|E| polynomial in |g_x⟩. Moreover the variance was estimated in <cit.> where: Var(1/D^F(G_A|B)(σ_A^n))=O(1/D). By defining ϵ:=tD^-1/2 we obtain the desired result. §.§ Entanglement entropy In this subsection we will introduce the main technical contribution of this work. With the help of concentration results, we will first assume and work with the approximate normalised state ρ̃_A:=D^-|E_∂|ρ_A. We will show that as D→∞ one can compute the average Rényi and von Neumann entanglement entropy with correction terms. In particular if the obtained partial order is series-parallel, the correction terms will be given as moment of an partial order dependent measure μ_G_A|B^o. We recall first from Subsection <ref> that the rank of the approximate normalised state is upper bounded by D^F(G_A|B). 
Let consider the restricted approximate normalised quantum state ρ̃_A to its support and its empirical measure μ^(D)_A defined as: σ_A:=D^F(G_A|B)ρ̃_A^S, and μ_A^(D):=1/D^F(G_A|B)∑_λ∈spec(σ_A)δ_λ, where ρ̃_A^S is the reduced approximate normalised state restricted on its support. The definition of ρ̃_A and the empirical measure μ_A^(D) will allow us to show in Theorem <ref> the weak convergence of μ_A^(D) to μ_G_A|B. In particular if the obtained partial order G_A|B^o is series-parallel from Theorem <ref> one will have weak convergence to μ_G_A|B. This result will allow us in Corollary <ref> to compute the Rényi and von Neumann entanglement entropy. Recall first, that a measure μ^(D) converges weakly to a measure μ if for any continuous function f:→ we have: ∀ϵ >0, lim_D→∞ℙ(|∫ f(t)dμ^(D)(t)- ∫ f(t)dμ(t)|≤ϵ)=1. Let boundary region A⊆ E_∂ in the graph G. The empirical measure μ_A^(D) associated to the approximated normalised state σ_A converges weakly to μ_G_A|B. More precisely for all continuous function f:→ we have: ∀ϵ >0, lim_D→∞ℙ(|∫ f(t)dμ^(D)_A(t)- ∫ f(t)dμ_G_A|B(t)|≤ϵ)=1. As was shown in Theorem <ref> the moment converges to a unique measure μ_G_A|B. In the particular case of an ordered series-parallel graph G_A|B^o we have an explicit graph dependent measure μ_G_A|B^o. Recall from Theorem <ref> that: 1/D^F(G_A|B)[(σ_A^n)]m_n=∫ t^n dμ_G_A|B(t). From standard probability theory results the convergence in probability implies weak convergence (see <cit.>. For that one needs only to show the decreasing scaling of the variance as D→∞. By using <cit.> that: Var(1/D^F(G_A|B)(σ_A^n))=O(1/D), (D→∞), hence the weak convergence of μ_A^(D) to μ_G_A|B, in particular if the graph is series-parallel we have μ_G_A|B^o. Let boundary region A⊆ E_∂ and let m_n^(D) the moment associated to the empirical measure μ_A^(D) one have: ℙ(|log(m_n^(D))-log( m_n^(D))|>ϵ)1 where m_n^(D):=1/D^F(G_A|B)[(σ_A^n)]. By Proposition <ref> and Jensen's inequality that log(m_n^(D))≤log( m_n^(D)). All what remains to show that log(m_n^(D))≥log( m_n^(D)) holds with high probability. Fix ϵ>0. From Proposition <ref> we know that m_n^(D)≥ m_n^(D)-δ with 0<δ≤ϵ/ϵ+1 m_n^(D), holds with probability 1-exp(-c_2n|E|D^1/2n|E|δ^1/n|E|). It is easy to check that the following inequalities hold: log(m_n^(D))≥log( m_n^(D)-δ) =log( m_n^(D))+log(1-δ/ m_n^(D)) ≥log( m_n^(D))-δ/ m_n^(D)-δ≥log( m_n^(D))-ϵ. Therefore we have that log(m_n^(D))≥log( m_n^(D))-ϵ occurs with probability at least 1-exp(-c_2n|E|D^1/2n|E|δ_max^1/n|E|) where δ_max=ϵ/ϵ+1 m_n^(D). As D→∞, m_n^(D) converges, hence δ_max=O(1), showing that the probability estimate above converges to 1 and finishing the proof. We recall for completeness the following proposition from <cit.> which will play a key role for the proof of our main result. <cit.> Let f be a continuous function on with polynomial growth and ν_n a sequence of probability measures which converges in moments to a compactly supported measure ν. Then ∫ fdν_n→∫ f dν. Let boundary region A⊆ E_∂ in G, and let ρ̃_A the approximated reduced normalised state. Then the averaged Rényi and von Neumann entropy converges weakly as D→∞ are given by: F(G_A|B) log D - S_n(ρ̃_A) 1/n-1log(∫ t^n dμ_G_A|B), F(G_A|B) log D- S(ρ̃_A) ∫ t log t dμ_G_A|B. where F(G_A|B):=maxflow(G_A|B) . The poof of this corollary is a direct consequence of different obtained concentration results from the previous subsection and the weak convergence of μ^(D)_A to μ_G_A|B. 
First, we shall start with the Rényi entropy, for that let consider: F(G_A|B) log D - S_n(ρ̃_A)=1/1-nlog(m_n,A^(D)), where m_n^(D):=1/D^F(G_A|B)[((σ_A)^n)], and recall that σ_A:=D^F(G_A|B)ρ̃_A^S restricted on the support of ρ̃_A:=D^-|E_∂|ρ_A. By using Lemma <ref> and in the limit D→∞ we have: F(G_A|B) log D - S_n(ρ̃_A)1/n-1log(∫ t^n dμ_G_A|B). For the von Neumann entropy let consider {λ_i}∈spec(σ_A) and {λ̃_i}∈spec(ρ̃_A), it is direct that: S(ρ̃_A)=-∑_iλ̃_ilog(λ̃_i)=F(G_A|B)log(D)-1/D^F(G_A|B)∑_iλ_ilog(λ_i). Define the function f:→ as f(t):=tlog t, by combining Proposition <ref> and Theorem <ref> we have the following weak convergence as D→∞ F(G_A|B)log D- S(ρ̂_A)=1/D^F(G_A|B) (∑_i f(λ_i))∫ f(t) dμ_G_A|B, where the measure μ_G_A|B is defined on a compact support, ending the proof of the corollary. In the particular case if the obtained poset structure G_A|B^o is series parallel the obtained graph dependent measure is explicitly given μ_G_A|B=μ_G_A|B^o by Theorem <ref>. § CONCLUSION From a given graph general graph with boundary region and bulk region, the main goal of this work is to compute the entanglement entropy, the Rényi and the von Neumann entropy, of a given sub-boundary region A of the graph. By analysing as D→∞ the moments of a state associated to the region A, with the help of the (maximal) flow approach we computed the leading terms contribution to the moment. By analysing and removing all the augmenting paths starting from 𝕀 and ending in γ of the network G_A|B constructed by connecting the region A to the total cycle γ and 𝕀 to the region B one obtains a cluster graph G_A|B^c by identifying all the remaining edges connected permutations. The flow approach induces a natural ordering poset structure represented by the induced poset order G_A|B^o. The maximal flow approach allows us to deduce the moment convergence to the moment of a unique graph-dependent measure μ_G_A|B. This result allows us to deduce the higher order correction terms of the Rényi and von Neumann entropy given by a graph-dependent measure μ_G_A|B. Moreover, we have shown if the obtained partial order G_A|B^o is series-parallel, and with the hep of free probability theory we can explicitly give the associated graph-dependent measure μ_G_A|B=μ_G_A|B^o that will contribute to the higher order correction terms of each of the Rényi and von Neumann entanglement entropy. In this work, we did not assume any assumption on the minimal cuts, in the maximal flow approach by duality one can obtain different minimal cuts which may intersect in different edges. Moreover, the higher-order correction terms in the entanglement entropy can describe the quantum corrections beyond the area law behaviour of the expected Ryu-Takayanagi entanglement entropy in the context of ADS/CFT. It was previously argued in the literature that if one wants to consider higher-order correction terms in the random tensor network setting one needs to go beyond the maximally entangled state and consider general link states representing the bulk matter field. In this work the obtained higher-order quantum fluctuation of entanglement entropy with only maximally entangled states that we interpret as fluctuations of spacetime itself without any need of bulk fields represented by a generic link state. Acknowledgments. We would like to thank Cécilia Lancien for sharing with us preliminary notes on very similar questions. 
The authors were supported by the ANR projects https://esquisses.math.cnrs.fr/ESQuisses, grant number ANR-20-CE47-0014-01, and https://www.math.univ-toulouse.fr/ gcebron/STARS.phpSTARS, grant number ANR-20-CE40-0008, as well as by the PHC program Star (Applications of random matrix theory and abstract harmonic analysis to quantum information theory). K.F. acknowledges support from a https://nanox-toulouse.fr/NanoX project grant. alpha § BASICS OF THE COMBINATORIAL APPROACH TO FREE PROBABILITY THEORY In this section, we will recall and give the necessary material on combinatorics and free probability needed to understand the rest of this section. All the material that we shall introduce is standard and can be found in <cit.>. Let π:={V_1,⋯,V_n}[Do not confuse with π_i introduced in Section <ref> representing the different paths.] be a partition of a finite totally ordered set S such that _i∈[n]V_i=S. We call {V_i} the blocks of π. We denote by p∼_πq if p and q belongs to the same block of π. A partition π of a set S is called crossing if there exists p_1<q_1<p_2<q_2 in S such that p_1∼_πp_2_πq_1∼ q_2. We called a non-crossing partition if π is not crossing. We note by (S) the non-crossing partition set of S. In particular if S={1,⋯ n}, we denote the non-crossing partition by (n). The set of non-crossing partition plays a crucial in different areas from combinatorics <cit.> to random matrices and free probability theory which will be our main focus. Moreover one should mention a crucial result <cit.>: there exists a one-to-one correspondence of the non-crossing partition set and the set of permutations α in a geodesic between γ and 𝕀 i.e |α|+|α^-1γ|=|γ|. Another important fact, the cardinality |(n)|=Cat_n where: Cat_n:=1/n+12nn, are the Catalan numbers. For more combinatorial details and properties of the Catalan numbers and the non-crossing partitions see <cit.>. Assume (α_1,⋯,α_k) k tuples of permutations in 𝒮_n such that |α_1|+∑_i∈[k-1]|α_i^-1α_i+1|+|α_k^-1γ|=|γ|. are geodesics between 𝕀 and γ. The cardinality of the set of the k tuple permutations satisfying the geodesic equation (<ref>) known as the Fuss-Catalan numbers given by: FC_n,k:=1/nk+1n+nkn, generalizing the Catalan numbers for k=1. Now we are ready to introduce the free probability theory tools that will be used in this work. Moreover, one should mention the intrinsic link between free probability theory and combinatorics where we will give some examples to illustrate it. The combinatorics will allow us in the rest of this section to understand our main result. We recall that a non-commutative probability space is a pair (𝒜,ω) of a unital C^*-algebra 𝒜 with a state state ω:𝒜→ such that ω(1_𝒜)=1. One says that the elements a∈𝒜 define a noncommutative variable. In the non-commutative probability space, one can associate the distribution law μ_a to a∈𝒜 which is defined as μ_a=ω(a). Before we give some concrete examples of some non-commutative probability spaces, we shall recall the notion of freeness that plays a crucial role in the non-commutative probability world. The notion of freeness generalizes the “classical" independence when the algebra 𝒜 is commutative. We say that for a given n non-commutative random variables {a_i}∈𝒜 are free independent if for any polynomials {p_i} the following holds: ω(a_1a_2⋯ a_n)=0 whenever ω(p_k(a_i_k))=0 for k∈[n] and two no adjacent indices i_k and i_k+1. 
One can check that with the definition of free independence one has for given two free independent variables a_1 and a_2: ω((a_1-ω(a_1))(a_2-ω(a_2))=ω(a_1a_2)-ω(a_1)ω(a_2)=0, hence, generalizing the notion of standard independence in the commutative setting where 𝔼(a_1a_2)=𝔼(a_1)𝔼(a_2) for two commutative random variables a_1,a_2 in a commutative probability space. Let (𝒜_N,ω_N) with N∈ℕ and (𝒜,ω) non-commutative probability spaces. We say that a_N∈𝒜_N converges weakly to a∈𝒜 as N→∞ if the following holds: lim_N→∞ω_N((a_N)^n)=ω(a^n) ∀ n∈ℕ, where ω(a^n)=∫ x^n dμ_a(x) are the moments of a. To illustrate concrete non-commutative probability spaces, we give some classical examples. The first example we shall deal with is “classical" probability space corresponding to commutative algebra. For that let (Ω,Σ,μ) where Ω a set, Σ a σ-algebra, and μ probability measure. Define 𝒜:=L^∞-(Ω,μ) where: L^∞-(Ω,μ):=⋂_1≤ k<∞L^k(Ω,μ), and the state ω as: ω(a):=∫_Ωa(x)dμ(x), a∈𝒜. The tuple (𝒜,ω) defines a commutative probability space. Another standard example that can be considered is the random matrices case. Let us consider the algebra 𝒜 consisting of valued k× k matrices over L^∞-(Ω,μ) where 𝒜:=ℳ_k(L^∞-(Ω,μ)). Define the state ω on 𝒜 as: ω(a):=∫_Ω tr(a(x))dμ(x), a∈𝒜, where tr(·) is the normalized trace. The space (𝒜,ω) define a non-commutative probability space which the space of random matrices over (Ω,Σ,μ). We recall for a given non-commutative random variable a∈𝒜, the nth moments of a are given by m_n(μ_a):=∫ t^n dμ_a(t). Moreover for a given random variables {a_1,⋯,a_n} in 𝒜, the moments are given by ω(a_1⋯ a_n):=∑_π∈(n)κ_π(a_1,⋯,a_n), where κ_π are the free cummulants. The equation given above is known as the moments-cummulants formula, where the free independence can be characterized by the vanishing of mixed cumulants (see <cit.>). In free probability theory, for two free independent random variables a_1,a_2∈𝒜, one can define a “convolution operation". Mostly in this work, we only shall deal with the free multiplicative convolution. Let a_1 and a_2, two free independent random variables in 𝒜 with their respective distribution μ_a_1 and μ_a_2. A free multiplicative convolution or simply a free product is defined by μ_a_1a_2:=μ_a_1⊠μ_a_2, where μ_a_1a_2 represents the distribution of a_1 a_2. There exists a standard and analytical way to compute the free product, like for the free additive convolution, which can be done via the S-transform. The S-transform of a probability distribution μ_a is defined as: S_μ_a(z):=∫1/x-zdμ_a(x), which is analogous to the R-transform for the free additive convolution, as we shall describe. Moreover, it can also computed equivalently by the formal inverse of the moment-generating formal power series given by: S_μ_a(z)=1-z/zM_a^-1(z), where M_a^-1(z) is the formal inverse of the moment-generating formal power series given by M_μ_a(z):=∑_k=1^∞ m_k,a z^k. With the help of the S-transform, one can compute the free product where: S_μ_a_1a_2(z)=S_μ_a_1(z) S_μ_a_2(z)=S_μ_a_1⊠μ_a_2(z). In the following, we shall recall some standard distributions that are well-known in the literature and will be highly used in this work. The first distribution we shall consider here is the semicircular law, it is one of the most important distributions we encounter in free probability theory. The semicircular distribution μ_SC(x) is defined by the density: dμ_SC(x):=√(4-x^2)/2π 1_x∈[-2,2]dx. 
For an illustration, and by standard computation, one can compute the S-transform of the semi-circular distribution: S_μ_SC(z)=-z+√(z^2-4)/2. The first link that can be made, is the moments of the semicircular distribution are intrinsically related to the Catalan numbers. One can easily check that the following equality holds: ∫ x^kdμ_SC(x)=Cat_k, where Cat_k are the Catalan numbers see equation (<ref>), and Chapter 2 in <cit.> for more details. Moreover, one should say that the S-transform gives another important link between free probability and combinatorics by computing the moments of free product convolution of Marc̆henko-Pastur distribution (see Theorem <ref>). Another well-known, due to Wigner <cit.> shows the following result linking the semicircular distribution and random Gaussian matrices. <cit.> Let N∈ℕ, let A_N be an N× N selfadjoint random Gaussian matrix. Then A_N converges weakly to a semicircular distribution μ_SC(x). We refer to <cit.> for a complete proof. As we have shown in this particular case the existence of a deep link between random Gaussian matrices, the semicircular law, and the Catalan numbers. Moreover, the semicircular law plays an important role in free probability theory as a free central limit distribution. We recall one of the main results in free probability theory, see Theorem 8.10 in <cit.>. Let (𝒜,ω) a non-commutative probability space and a_1,⋯,a_N∈𝒜 free independent and identically distributed self-adjoint random variables. Assuming that ω(a_i)=0 for i∈[N] and denote by σ^2:=ω(a_i^2) the variance of the random variables a_i. Then the following holds: a_1+⋯+a_N/√(N)→μ_SC(x). converges weakly to μ_SC(x) as N→∞. Where s is a semicircular of variance σ^2. With this particular distribution, we have shown how random matrices, combinatorics, and free probability theory can be related. In the following, we will give another example of distribution that will play an important role in this work. The second distribution we shall consider is the Marc̆henko-Pastur distribution. We shall denote by (t) defined by: (t) :=max(1-t,0)δ_0+ν_t, dν_t(x) :=√(4t-(x-1-t)^2)/2π x 1_(x-1-t)^2≤ 4tdx. Recall the Marc̆henko-Pastur distribution is deeply related to Wishart matrices. Let Z a Whishart matrix defined Z:=1/m YY^*, where Z∈_nm() where the entries of Y∈_nm() are complex random Gaussian variables. It was shown by Marc̆henko and Pastur that the empirical distribution of Whishart matrices converges to the (t) defined above. More precisely they have shown the following theorem: Consider a Whishart matrix Z, and let μ_n,m its empirical distribution given by: μ_n,m:=1/n∑_z∈spec(Z)δ_z, Assuming that n/m converges to t as n→∞. Then μ_n,m converges (weakly) to (t) with t>0. For a proof and more detailed statement of this result, we refer to Theorem 3.6 and Theorem 3.7 from <cit.>. In particular, what will an important role in this work is the , where the distribution is: d:=1/2π√(4x^-1-1) dx, where we have used the shorthand notation instead of (1). As for the semicircular distribution described previously, one can relate the moments of free convolution products of to Fuss-Catalan numbers. We shall only give some relevant results for the Marc̆henko-Pastur distribution to be as concise as possible, we refer to <cit.> for more details and proofs. Let (t) the Marc̆henko-Pastur distribution. The S-transform is given by: S_(t)(z)=1/t+z. 
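As a numerical illustration of Wigner's theorem recalled above, the following sketch (assuming NumPy; the size N and the seed are arbitrary choices) samples a self-adjoint Gaussian matrix normalized so that its spectrum converges to the semicircular law on [-2,2], and compares the even empirical moments with the Catalan numbers (the odd moments vanish in the limit).

```python
import numpy as np
from math import comb

N = 2000
rng = np.random.default_rng(1)

# Self-adjoint Gaussian (GUE-type) matrix with entries of variance 1/N, so that
# the empirical spectral distribution converges to the semicircle law on [-2, 2].
X = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
A = (X + X.conj().T) / np.sqrt(4 * N)
eigs = np.linalg.eigvalsh(A)

# Even moments of the semicircular distribution are the Catalan numbers.
for k in range(1, 5):
    print(2 * k, round(float(np.mean(eigs ** (2 * k))), 3), comb(2 * k, k) // (k + 1))
```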
One of the main results of <cit.> relates the free product convolution powers of the Marc̆henko-Pastur distribution (t) to combinatorics; in particular, we only state the case that will be relevant for this work. For the Marc̆henko-Pastur distribution  of parameter 1 and any s≥ 2, we have ∫ x^n d^⊠ s= FC_n,s, where FC_n,s are the Fuss-Catalan numbers.
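The Fuss-Catalan moments above (and hence the limit measure ^⊠ s obtained for the series network with s vertices discussed earlier) admit a simple random-matrix check: the eigenvalues of W W^†/D^s, where W is a product of s independent D× D Ginibre matrices, converge to the s-fold free multiplicative convolution power of the Marc̆henko-Pastur law. A minimal numerical sketch, assuming NumPy (the size D, the order s, and the seed are arbitrary choices):

```python
import numpy as np
from math import comb

def fuss_catalan(n: int, s: int) -> int:
    """FC_{n,s} = binom(s*n + n, n) / (s*n + 1)."""
    return comb(s * n + n, n) // (s * n + 1)

D, s = 800, 2
rng = np.random.default_rng(2)

# Product of s independent D x D complex Ginibre matrices.
W = np.eye(D, dtype=complex)
for _ in range(s):
    G = (rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))) / np.sqrt(2)
    W = W @ G

# Squared singular values of D^{-s/2} W follow the Fuss-Catalan law of order s.
eigs = np.linalg.eigvalsh(W @ W.conj().T / D ** s)

for n in range(1, 5):
    print(n, round(float(np.mean(eigs ** n)), 3), fuss_catalan(n, s))
```

For s = 2 the exact moments are 1, 3, 12, 55, and the empirical values at D = 800 typically match them to within a few percent.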
http://arxiv.org/abs/2407.02483v1
20240702175823
MMedAgent: Learning to Use Medical Tools with Multi-modal Agent
[ "Binxu Li", "Tiankai Yan", "Yuanting Pan", "Zhe Xu", "Jie Luo", "Ruiyang Ji", "Shilong Liu", "Haoyu Dong", "Zihao Lin", "Yixin Wang" ]
cs.CL
[ "cs.CL", "cs.AI" ]
MMedAgent: Learning to Use Medical Tools with Multi-modal Agent

Binxu Li, Tiankai Yan, Yuanting Pan, Zhe Xu, Jie Luo, Ruiyang Ji, Shilong Liu, Haoyu Dong, Zihao Lin, Yixin Wang

July 8, 2024
=============================================================================================================================================

§ ABSTRACT

Multi-Modal Large Language Models (MLLMs), despite being successful, exhibit limited generality and often fall short when compared to specialized models. Recently, LLM-based agents have been developed to address these challenges by selecting appropriate specialized models as tools based on user inputs. However, such advancements have not been extensively explored within the medical domain. To bridge this gap, this paper introduces the first agent explicitly designed for the medical field, named Multi-modal Medical Agent (MMedAgent). We curate an instruction-tuning dataset comprising six medical tools solving seven tasks, enabling the agent to choose the most suitable tools for a given task. Comprehensive experiments demonstrate that MMedAgent achieves superior performance across a variety of medical tasks compared to state-of-the-art open-source methods and even the closed-source model, GPT-4o. Furthermore, MMedAgent exhibits efficiency in updating and integrating new medical tools.

§ INTRODUCTION

Multi-modal Large Language Models (MLLMs) have made considerable progress across diverse tasks with inputs from different medical imaging modalities (e.g., Magnetic Resonance Imaging, Computed Tomography, X-ray) in healthcare, including Visual Question Answering (VQA) <cit.>, image classification <cit.>, image segmentation <cit.>, and Medical Report Generation (MRG) <cit.>. Despite these advancements, MLLMs often exhibit limitations in seamlessly solving multiple tasks across different medical imaging modalities. Although recent large medical models <cit.> have attempted to address this challenge, they remain limited to a narrow range of tasks over a restricted set of imaging modalities and cannot be efficiently extended to new tasks or additional imaging modalities. Furthermore, these generalists typically do not provide expert-level responses comparable to those of specialized MLLMs customized for specific tasks.

One way to address this issue is to build an AI Agent, an AI system driven by Large Language Models (LLMs) that integrates various domain expert models as tools. Such a system can understand user instructions, make decisions, and select the appropriate tools to execute any specific task, thereby generating expert-level responses for any given request <cit.>. Despite the significant success of AI agents in the general image domain <cit.>, no such agents currently exist in the medical domain.
Several works in the medical field use the term "agent" to describe their methods, but their focus is on utilizing LLMs to play various roles and collaborate on complex tasks <cit.>, in which an agent refers to a specific role. For example, MedAgent <cit.> develops a multi-agent framework employing GPT-4 <cit.> as the foundational element for multi-agent communication. AgentClinic <cit.> is introduced to evaluate multiple language agents, each designed to fulfill specific roles within a simulated clinical environment. In our work, we refer to an "agent" as a centralized MLLM-based medical AI system that excels in all types of tasks, not just language-based ones. Consequently, our method can solve many important tasks in the medical field that require imaging data, such as medical image segmentation, disease detection, and MRG.

In this work, we aim to build the first AI agent specifically for the medical domain, termed Multi-modal Medical Agent (MMedAgent). We choose LLaVA-Med <cit.> as the backbone and extend its capability to handle various language and multi-modal tasks, including grounding, segmentation, classification, MRG, and Retrieval-Augmented Generation (RAG). The first step in building MMedAgent is to collect the state-of-the-art (SOTA) methods for each task, hereafter referred to as "tools". During this phase, we identify the lack of an effective tool for the grounding task, prompting us to fine-tune Grounding DINO <cit.> specifically for medical applications. Next, we build an instruction-based dataset, curated with GPT-4o from publicly available data across various modalities, that teaches the agent to select the proper tool(s) when encountering a user instruction and to aggregate the outputs from tools to reply to users precisely and comprehensively. The core of our approach is an end-to-end training regimen through visual instruction tuning <cit.>.

MMedAgent has demonstrated promising results in various aspects. When evaluated on several complex medical tasks, MMedAgent significantly outperforms current open-source SOTA methods, LLaVA-Med <cit.> and RadFM <cit.>, and even surpasses the closed-source model GPT-4o <cit.> on average (in particular, on organ grounding, disease grounding, and MRG). It also enhances the backbone's (i.e., LLaVA-Med's) original capability in the VQA task and exhibits efficiency in learning new tools.

Our agent design is inspired by LLaVA-Plus <cit.>, an MLLM-based AI agent in the general vision-language field: MMedAgent consists of an MLLM that selects the proper tools and integrates their outputs, together with a collection of medical tools covering disease, organ, and cell grounding (i.e., detection), medical image segmentation, medical image classification, medical report generation, and retrieval-augmented generation over medical manuals.
Our contributions can be summarized as:

* We propose MMedAgent, the first multi-modal medical AI agent incorporating a wide spectrum of tools to handle various medical tasks across different modalities seamlessly.

* We build the first open-source instruction-tuning dataset for multi-modal medical agents.

* Adaptive multi-modal medical tools are incorporated into our agent. We develop specialized datasets to adapt existing grounding and segmentation tools to the medical domain.

* Extensive experiments demonstrate that MMedAgent surpasses previous SOTA multi-modal medical language models across a range of tasks.

§ RELATED WORK

§.§ Medical MLLMs

LLMs present fertile new ground for research that pushes the frontier of the medical domain. Unlike natural domains, the intrinsic complexity of medical data, which spans multiple sources and modalities, has led most LLMs in the medical field to focus on narrowly defined tasks using language and text alone. Singhal et al. <cit.> curate MultiMedQA, a benchmark of medical question-answering datasets, and propose Med-PaLM, which applies instruction prompt tuning tailored to medical domains on top of PaLM <cit.>; Med-PaLM performs encouragingly on the axes of their human evaluation framework. Recent progress on LLMs has also brought multi-modal conversational capability <cit.>. Owing to the diversity inherent in medical data and tasks, LLMs have initially been localized to specific imaging domains such as X-ray <cit.>, CT <cit.>, and histology <cit.>, or tailored to specific tasks such as segmentation <cit.> and medical report generation <cit.>. In contrast, generalist models expand these capabilities by enabling a single LLM to cover a wider range of imaging modalities and tasks by greatly enlarging the pre-training datasets <cit.>. Pretrained and fine-tuned with multi-modal and multitask biomedical datasets, BiomedGPT <cit.> exhibits competitive performance against medical AI models across five tasks. LLaVA-Med leverages a large-scale multi-modal biomedical dataset from PubMed, with instructions generated by GPT-4, to fine-tune a LLaVA <cit.> model. The recently proposed generalist model Med-Gemini <cit.>, built on Gemini, is claimed to achieve state-of-the-art performance on 14 medical benchmarks through self-training and an inference-time uncertainty-guided search strategy. BiomedParse <cit.> utilizes GPT-4 to harmonize noisy, unstructured textual information with established biomedical object ontologies and can jointly solve segmentation, detection, and recognition tasks across all major image modalities. Although generalist models are capable of handling a wide range of medical modalities and tasks, they face limitations in scalability when incorporating additional skills and lack specialization in specific tasks.
Based on whether the LLM is open source or not, <cit.> classifies multi-modal AI Agents into two types: (i) closed-source LLMs as planners, which utilize prompt technique to enable LLMs to make decisions <cit.>; (ii) fine-tuned LLMs as planners, where an LLM is fine-tuned to understand instructions, make decisions, and call tools/APIs <cit.>. Our MMedAgent belongs to the second type. Multi-modal AI Agents have achieved great success in various applications. For example, <cit.> apply agents to control the website or user interface. Some works <cit.> focus on robotics or embodied AI which applies multi-modal LLMs to perceive and interact with real environments. Most works concentrate on multi-modal understanding, editing, or generation, especially image, video, or audio <cit.>. However, these works are limited to the natural domains, leaving the applications in the medical domain unexplored, which is particularly challenging due to its diverse modalities and tasks. To the best of our knowledge, we are the first to address this challenge and build a system that integrates these varied medical applications. § MMEDAGENT Multi-modal Medical Agent (MMedAgent), a system based on an MLLM, is designed to seamlessly manage diverse medical tasks by integrating various open-source medical models. MMedAgent comprises two components: (1) an instruction-tuned multi-modal LLM that functions as an action planner and results aggregator, and (2) a collection of medical tools tailored to the agent, each targeting specific tasks in the medical domain. We first present the fundamental workflow of MMedAgent in Section <ref>, followed by a description of creating an instruction-tuning dataset for training the multi-modal LLM as an action planner in Section <ref>. The details of medical tasks and corresponding tools incorporated in MMedAgent are described in Section <ref>. §.§ Workflow Following LLaVA-Plus <cit.>, MMedAgent is built to learn to utilize a wide range of multi-modal medical tools, extending the MLLMs' capabilities to analyze and accomplish various medical tasks. As shown in Figure <ref>, the workflow consists of four parts: (1) users provide an instruction X_q and a medical image I_q; (2) MLLM works as an action planner, which understands X_q and I_q and then generates a formatted instruction X_tool to call a specific tool. (3) The tool is executed given I_q and the output X_result of the tool is sent to the MLLM. (4) The MLLM aggregates the output with X_q and I_q and generates the final answer X_answer to users. We train the agent end-to-end with an auto-regressive objective on the generated sequence - X_tool and X_answer to enable the model to use correct tools and answer questions based on the tool's results. §.§ Instruction Tuning In order to ensure MMedAgent simultaneously performs as both action planner and results aggregator, we adopt the unified dialogue format proposed by <cit.>, illustrated in Figure <ref>. Specifically, upon receiving a user's input, MMedAgent generates three components in its outputs: (1) , which determines whether MMedAgent can independently solve the user's instructions or if external tools are required, and if so, identifies the appropriate tool; (2) , which enumerate a list of API calls necessary to execute the . This comprises two sub-fields: and . If the action list is null, no API call is initiated. (3) , which provides a natural language response aggregated by the MLLM along with the outputs from the involved tools. 
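To make this format concrete, the snippet below sketches what a single instruction-tuning record could look like. All field names (here "thought", "actions", "value"), the API name, and the numerical values are hypothetical placeholders chosen for illustration; the exact tokens follow the unified dialogue format of LLaVA-Plus and are shown in the corresponding figures.

```python
# A hypothetical instruction-tuning record; field names, API names, and values
# are illustrative placeholders, not the exact tokens used in the released data.
example_record = {
    "user": "Can you locate the liver in this abdominal CT slice? <image>",
    "agent_turn_1": {
        "thought": "The request asks for organ localization, so the grounding tool is needed.",
        "actions": [{"api_name": "grounding", "api_params": {"caption": "liver"}}],
    },
    "tool_result": {"boxes": [[112, 86, 301, 240]], "labels": ["liver"]},
    "agent_turn_2": {
        "thought": "Aggregate the tool output into a natural-language answer.",
        "actions": [],
        "value": "The liver is localized at bounding box [112, 86, 301, 240] in the image.",
    },
}

print(example_record["agent_turn_2"]["value"])
```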
As depicted in Appendix Figure <ref>, we construct the instruction data by querying GPT-4o through one-shot learning, presenting an example that demonstrates the input and output of MMedAgent. We set a fixed instruction prompt for each tool and select several examples as conversation templates (see Appendix Figure <ref>); these prompts and templates are then processed to generate the instruction data from the dialogue.

§.§ Medical Tasks and Tools

MMedAgent is able to access a diverse array of tools and scales to various tasks. As shown in Table <ref>, we integrate six tools that encompass seven representative tasks in the medical domain, i.e., (1) grounding, (2) segmentation with bounding-box prompts (Segmentation), (3) segmentation with text prompts (G-Seg), (4) medical imaging classification, (5) Medical Report Generation (MRG), (6) retrieval-augmented generation (RAG), and (7) VQA. Note that no additional tool is required for the VQA task since we utilize LLaVA-Med, which supports it natively. Each tool functions as a specialist, exhibiting exceptional proficiency in executing a specific task across various medical imaging modalities.

§.§.§ Grounding

Grounding, also known as detection, aims to identify and localize specific objects within an input image by generating the coordinates of bounding boxes containing the objects. To the best of our knowledge, no existing medical model can simultaneously process images from different modalities, so we propose a generalized grounding tool tailored for the medical domain. Specifically, we fine-tune Grounding DINO <cit.>, an open-set object detector with a dual-encoder-single-decoder architecture (an image backbone, a text backbone, a cross-modality feature enhancer, a language-guided query selection module, and a cross-modality decoder), for the medical imaging field. Our first step is to collect multiple medical image segmentation datasets, including FLARE2021 <cit.>, WORD <cit.>, BRATS <cit.>, the Montgomery County X-ray Set (MC) <cit.>, VinDr-CXR <cit.>, and the multi-modal cell segmentation dataset (Cellseg) <cit.>. As detailed in Appendix Table <ref>, these datasets target different modalities, organs, or diseases, each including the original images along with their corresponding pixel-level segmentation annotations. These segmentation masks are transformed into bounding boxes by extracting the minimal outer rectangle around each object, and the box coordinates together with the corresponding object labels are recorded as the grounding labels of each dataset. Based on the released pre-trained weights, we fine-tune Grounding DINO on the datasets described above as well as on two common natural-image datasets, COCO <cit.> and Flickr30k <cit.>, to maintain the model's ability to detect common objects.
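A minimal sketch of the mask-to-box conversion described above, assuming NumPy; per-class handling, connected-component splitting, and the exact annotation format expected by Grounding DINO are omitted.

```python
import numpy as np

def mask_to_bbox(mask: np.ndarray) -> tuple[int, int, int, int]:
    """Return the minimal outer rectangle (x_min, y_min, x_max, y_max)
    enclosing the non-zero pixels of a binary segmentation mask."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        raise ValueError("empty mask")
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# Toy example: a 6x6 mask with a 3x2 object.
mask = np.zeros((6, 6), dtype=np.uint8)
mask[2:5, 1:3] = 1
print(mask_to_bbox(mask))   # (1, 2, 2, 4)
```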
§.§.§ Other Tasks

Segmentation involves identifying and delineating regions of interest (ROIs) in an image. In our scenario, we consider interactive segmentation, where a bounding box covering the ROI is provided; this setting has become popular since the development of Segment Anything (SAM) <cit.>. We select MedSAM <cit.>, which fine-tunes SAM for the medical field, as our tool. The prompts are limited to bounding boxes because they provide more precise guidance to SAM <cit.>. Specifically, in this scenario we assume the user provides the position of the bounding box, on which MedSAM can be directly applied to obtain the ROI masks.

G-Seg refers to combining grounding with SAM. It addresses the more common scenario in which users specify only a particular object to segment in an image. In this case, we first activate the fine-tuned grounding tool to localize the referred object and then provide its location, in box format, to MedSAM.

Classification aims to identify the most appropriate category for a medical image within a closed set. Specifically, we define a closed set of labels L, including organ types, common image modalities, and complex modalities such as ultrasound imaging and hematoxylin and eosin histopathology; the details of the set L are given in Appendix <ref>. We adopt BiomedCLIP <cit.>, which exhibits superior performance in zero-shot and fine-grained classification. The image is classified based on the cosine similarity between the image embedding and each text embedding.
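The classification step reduces to a cosine-similarity comparison in a shared embedding space. The sketch below illustrates only this scoring step (assuming NumPy); the BiomedCLIP image and text encoders are abstracted away as precomputed embeddings, and the label subset is taken from the set L listed in the appendix.

```python
import numpy as np

LABELS = ["brain MRI", "chest X-ray", "bone X-ray", "ultrasound imaging",
          "hematoxylin and eosin histopathology"]   # subset of the label set L

def classify(image_embedding: np.ndarray, label_embeddings: np.ndarray) -> str:
    """Pick the label whose text embedding has the highest cosine similarity
    with the image embedding; embeddings come from a CLIP-style model such as
    BiomedCLIP, whose encoders are omitted here."""
    img = image_embedding / np.linalg.norm(image_embedding)
    txt = label_embeddings / np.linalg.norm(label_embeddings, axis=1, keepdims=True)
    return LABELS[int(np.argmax(txt @ img))]

# Toy usage with random embeddings standing in for the real encoders.
rng = np.random.default_rng(0)
print(classify(rng.standard_normal(512), rng.standard_normal((len(LABELS), 512))))
```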
MRG involves creating accurate and authentic medical reports from provided medical information or imaging. MMedAgent incorporates ChatCAD <cit.>, an open-source tool designed for generating medical reports for chest X-ray images. The model was trained on the MIMIC-CXR dataset <cit.> and can provide reports with detailed radiographic analyses, identifying chest-related conditions such as cardiomegaly, edema, consolidation, and atelectasis.

RAG refers to enhancing the generated outputs by incorporating the most relevant information acquired from external data sources. We select ChatCAD+ <cit.> to implement the medical retrieval process. ChatCAD+ retrieves information from a medical dictionary containing detailed descriptions of 1972 diseases and medical procedures, including their introduction, symptoms, diagnosis, treatment, and causes, sourced from the Merck Manual <cit.>, a professional medical reference. Given the user's input, the model searches for the medical entries that share the highest cosine similarity with the encoded message and retrieves the relevant knowledge from the medical dictionary.

§ EXPERIMENTAL SETTINGS

MMedAgent is initialized from LLaVA-Med 60K-IM and instruction-tuned using LoRA <cit.> for 15 epochs; training takes approximately 72 hours on two 80GB NVIDIA A100 GPUs. The rank of LoRA is set to 128, and the training batch size is set to 48. We employ AdamW <cit.> as the optimizer alongside a cosine learning rate schedule peaking at 2e-4. We generate 48K instruction-tuning examples: 15K augmented VQA instructions derived from the 60K inline-mention data <cit.> following the method of LLaVA-Plus <cit.>, 10K examples for detection, 3K for RAG, and 5K each for segmentation, classification, MRG, and G-Seg. Data sources are shown in Table <ref>.

§ EXPERIMENTS

We conduct experiments on MMedAgent to answer three research questions: (1) What is the performance of MMedAgent in addressing diverse medical tasks across various modalities (Section <ref>)? (2) Does the instruction-tuned MMedAgent exhibit superior performance in open-ended biomedical dialogue (Section <ref>)? (3) How efficiently does MMedAgent invoke tools and incorporate new tools (Section <ref>)?

§.§ Various Medical Tasks

§.§.§ Evaluation Criterion

To evaluate the performance of MMedAgent on various complex medical tasks, we create an evaluation dataset consisting of 70 diverse questions. For this dataset, we initially select 10 concepts randomly from the Merck Manual for RAG and 60 unseen images of different tasks from the respective data sources. These include 10 images each for organ grounding, disease grounding, and cell grounding, along with 20 X-ray images for MRG and 10 images across various modalities for classification. The VQA task evaluation is presented in Section <ref>, and since the segmentation task cannot be adequately assessed linguistically, we provide qualitative results in Section <ref>. We then utilize the same prompt as outlined in Section <ref> to generate the instruction-tuning data for evaluation. Subsequently, we separately feed the data into GPT-4o, MMedAgent, and the other benchmarks to obtain the outputs. GPT-4o is a newly released multimodal model with strong visual understanding capabilities; according to OpenAI's testing, it surpasses GPT-4 Turbo and has a faster inference speed. Thus, the output from GPT-4o can be viewed as a strong benchmark.
All the outputs will be assessed by GPT-4 and rated on a scale from 1 to 10 based on their helpfulness, relevance, accuracy, and level of details. We provide GPT-4 with figure captions and include inline mentions from 60K-IM for the VQA task. The detailed prompts are illustrated in Figure <ref>. For the MRG task, the reports are taken as captions of the input figures. For detection and other tasks without a caption in the original data, we generate the captions by combining the images with the labels. For instance, “A CT scan showing the kidney organ.”. Since the scores are generated by an LLM, their rank better reflects the capability rather than the absolute values. Based on the output from GPT-4o, we propose a relative score, defined as S_* / S_GPT-4o (%), to indicate the performance change caused by other MLLMs. Here, S_* refers to the score of outputs generated by *, with * ∈ { RadFM, LLaVA-Med, MMedAgent}. A higher score indicates a superior output quality. During the evaluation, MMedAgent dynamically selects, activates, and executes tools in real-time, then aggregates the obtained results from these tools to answer questions. §.§.§ Experimental Results As illustrated in Table <ref>, MMedAgent significantly outperforms all other baselines on various tasks. Notably, the overall score of MMedAgent is 1.8 times higher than that of LLaVA-Med. We also consider LLaVA-Med (Tool in Test), an enhanced version of LLaVA-Med that incorporates the internal output of tools. MMedAgent maintains its superior performance in this case. Furthermore, the scores for organ grounding, disease grounding, and MRG exceed 100%, indicating that MMedAgent surpasses GPT-4o in these tasks. These results underscore the superior efficiency of MMedAgent in diverse medical tasks across various modalities. §.§.§ Case Study A detailed visual comparison between LLaVA-Med and MMedAgent is illustrated in Figure <ref>. Given the user queries on tasks involving analyzing the images, such as classification, grounding, and segmentation tasks, LLaVA-Med only generates simple conversational responses without solving the given requests (highlighted in Red) and it is unable to generate visualized results. In contrast, MMedAgent effectively addresses these questions by activating the appropriate tools, integrating their outputs, generating accurate responses (highlighted in Green), and visualizing the results. This is guaranteed by the precise selection of tools by MMedAgent and the superiority of the tools themselves. When encountering language generation-based tasks, , MRG and RAG, LLaVA-Med fails to provide an in-depth analysis of the images. However, MMedAgent provides more straightforward and accurate responses by utilizing the tools designed specifically for these tasks. §.§ Open-ended Medical Dialogue To evaluate the capability of visual question-answering tasks, we follow the setting of open-ended medical dialogue in LLaVA-Med <cit.>. Here, we use the same test data as LLaVA-Med, which consists of 193 novel questions and 50 unseen images from PMC-15M <cit.>. This dataset contains 5 modalities and can be divided into two main classes: conversation questions and detailed description questions. We also utilize the relative score, introduced in Section <ref>, as the evaluation criterion. Since this is a pure language task, we select the output from GPT-4 rather than GPT-4o as the reference score. 
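A minimal sketch of the scoring pipeline described above, assuming the openai Python client with an available API key; the prompt wording, the answer parsing, and the model identifiers are simplified stand-ins for the actual setup shown in the corresponding figures.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def rate_answer(caption: str, question: str, answer: str, model: str = "gpt-4") -> float:
    """Ask GPT-4 for a 1-10 score of an answer's helpfulness, relevance,
    accuracy, and level of detail, given the figure caption as reference.
    The prompt here is a simplified stand-in for the actual evaluation prompt."""
    prompt = (
        f"Reference caption: {caption}\nQuestion: {question}\nAnswer: {answer}\n"
        "Rate the answer from 1 to 10 for helpfulness, relevance, accuracy, "
        "and level of detail. Reply with a single number."
    )
    reply = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return float(reply.choices[0].message.content.strip())

def relative_score(score_model: float, score_gpt4o: float) -> float:
    """Relative score S_* / S_GPT-4o, in percent."""
    return 100.0 * score_model / score_gpt4o
```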
As shown in Table <ref>, performance is categorized by either question types (conversation and description) or image modalities (X-ray, MRI, Histology, Gross, and CT). After instruction-tuning on the tool learning dataset, MMedAgent performs better on both types of questions. Moreover, MMedAgent outperforms LLaVA-Med in all domains but MRI. This demonstrates the efficiency of MMedAgent in open-ended medical dialogue. §.§ Tool Utilization The superior performance of MMedAgent on the various tasks described above depends on accurately understanding users' inputs and activating the correct tools. After training MMedAgent for 15 epochs, the tool selection accuracy reached 100%, demonstrating MMedAgent's ability to select the appropriate tools without errors. One significant feature of MMedAgent is its ability to adapt to new tools. Here, we consider two scenarios. Firstly, when a superior tool for tasks that MMedAgent is already equipped to handle becomes available, the API name of the outdated tool can be seamlessly replaced with that of the new tool, eliminating the need for additional retraining. Secondly, to extend MMedAgent to a new task, it is sufficient to generate a small set of instruction-tuning data for this specific task and fine-tune the agent accordingly, rather than retraining it from the beginning. To verify this capability, we simulate a new tool called “Pseudo Tool”, generate an additional 5K instruction-tuning data points (following Section <ref>), and create 30 unseen diverse questions for evaluation following Section <ref>. We utilize the same training settings to fine-tune MMedAgent with a smaller learning rate of 1e-6 and a batch size of 10 on one 80G A100 GPU. As shown in Figure <ref>, the accuracy of selecting the new tool increases to 100% within 2K steps without degrading performance on selecting old tools. § CONCLUSION We propose MMedAgent, the first multi-modal medical AI agent that is capable of seamlessly utilizing various medical tools to handle a broad spectrum of medical tasks across different imaging modalities. We create an instruction-tuning dataset that MMedAgent utilizes to learn to invoke various medical tools and aggregate results from tools. Comprehensive experiments demonstrate that MMedAgent significantly outperforms open-source baselines and even surpasses GPT-4o across many medical tasks. Furthermore, MMedAgent efficiently integrates new tools while retaining the capability to activate previously learned tools. § LIMITATION Our work is currently limited to seven tasks across five modalities. Due to the need for extensive domain knowledge and the complexity and diversity of the medical datasets involved, more specialized tools are emerging that should be included in our tool list. However, the scalability of our model allows for the inclusion of more powerful tools in the future. Additionally, more ablation studies on different backbones are necessary. Our current backbone is based on LLaVA-Med, but recently, multiple generalist LLMs in the medical domain have been proposed, which could potentially be used to build a stronger MMedAgent. § DETAILS OF TOOLS §.§ Classification We construct a closed set of labels L for BiomedCLIP to search for the most suitable category for the given image. 
L ={“adenocarcinoma histopathology”, “brain MRI”, “covid line chart”, “squamous cell carcinoma histopathology”, “immunohistochemistry histopathology”, “bone X-ray”, “chest X-ray”, “pie chart”, “ultrasound imaging”, “hematoxylin and eosin histopathology”, “gross”}. §.§ Retrieval Augmented Generation (RAG) RAG distinguishes itself from standard report generation by its access to an external knowledge base, such as the Merck Manual. We consider the following three common uses of RAG. The instruction-tuning data are generated based on these functionalities. * Chest X-ray image report analysis. The chest X-ray image report analysis function analyzes the report on medical images and provides an analysis including the potential diseases, their related retrieved knowledge, and the source. * General medical report analysis. The general medical report analysis can take a summarized report on common diseases and generate an analysis with medical advice such as treatments and precautions, together with a link to the retrieved source from the Merck Manual official website. * General medical advice generation. For general medical advice generation, the user can ask general questions about diseases, and the model will retrieve and provide related information on them. For the chest X-ray image report analysis, we generate 1000 chest X-ray reports from the MRG tool described in Section <ref> as the report dataset. For the datasets of general medical report analysis and general medical advice generation, we utilize GPT-4o to generate 1000 medical reports and 1000 patient questions, respectively, about common diseases sampled from the entries covered in the Merck Manual. §.§ Medical Grounding DINO The datasets used to tune the medical grounding DINO are shown in Table <ref>. § INSTRUCTION TUNING DATASET GENERATION We present our prompts for generating the instruction-tuning dataset in Figure <ref>. § AGENT SERVING MMedAgent operates within the FastChat system, which consists of web servers that interact with users, model workers hosting the language model, and various tools. A controller coordinates the activities between the web servers and model workers. The entire system, including the 7B MMedAgent and all associated tools, can be run on an Nvidia A100 (80GB) GPU. § EVALUATION PROMPT We utilize GPT-4 to assess the answers generated by MMedAgent and other models with prompts shown in Figure <ref>.
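The retrieval step shared by these RAG functions reduces to encoding the user's message and picking the dictionary entry with the highest cosine similarity, as described for the RAG tool earlier. Below is a minimal sketch of that step; the toy hashing encoder and the two dictionary entries are placeholders of my own, not the embedding model or the 1972 Merck Manual entries used by ChatCAD+.

```python
import re
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy bag-of-words hashing encoder standing in for a learned text encoder."""
    vec = np.zeros(dim)
    for token in re.findall(r"[a-z]+", text.lower()):
        vec[hash(token) % dim] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-12)

# Two placeholder entries standing in for the full medical dictionary.
dictionary = {
    "cardiomegaly": "Enlargement of the heart: introduction, symptoms, diagnosis, treatment, causes.",
    "pulmonary edema": "Fluid in the lungs: introduction, symptoms, diagnosis, treatment, causes.",
}

def retrieve(user_message: str) -> str:
    """Return the entry name with the highest cosine similarity to the message."""
    query = embed(user_message)
    names = list(dictionary)
    sims = [float(query @ embed(name + " " + dictionary[name])) for name in names]
    return names[int(np.argmax(sims))]

print(retrieve("The report describes an enlarged heart on the chest X-ray"))
```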
http://arxiv.org/abs/2407.02671v1
20240702211912
When Do Natural Mediation Effects Differ from Their Randomized Interventional Analogues: Test and Theory
[ "Ang Yu", "Li Ge", "Felix Elwert" ]
stat.ME
[ "stat.ME", "stat.AP" ]
§ ABSTRACT In causal mediation analysis, the natural direct and indirect effects (natural effects) are nonparametrically unidentifiable in the presence of treatment-induced confounding, which motivated the development of randomized interventional analogues (RIAs) of the natural effects. The RIAs are easier to identify and widely used in practice. Applied researchers often interpret RIA estimates as if they were the natural effects, even though the RIAs could be poor proxies for the natural effects. This calls for practical and theoretical guidance on when the RIAs differ from or coincide with the natural effects, which this paper aims to address. We develop a novel empirical test for the divergence between the RIAs and the natural effects under the weak assumptions sufficient for identifying the RIAs and illustrate the test using the Moving to Opportunity Study. We also provide new theoretical insights on the relationship between the RIAs and the natural effects from a covariance perspective and a structural equation perspective. Additionally, we discuss previously undocumented connections between the natural effects, the RIAs, and estimands in instrumental variable analysis and Wilcoxon-Mann-Whitney tests. § INTRODUCTION §.§ Background Causal mediation analysis explains the mechanisms of a total causal effect by decomposing it into direct and indirect effects in terms of some mediators. The direct effect is the part of the total effect that does not go through the researcher-specified mediators, and the indirect effect is the part that does. As a central task in the social and health sciences, causal mediation analysis is widely used in applied research. We adopt the conventional notation in causal mediation analysis. Y is the observed outcome, A is a binary treatment with support {0,1},[This is generalizable to any pair of two values for a multivalued treatment.] and M is a vector of mediators. Y_a and M_a are respectively the potential values of Y and M under the assignment of treatment value a. We further define two groups of confounders that may be empty, C is a vector of baseline confounders, and L is a vector of post-treatment confounders. Figure <ref> illustrates the relationship between variables, when any variable may affect any temporally subsequent variables. The most classic approach of causal mediation analysis decomposes the total effect (TE) into the natural indirect effect (NIE) and the natural direct effect (NDE) <cit.>. (Y_1-Y_0)_TE = (Y_1,M_1-Y_0,M_0)_TE= (Y_1,M_1-Y_1,M_0)_NIE + (Y_1,M_0-Y_0,M_0)_NDE, where Y_a,M_a' denotes the potential outcome of Y under the assignment of treatment a and the mediator value that would be realized under the assignment of treatment a'. The NIE is defined by varying the mediator assignment from M_1 to M_0 but otherwise fixing treatment assignment at 1, capturing the part of the total effect that only goes through M. 
The NDE is defined by varying the treatment assignment from 1 to 0 but holding mediator assignment at the baseline mediator value, capturing the part of the total effect that does not go through M. Since the natural effects (NIE and NDE) are defined in terms of individual-level potential mediators (M_1 and M_0), they capture causal mechanisms at the individual level, even though they are ultimately summarized as population-average effects through the expectation operator. The natural effects are notoriously difficult to identify. Without parametric assumptions, they are unidentifiable when there exists any treatment-induced confounder L, regardless of whether L is observed <cit.>. This severely limits the application of the natural effects in practice, as ruling out L is impossible in most empirical settings. Motivated by the difficulty of identifying the natural effects, an alternative decomposition has been proposed <cit.>. Nonparametrically, its identification does not require the absence of treatment-induced confounders. This alternative decomposition is based on the randomized interventional analogues (RIA) of the TE, the NIE, and the NDE, namely the TE^R, the NIE^R, and the NDE^R: (Y_1,G_1-Y_0,G_0)_TE^R= (Y_1,G_1-Y_1,G_0)_NIE^R + (Y_1,G_0-Y_0,G_0)_NDE^R, where G_a' is a value randomly drawn from the mediator distribution that would realize under the assignment of treatment value a' given C, and Y_a, G_a' is the potential outcome of Y under the assignment of the treatment value a and the mediator value G_a'. Clearly, the RIAs differ from the natural effects in mediator assignments. Instead of M_1 and M_0, the mediator assignments for the RIAs are G_1 and G_0. As G_1 and G_0 are random draws from population distributions, the RIAs are not aggregations of individual-level causal contrasts like the natural effects. Seen as much less demanding and more widely applicable than the natural effects, the RIAs are popular in empirical research.[ Methodological work that further develops the RIA decomposition includes <cit.>.] In practice, applied researchers frequently estimate the RIAs as proxies of the natural effects. In fact, the RIA estimates are often interpreted as if they were estimates of the natural effects (after all, the RIAs are named as “analogues”!). <cit.> reviewed 16 applied studies that estimate RIAs, all of which contain interpretive statements that elide the difference between the RIAs and the natural effects. Indeed, the methodological literature has encouraged this ambiguity. For example, <cit.> write that “it will only be in extremely unusual settings that the interventional analogue is non-zero, with there being no natural indirect effects”. However, there are reasons to suspect that the RIAs can be poor proxies of the natural effects. Unlike the natural effects, they are not interpretable as explanatory mechanisms at the individual as opposed to the population level. Formalizing this intuition, <cit.> proposes a set of null criteria that valid causal indirect effect measures should satisfy and shows that the NIE is valid by these criteria while the NIE^R is not. In particular, the NIE^R can be nonzero even if the mediator does not “mediate” the treatment effect for any individual. In addition, it has been frequently noted in the methodological literature that the NIE^R and the NDE^R do not generally sum to the TE, which is problematic because the canonical task of causal mediation analysis is to understand the TE <cit.>. 
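To make the difference between the mediator assignments M_a' (the individual's own potential mediator) and G_a' (an independent draw from a population distribution) concrete, the following is a minimal simulation sketch under a hypothetical data generating process of my own; it is not the DGP behind Figure <ref> or anything else in the paper. With a randomized treatment and no baseline confounders, G_a is simply an independent draw from the marginal distribution of M_a, which the sketch approximates by permuting the simulated M_a across individuals.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500_000

# Hypothetical DGP: a shared factor u drives both the treatment effect on the
# mediator (M_1 - M_0) and the mediator effect on the outcome (Y_{1,1} - Y_{1,0}).
u = rng.normal(size=n)
m0 = rng.binomial(1, 0.3, size=n).astype(float)      # M_0
m1 = np.maximum(m0, (u > 0).astype(float))           # M_1 >= M_0
tau = 0.2 + u                                        # individual Y_{1,1} - Y_{1,0}

def y1(m):                                           # Y_{1,m}; outcome noise omitted
    return 1.0 + tau * m

# Natural indirect effect: each individual keeps their own potential mediators.
nie = np.mean(y1(m1) - y1(m0))

# RIA: G_a is an independent draw from the marginal distribution of M_a,
# approximated here by permuting M_a across individuals.
g1, g0 = rng.permutation(m1), rng.permutation(m0)
nie_r = np.mean(y1(g1) - y1(g0))

print(f"NIE   ≈ {nie:.3f}")     # about 0.35 under this DGP
print(f"NIE^R ≈ {nie_r:.3f}")   # about 0.07: same sign here, but far apart
```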
Beyond the violation of null criteria, which focuses on a knife-edge scenario, we draw attention to possible quantitative differences between the natural effects and the RIAs in a wide range of data generating processes (DGPs). These quantitative differences may be large and even involve sign reversal. In the illustration of Figure <ref>, data are simulated according to a set of very simple and seemingly innocuous DGPs. By varying one parameter of the DGP, we observe areas of significant divergence and sign reversal, where the RIAs can hardly be used to draw conclusions about the natural effects. Therefore, it is natural to ask when the natural effects differ from their RIAs. If they are identical or at least close to each other, then it might be warranted to interpret estimates of the RIAs as the natural effects. Conversely, if they substantially differ, then more caution and precision in interpretation is called for. In this paper, we answer this question with one practical test and two theoretical perspectives. §.§ Contributions We make a practical contribution by proposing a novel test for the differences between the NIE, the NDE, and their respective RIAs. The empirical testability of these differences may be surprising, because under the standard assumptions for identifying the NIE and the NDE, the natural effects necessarily coincide with their RIAs <cit.>. And under the standard assumptions identifying the NIE^R and the NDE^R, the NIE and the NDE are unidentified. Thus, it may appear that under no set of common assumptions can one test the differences. However, our test is made possible by leveraging two simple facts. First, the TE and the TE^R are identified under the standard assumptions for the NIE^R and the NDE^R. Second, when TE -TE^R≠ 0, it is necessarily the case that either NIE≠NIE^R or NDE≠NDE^R. Hence, instead of hoping that “the natural and interventional effects may coincide empirically” <cit.>, we can actually test their divergence by testing TE -TE^R=0 under weak identifying assumptions that are sufficient for the RIAs but not the natural effects. We make a theoretical contribution by clarifying and illustrating the substantive conditions under which the natural effects differ from or coincide with their RIAs. We do so from a nonparametric covariance perspective and a structural equation perspective. First, we derive a covariance-based representation of the differences between the natural effects and their RIAs. Second, we derive parametric constraints on the structural equations generating the data under which the the natural effects will coincide with the RIAs. These two novel perspectives provide intuitive insights on the substantive mechanisms underpinning the relationship between the natural effects and the RIAs. In Miles' () discussion of the relationship between NIE and NIE^R, he proves the null criteria violation using one specific numerical counterexample. With two new analytic perspectives that are general and intuitive, we thus demystify and expand on Miles' () results. The remaining of this paper is organized as follows. In Section 2, we review some standard assumptions in causal mediation analysis that were referred to above. In Section 3, we present our empirical test for the differences between the natural effects and the RIAs and apply it to the Moving to Opportunity (MTO) study. Section 4 and 5, respectively, introduce the covariance perspective and the structural equation perspective. 
Section 6 discusses related estimands, including those in the instrumental variable (IV) settings and those underlying the Wilcoxon-Mann-Whitney tests. We present novel results that unify causal mediation analysis with these other fields of causal inference. Technical proofs are collected in the appendix. R code for simulating Figure <ref> and empirical data analysis in Section <ref> can be found at https://github.com/ang-yu/diff_naturals_riashttps://github.com/ang-yu/diff_naturals_rias. § REVIEW OF CONVENTIONAL MEDIATION ASSUMPTIONS We review conventional assumptions included in the literature of causal mediation analysis. [Consistency] f(M_a | a,C)=f(M | a,C) and (Y_a,m| a,m,C)=(Y | a,m,C), for all a and m. [Ignorability of A conditional on C] Y_a,m A | C for all a and m; M_a A | C for all a. [Ignorability of M conditional on C,A,L] Y_a,m M | C, A=a, L for all a and m. [Ignorability of M conditional on C,A] Y_a,m M | C, A=a for all a and m. [Cross-world Independence] Y_a,m M_a'| C for all a, a', and m. Assumption <ref> is a standard statement linking the potential values to the observed values. Assumption <ref> requires the treatment A be ignorable conditional on baseline confounders C. Assumption <ref> states that the mediator M is conditionally ignorable given both baseline confounders C and post-treatment confounders L, as well as the treatment. Assumption <ref> imposes conditional ignorability of the mediator given only baseline confounders and the treatment, which is stronger than Assumption <ref>. Finally, Assumption <ref> requires the conditional independence between the potential outcomes Y_a,m and potential mediators M_a' under two possibly different treatment assignments a and a', hence its name (cross-world independence). In the literature, Assumptions <ref>, <ref>, and <ref>, are the standard identifying assumptions for the RIAs <cit.>, while Assumptions <ref>, <ref>, <ref>, and <ref> are the standard assumptions for identifying the NIE and the NDE (Pearl, ; VanderWeele, , p.463-4; Imai, Keele, and Yamamoto, for a slightly stronger version). Notably, the cross-world independence assumption requires the absence of any post-treatment confounding of the mediator-outcome relationship (L=∅) <cit.>. Hence, it is clear that the standard assumptions for the RIAs are weaker, as they allow for the existence of post-treatment confounders. Furthermore, when the cross-world independence assumption holds, the natural effects are necessarily equivalent to their RIAs. § EMPIRICAL TEST We propose to use the empirical estimate of TE -TE^R as a test statistic for the divergence between the NIE and the NIE^R and the divergence between the NDE and the NDE^R. This test relies on the fact that if TE -TE^R≠ 0, it is necessarily the case that either NIE≠NIE^R or NDE≠NDE^R, or both. Thus, if we reject the null hypothesis that TE - TE^R = 0, we also reject the null hypothesis that NIE = NIE^R and NDE = NDE^R.[In a recent work, <cit.> constructs a test using a similar premise. In the context of path-specific effects, they propose using the difference between the TE and the sum of a set of RIA-type path-specific estimands to test the absence of intermediate confounding.] In addition, since |TE - TE^R|≤|NIE - NIE^R| + |NDE - NDE^R| by the triangle inequality, |TE - TE^R| also provides a lower bound for the sum of the absolute differences between the NIE and the NIE^R and between the NDE and the NDE^R. 
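In symbols, the logic just described uses nothing beyond the two decompositions already introduced, since both sum to their respective totals:

```latex
% Both decompositions sum to their totals, so the differences add up:
\mathrm{TE}-\mathrm{TE}^{R}
  =\bigl(\mathrm{NIE}-\mathrm{NIE}^{R}\bigr)+\bigl(\mathrm{NDE}-\mathrm{NDE}^{R}\bigr),
\qquad
\bigl|\mathrm{TE}-\mathrm{TE}^{R}\bigr|
  \le\bigl|\mathrm{NIE}-\mathrm{NIE}^{R}\bigr|+\bigl|\mathrm{NDE}-\mathrm{NDE}^{R}\bigr|.
```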
Under assumptions <ref>, <ref>, and <ref>, TE -TE^R=(Y_1)-(Y_0)-(Y_1,G_1) + (Y_0,G_0) is identified by the functionals below <cit.>. (Y_a) = ∬ y f(y | c,a) f(c) dy dc (Y_a,G_a) = ∬∬ y f(y | c,a,l,m) f(m | c,a) f(l | c,a) f(c) dy dm dl dc. Hence, importantly, our test is nonparametrically identifiable when there are treatment-induced confounders and Assumption <ref> is invalid. This is because although the NIE and the NDE are not nonparametrically identifiable under treatment-induced confounding, their sum is. The task now is to estimate TE -TE^R. This can be done using various estimators of TE and TE^R. Below, we discuss a nonparametric approach based on the efficient influence functions (EIF) <cit.>. The EIF for (Y_a) is well-known <cit.>: ϕ_a = (A=a)/(A=a | C) [Y-(Y | C,a)] + (Y | C,a) - [(Y | C,a)| a]. The EIF for (Y_a,G_a) is derived by <cit.>. Letting (Y | C,A,L,M)=μ(C,A,L,M) and ξ(C,A)=∬μ(C,A,l,m) f(m | C,A) f(l | C,A) dm dl, this EIF is ψ_a = (A=a)/(A=a | C) {(M | C, a)/(M | C, a, L)[Y-μ(C,a,L,M)] + ∫μ(C,a,L,m)f(m | C,a) dm - ξ(C,a) + ∫μ(C,a,l,M)f(l | C,a) dl - ξ(C,a) } - [ξ(C,a)]. Then the EIF for TE -TE^R is ϕ_1- ϕ_0 + ψ_1 -ψ_0. Using these EIFs, we can construct either double machine learning <cit.> or targeted maximum likelihood <cit.> estimators that allow nonparametric estimation with desirable theoretical properties <cit.>. In particular, these estimators are data-adaptive and can handle high-dimensional covariates. They are also multiply robust to misspecification of components of the EIF. Furthermore, using cross-fitting, they attain semiparametric efficiency and asymptotic normality under relatively weak conditions. In practice, researchers may take advantage of the output of the medoutcon package <cit.> in R to estimate TE -TE^R and the associated confidence interval.[Another R package, HDmediation <cit.>, may also be useful. medoutcon only accommodates a single binary L variable but directly outputs individual-level EIF estimates, which allows the calculation of p-values and confidence intervals. On the other hand, HDmediation accommodates vector-valued and non-binary L but does not directly output EIF estimates.] However, it is hard to develop a general-purpose statistical package based on the EIFs for all numbers and types of M and L variables, which require different nuisance parameter models. §.§ Parametric Estimation We assume the following parametric models[As an alternative to model (LM | c,a), one could use a model for (L^2 | c,a) or (L | c,a).]: (L | c,a) = α_0 + α_1 c + α_2 a (M | c,a) = γ_0 + γ_1 c + γ_2 a (LM | c,a) = γ'_0 + γ'_1 c + γ'_2 a (Y | c,a,m,l) = δ_0 + δ_1 c + δ_2 a + δ_3 m + δ_4 l + δ_5 am + δ_6 lm + δ_7 al. Although it is possible to relax this model by adding more and higher-order interactions, we opt for a moderate level of parsimony. For simplicity of presentation, we also only present coefficients for scalar M and L, while our package readily accommodates vector-valued M and L. Then, TE-TE^R = δ_6 [(M,L | C, A=1)-(M,L | C, A=0)] = δ_6[γ'_2-γ_0 α_2 - γ_2 α_0- γ_2 α_2 - (γ_1 α_2 + γ_2 α_1)(C)]. §.§ Empirical Illustration We apply our test to mediation analysis of the Moving to Opportunity (MTO) study, a large-scale longitudinal randomized control trial conducted by the Department of Housing and Urban Development of the United States <cit.>. 
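As a concrete illustration of the parametric plug-in estimator in the subsection above (the MTO application that follows instead uses the EIF-based estimator), here is a minimal sketch with ordinary least squares via pandas and statsmodels. The data frame and its column names (c, a, l, m, y) are hypothetical, scalar C, L, and M are assumed as in the displayed models, and inference (e.g., bootstrap standard errors) is omitted.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def te_minus_te_r(df: pd.DataFrame) -> float:
    """Plug-in estimate of TE - TE^R under the linear models above."""
    a_ = smf.ols("l ~ c + a", data=df).fit().params    # alpha_0, alpha_1, alpha_2
    g_ = smf.ols("m ~ c + a", data=df).fit().params    # gamma_0, gamma_1, gamma_2
    df = df.assign(lm=df["l"] * df["m"])
    gt = smf.ols("lm ~ c + a", data=df).fit().params   # gamma'_0, gamma'_1, gamma'_2
    d_ = smf.ols("y ~ c + a + m + l + a:m + l:m + a:l", data=df).fit().params
    cov_diff = (gt["a"]
                - g_["Intercept"] * a_["a"] - g_["a"] * a_["Intercept"]
                - g_["a"] * a_["a"]
                - (g_["c"] * a_["a"] + g_["a"] * a_["c"]) * df["c"].mean())
    return float(d_["l:m"] * cov_diff)                  # delta_6 times the covariance contrast

# Tiny synthetic example, purely to make the sketch executable.
rng = np.random.default_rng(0)
N = 5_000
c = rng.normal(size=N); a = rng.binomial(1, 0.5, N)
l = 0.2 + 0.3 * c + 0.5 * a + rng.normal(size=N)
m = 0.1 + 0.2 * c + 0.4 * a + 0.3 * l + rng.normal(size=N)
y = 0.3 + 0.2 * c + 0.5 * a + 0.7 * m + 0.4 * l + 0.6 * l * m + rng.normal(size=N)
print(te_minus_te_r(pd.DataFrame(dict(c=c, a=a, l=l, m=m, y=y))))
```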
We follow the conceptual set-up of <cit.> and <cit.>, who estimated the RIAs.[Due to lack of access to the restricted-use dataset, we follow their variable and sample choices only conceptually, not precisely. Hence, our estimates should be regarded as purely illustrative.] The treatment is a binary indicator of whether or not a family living in a high-poverty neighborhood was randomized to receive a Section 8 housing voucher that allowed them to move to a less poor neighborhood. We consider two mediators measured between 10-15 years of follow up, neighborhood poverty and the number of residential moves. The outcome is a composite score for mental health. For causal identification, we account for a post-treatment confounder which is whether the family used the voucher to move within the 90 days allotted. We also account for 12 baseline confounders, which capture baseline household socioeconomic and demographic characteristics, as well as neighborhood-related perceptions and aspirations. We implement our test using double machine learning estimators with two-fold cross-fitting. The nuisance functions are estimated using random forests <cit.>. For confidence intervals, we leverage the asymptotic normality of the estimators and calculate the variance estimates using the mean squared estimated EIFs. We present our estimates in Table <ref>. Our estimate of TE -TE^R is significantly different from 0 (95% Confidence Interval=(0.081,0.087)). Therefore, we reject the null hypothesis that NIE = NIE^R and NDE = NDE^R. In this empirical example, one should not interpret the RIA estimates as the natural effects. Furthermore, the sum of the absolute differences between the NIE and the NIE^R and between the NDE and the NDE^R is greater than |TE -TE^R|, which is estimated to be 0.083. § COVARIANCE PERSPECTIVE We present a covariance-based representation of the differences between the natural effects and their RIAs. We first focus on the simple case with a scalar binary mediator and a randomized treatment such that C is empty. This simple case most easily captures the core intuition. Then we generalize the covariance representation to vector mediators with arbitrary distributions and a non-randomized treatment. The expressions are derived just using the definitions of the estimands, without any identifying assumptions or functional form restrictions. §.§ Single Binary Mediator, Randomized Treatment When the treatment is randomized and the support of M is {0,1}, TE - TE^R = (M_1, Y_1,1-Y_1,0) - (M_0, Y_0,1-Y_0,0) NIE - NIE^R = (M_1 - M_0, Y_1,1-Y_1,0) NDE -NDE^R = (M_0, Y_1,1-Y_1,0-Y_0,1+Y_0,0). First, the difference between the TE and the TE^R reflects how the treatment changes the covariance between an individual's mediator value (M_a) and their mediator effect on the outcome (Y_a,1-Y_a,0).[In the causal decomposition of group disparities proposed by <cit.>, the “selection” component captures the contribution of group-differential selection into treatment to an outcome disparity. Relabelling the group and the treatment in the framework of <cit.> as the treatment and mediator, the selection component can be written as (M, Y_M=1-Y_M=0| A=1)-(M, Y_M=1-Y_M=0| A=0), which coincides with TE - TE^R when treatment is randomized.] The TE will be greater than the TE^R if the treatment (rather than the control) induces more accurate ex-ante expectations of the individual-level mediator effect such that individuals with a higher mediator effect are more likely to select into the mediator value 1. 
Second, the difference between the NIE and the NIE^R equals the covariance between the treatment effect on the mediator (M_1-M_0) and a net mediator effect on the outcome (Y_1,1-Y_1,0). Thus, if there are common determinants of these two effects, the NIE will differ from the NIE^R. These determinants could be either pre-treatment modifiers of both effects or post-treatment mediators of the first effect which also modify the second effect. In the MTO example, those who are better able to take advantage of the housing voucher (the treatment) to move to a lower-poverty neighborhood (the mediator) may, in turn, be better able to leverage the resources in their new lower-poverty neighborhood to improve mental health outcomes. In that case, the covariance between the treatment effect on the mediator and the net mediator effect on the outcome will be positive. Third, the difference between the NDE and the NDE^R is the covariance between the mediator value under control (M_0) and the interaction effect between treatment and mediator on outcome (Y_1,1-Y_1,0-Y_0,1+Y_0,0). Generally, the natural effects and the RIAs differ to the extent that the potential mediators (M_a) and the potential outcomes (Y_a',m) are correlated with one another. This makes sense as the RIAs are defined using random draws of potential mediators, G_a, that are independent of everything else, whereas the natural effects do not remove the naturally occurring dependency between the potential mediators and the potential outcomes. <cit.> proposes a set of mediation null criteria. In particular, the definition of the “sharper mediation null” is: For each individual in the population of interest, either M_1=M_0 or Y_a,m=Y_a,m' for a, m, and m'. And a valid measure of indirect effect should be zero when the sharper mediation null is true. We note that NIE = [(M_1 - M_0)(Y_1,1-Y_1,0)]. Thus, the NIE clearly satisfies this criterion, while, by Proposition <ref>, NIE^R does not. For example, if half of the population has M_1 - M_0=1 and Y_1,1-Y_1,0=0 while the other half has M_1 - M_0=0 and Y_1,1-Y_1,0=1, the NIE will be zero, but the NIE^R will be 1/4. This is consistent with <cit.>'s results. However, <cit.> proves that the NIE^R does not satisfy the null criterion using a specific counterexample, which might be viewed as a contrived example. In contrast, Proposition <ref> analytically reveals why and when the NIE^R deviates from the null criterion: it is because the NIE^R omits the natural dependency between the treatment effect on the mediator and the mediator effect on the outcome, which is a part of the NIE. To the extent that the correlation between these effects is pervasive in practice, there is nothing “contrived” in the deviation of the NIE^R from the null criterion. §.§ General Case In last subsection, we focused on the case of a binary M and a randomized treatment. Now we generalize our results to a continuous or multivalued discrete vector of mediators and a non-randomized treatment. Again, we do not make any identifying assumptions or parametric restrictions. TE -TE^R = ∑_m ∈ℳ{[(M_1=m), Y_1,m| C] } - {[(M_0=m), Y_0,m| C] } NIE - NIE^R = ∑_m ∈ℳ{[(M_1=m)-(M_0=m), Y_1,m| C] } NDE - NDE^R = ∑_m ∈ℳ{[(M_0=m), Y_1,m-Y_0,m| C] }, where (·) is the indicator function, and ℳ is the support of M. The relationships above directly hold for discrete mediators, but they also hold for continuous mediators if summations are replaced with integrals and the indicator function is replaced with the Dirac delta function. 
We thus obtain a covariance-based representation analogous to Proposition <ref>. Here, the building blocks are conditional covariances between the potential mediators (M_a) and the potential outcomes (Y_a',m) given baseline confounders C. We further summarize the c- and m-specific covariances by taking expectation over the distribution of C and uniformly taking sum over the support of M. The natural effects and the RIAs generally differ due to the dependency between the mediator and outcome potential values conditional on baseline confounders. Clearly, the natural effects and the RIAs coincide when the cross-world independence assumption (Assumption <ref>) is satisfied. In particular, TE - TE^R still has a highly interpretable form. It reflects the treatment effect on selection into mediator values based on the corresponding potential outcomes. Consider a treatment assignment a, a mediator value m, and a baseline confounder value c. If those who, when assigned a, would take the mediator value m tend to be those whose corresponding potential outcome is higher (among individuals with C=c), then [(M_a=m), Y_1,m| C=c] will be positive. This may happen if treatment a induces somewhat accurate ex-ante anticipation of what outcome m would bring about, and individuals choose M based on this anticipation. And TE - TE^R will generally be non-zero if this induction differs by treatment status for some m and c. An alternative RIA-based decomposition is developed by <cit.> and <cit.>[With relabelled variables, this decomposition also coincides with a disparity decomposition in <cit.>]. In this decomposition, the TE is decomposed to what are called the organic indirect and direct effects (NIE^organic and NDE^organic). (Y_1-Y_0)_TE= (Y_1-Y_1,G_0)_NIE^organic + (Y_1,G_0-Y_0)_NDE^organic. We again show a corresponding covariance representation in the general case. NIE - NIE^organic = -∑_m ∈ℳ{[(M_0=m), Y_1,m| C] } NDE - NDE^organic = ∑_m ∈ℳ{[ (M_0=m), Y_1,m| C] }. Finally, <cit.> propose a related decomposition <cit.>. The intervention underlying this decomposition involves assigning to people with C=c, L_a=l values of mediator randomly drawn from the distribution of M_a' conditional on C=c, L_a'=l. Denoting these random draws by G_a' | L_a, their decomposition is (Y_G_1 | C, L_1-Y_G_0 | L_0)_TE^RL = (Y_1,G_1 | L_1-Y_1,G_0 | L_1)_NIE^RL + (Y_1,G_0 | L_1-Y_0,G_0 | L_0)_NDE^RL. The differences between the natural effects and the estimands above do not have a covariance representation. This is because the way L enters into the estimands makes the NIE^RL the path-specific effect through M but not L <cit.>. Thus, these estimands are conceptually further removed from the natural effects. § STRUCTURAL EQUATION PERSPECTIVE To further facilitate substantive reasoning on the differences between the RIAs and the natural effects, we illustrate some specific data generating processes (DGPs) that would make the NDE coincide with the NDE^R or the NIE with the NIE^R. We express these DGPs using structural equations (generative models) with parametric constraints. Throughout this section, we allow for the existence of L, such that the equivalence between the natural estimands and the RIAs is not guaranteed by cross-world independence. We also do not restrict the number or the distribution of mediators. We first present results with assumed linearity and a randomized treatment, which provides the easiest intuition. 
Then we extend the results to structural equations without the linearity conditions and treatment randomization. For comparison with the parametric constraints below, we note that the nonparametric structural equations with no constraints are as follows: C = ϵ_C A = g_A(C, ϵ_A) L = g_L(C,A, ϵ_L) M = g_M(C,A,L, ϵ_M) Y = g_Y(C,A,L,M, ϵ_Y), where g_A, g_L, g_M, and g_Y are arbitrary functions of their arguments. And ϵ_C, ϵ_A, ϵ_L, ϵ_M, and ϵ_Y are unspecified inputs for each variable. Importantly, throughout this section, we allow these unspecified inputs to be arbitrarily dependent on one another and all other variables. §.§ Linear Structural Equations, Randomized Treatment Since the treatment is assumed to be randomized, C is empty. We consider the structural equations for A, L, M, and Y. In this subsection, the notation technically only applies to one L and one M variable, but our expressions can be easily extended to accommodate multiple L and M variables without compromising intuition. Under the following linear structural equations with constant coefficients (i.e., all α, β, γ terms are constants), A = ϵ_A L = α_0 + α_1 A + ϵ_L M = β_0 + β_1 A + β_2 L + β_3 AL + ϵ_M Y = γ_0 + γ_1 A + γ_2 L + γ_3 M + γ_4 AL + γ_5 AM + γ_6 LM + γ_7 ALM + ϵ_Y, we have NIE-NIE^R = (γ_6 +γ_7)β_3 (ϵ_L), and NDE-NDE^R = γ_7β_2 (ϵ_L) + γ_7 (ϵ_L, ϵ_M).[Clearly, Proposition <ref> is a special case of Proposition <ref>. It is also easy to show that when M is binary, Proposition <ref> recovers Proposition <ref>.] Hence, under the linear structural equations, there are multiple sufficient conditions for either the NIE or the NDE to coincide with their respective RIAs. For example, the NIE and the NIE^R are equivalent if β_3 = 0, i.e., there is no AL interaction in the equation for M. And the NDE and the NDE^R are equivalent if γ_7 = 0, i.e., there is no three-way interaction ALM in the equation for Y. In summary, equivalences can be established by ruling out certain interaction effects. It is possible to have only one of the NIE and the NDE coincide with their RIA. When only one of the natural effects equals its RIA, our test statistic in Section <ref>, TE-TE^R, will capture the deviation of the other natural effect from its RIA. The next subsection shows that the intuitions from the linear analysis can be extended to settings where the structural equations are much more unrestricted. §.§ Nonlinear Structural Equations, Nonrandomized Treatment Throughout this subsection, we focus on constraints on the structural equations for Y. Thus, we maintain completely unconstrained structural equations for C, A, L, and M. Again, the structural equations we consider allow L to affect M and Y in some manner. Below, we let g_Y1 and g_Y2 denote arbitrary functions of their arguments. Thus, within these functions, the effects of the variables are left completely unconstrained. If Y = (1-A)g_Y1(C,L,M,ϵ_Y1) + Ag_Y2(C,L,ϵ_Y2), NIE=NIE^R; If Y = g_Y1(C,A,L,ϵ_Y1) + g_Y2(C,M,ϵ_Y2), NDE=NDE^R. The first structural equation rules out any effect of M when A=1. The second structural equation rules out A-M and L-M interactions in the equation for Y, in the sense that the nonparametric function containing M is additively separable from the nonparametric function containing A and L. In summary, in the presence of treatment-induced confounders, it is still possible to make NIE=NIE^R or NDE=NDE^R. However, these equivalences require imposing constraints on the relevant structural equations by ruling out interaction effects or two-sided effects. 
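As a quick numerical sanity check of the linear-case expressions above, the following sketch simulates the linear structural equations with illustrative coefficient values of my own choosing (not from the paper), computes NIE and NIE^R by Monte Carlo, and compares their difference with (γ_6+γ_7)β_3 Var(ϵ_L).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Illustrative coefficients (assumptions, not values from the paper).
a0, a1 = 0.2, 0.5                                    # alpha_0, alpha_1
b0, b1, b2, b3 = 0.1, 0.4, 0.6, 0.8                  # beta_0 ... beta_3
g0, g1, g2, g3, g4, g5, g6, g7 = 0.3, 0.2, 0.5, 0.7, 0.1, 0.2, 0.9, 0.4

# Unspecified inputs; epsilon_L and epsilon_M are deliberately correlated.
eL, eM = rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 1.0]], size=n).T

L1, L0 = a0 + a1 + eL, a0 + eL                       # L under A=1 and A=0
M1 = b0 + b1 + (b2 + b3) * L1 + eM                   # M_1 (A=1 plugged in)
M0 = b0 + b2 * L0 + eM                               # M_0

def y(a, l, m):                                      # Y_{a,m}; additive outcome noise cancels
    return g0 + g1*a + g2*l + g3*m + g4*a*l + g5*a*m + g6*l*m + g7*a*l*m

G1, G0 = rng.permutation(M1), rng.permutation(M0)    # draws from the marginals

nie = np.mean(y(1, L1, M1) - y(1, L1, M0))
nie_r = np.mean(y(1, L1, G1) - y(1, L1, G0))
print(f"Monte Carlo NIE - NIE^R : {nie - nie_r:+.3f}")
print(f"(g6 + g7) * b3 * Var(eL): {(g6 + g7) * b3 * 1.0:+.3f}")
```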
The structural equation constraints we present are sufficient but not necessary to establish equivalences between the natural effects and the RIA. Nevertheless, they are derived with the goal of being maximally flexible, in the sense that they allow as much complexity in functional form as possible without incurring other strong constraints. §.§ Summary In this paper, we answer the question of when natural mediation estimands differ from their randomized interventional analogues. In order to do so, we provide tools for both empirical testing and theoretical reasoning to researchers who wish to estimate and interpret the RIAs. Our test and theories are complementary to one another: when the researcher empirically rejects the null hypothesis of the test, they can conclude with confidence (subject to the chosen significance level) that the natural effects and the RIAs differ; when the researcher has theoretical support for specific structural equations, they may reasonably posit that a particular natural effect and its corresponding RIA are equivalent. With respect to the two theoretical perspectives, the covariance perspective is complete, in the sense that it provides necessary and sufficient conditions for the equivalence between the natural effects and the RIAs; while the structural equation perspective provides simple and intuitive conditions of equivalence even when M is vector-valued with arbitrary distributions. § RELATED ESTIMANDS In causal inference, it is not unusual that a pair of competing estimands is present, where one has a more natural interpretation and the other is easier to identify. Apart from the natural mediation effects and their RIAs, we discuss two other such pairs of estimands: the average treatment effect (ATE) versus the local average treatment effect (LATE) in the IV context; and what we call the natural Mann-Whitney estimand and its RIA. The theory we developed for causal mediation analysis proves to be useful for unifying these three long-standing literatures in causal inference. In particular, we establish a formal equivalence result between estimands in the IV literature and the mediation literature. And we reveal a striking resemblance in structure between the Mann-Whitney estimands and the mediation estimands. §.§ ATE and LATE We first define the ATE and LATE estimands. In line with the notation we use for causal mediation analysis, we consider three temporally ordered variables, A, M, and Y. In the IV context, A is the IV, M is the treatment, and Y is the outcome. Here, we focus on the case where A and M are both binary, and A is randomized, which is a classic setting considered in the modern IV literature <cit.>. Then, the ATE is defined as (Y_A=1-Y_A=0), and the LATE is defined to be (Y_M=1-Y_M=0| M_A=1=1, M_A=0=0), i.e., the average effect of M on Y among those whose M value is induced to increase by an increase in A (those who are the “compliers”). In this subsection, we explicitly write the assignment variables in the potential outcomes to avoid ambiguity. Also note that the labelling of the ATE and the LATE involves a slight abuse of terminology in juxtaposition to mediation estimands, as the “treatment” refers to A in the IV context, while it refers to M in the mediation context. In the IV context, the estimand with a more natural interpretation is ATE, while LATE requires weaker identifying assumptions <cit.>. Just like in the mediation context, applied researchers often interpret a LATE estimate as if it was the ATE <cit.>. 
We show that there exists a direct equivalence between ATE-LATE and NIE-NIE^R under four standard identifying assumptions for LATE: 1) Exclusion: Y_A=a, M=m=Y_M=m, ∀{a,m}; 2) Independence: A {M_A=1, M_A=0, Y_A=1, Y_A=0}; 3) Relevance: (M | A=1)-(M | A=0) >0; and 4) Monotonicity: M_A=1≥ M_A=0. We also denote the identified functional called the Wald estimand as Wald(Y | A=1)-(Y | A=0)/(M | A=1)-(M | A=0). Under assumptions of exclusion, independence, and relevance, Wald-ATE=(M_A=1-M_A=0, Y_M=1-Y_M=0)/(M_A=1-M_A=0)=NIE-NIE^R/(M_1-M_0), which, further under monotonicity, also equals LATE-ATE.[Also, by Proposition <ref> and the exclusion assumption, NIE-NIE^R=TE-TE^R.] Thus, under the four assumptions identifying the LATE, the difference between the LATE and the ATE is simply the difference between the NIE and the NIE^R scaled by the effect of A on M. This means that, under these assumptions, the LATE differs from the ATE if and only if the NIE differs the NIE^R. Intuitively, (M_A=1-M_A=0, Y_M=1-Y_M=0)=[(M_A=1=1, M_A=0=0),Y_M=1-Y_M=0] captures selection into the subpopulation of compliers based on the effect of M on Y. If there is strong selection, then the local average effect of M on Y among compliers must differ substantially from the corresponding global average effect. There is a long-standing literature on using the Wald estimand to estimate the ATE based on exclusion, independence, relevance, and another additional assumption <cit.>.[In fact, the identification results extend to the entire potential outcome distributions <cit.>.] A weak form of the additional assumption has recently appeared in (2020, Section 16.3) and <cit.>, which can be written as (M_A=1-M_A=0, Y_M=1-Y_M=0)=0. Proposition <ref> shows that this is, in fact, the weakest possible among such assumptions. §.§ Natural Mann-Whitney estimand and its RIA We define the natural Mann-Whitney estimand as [(Y_1 ≥ Y_0)], i.e., the probability of the potential outcome under treatment being greater or equal to the potential outcome under control. It is often referred to as the probability of no harm (the probability of the treatment not worsening the outcome), given that a larger value of Y is desired. This estimand is broadly useful for rank-based evaluation of treatment effects, especially for noncontinuous ordinal outcomes.[A related estimand, (Y_1 > Y_0 | A=1)/(Y_1=1 | A=1), for a binary Y, is called the probability of necessity <cit.>.] We call this estimand a “natural” estimand, because it is an aggregation of an individual-level contrast of potential outcomes. The natural Mann-Whitney estimand is difficult to identify for the same reason that the NIE and the NDE are difficult to identify: just as (Y_1, M_0), the natural Mann-Whitney estimand involves the assignment of two different treatment values to the same individual. Due to the fundamental problem of causal inference <cit.>, the joint distribution of two potential outcomes is impossible to identify even with a randomized treatment. Hence, an assumption analogous to cross-world independence (Assumption <ref>) can also be used to identify the natural Mann-Whitney estimand: Y_1 Y_0 <cit.>. However, this assumption is clearly unlikely to hold. Consequently, an alternative estimand has been used in practice: [(H_1 ≥ H_0)], where H_a is a value randomly drawn from the marginal distribution of Y_a. Clearly, this alternative estimand has the interpretation of a RIA. 
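A one-line example of my own (not from the paper) previews how far apart these two estimands can be. Suppose Y_0 is standard normal and the treatment harms everyone slightly, Y_1 = Y_0 - 0.1. Then, writing Φ for the standard normal CDF,

```latex
% Natural Mann-Whitney estimand vs. its RIA when Y_1 = Y_0 - 0.1, Y_0 ~ N(0,1):
P(Y_1 \ge Y_0) = 0,
\qquad
P(H_1 \ge H_0) = P\big(Z_1 - 0.1 \ge Z_0\big)
               = \Phi\!\left(-\tfrac{0.1}{\sqrt{2}}\right) \approx 0.47,
% where Z_0, Z_1 are independent N(0,1), because H_1 and H_0 are independent
% draws from the two marginal distributions.
```

So the natural probability of no harm is exactly zero while its RIA sits close to one half.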
In contrast to the natural Mann-Whitney estimand, the Mann-Whitney RIA does not aggregate an individual-level contrast. On the other hand, randomization of treatment does enable the identification of the Mann-Whitney RIA. The Mann-Whitney RIA has a long history in statistics, dating back to the Mann-Whitney U test <cit.> and the Wilcoxon rank-sum test <cit.>. Recent methodological development based on the Mann-Whitney RIA includes the probability index model <cit.>, the win ratio <cit.>, and the rank average treatment effect <cit.>. Similar to the mediation literature, conflation of the natural Mann-Whitney estimand and its RIA is pervasive even in methodological work. For example, in a textbook discussion on the Mann-Whitney RIA, <cit.> claims that “If this conclusion is statistically significant, it is very relevant evidence to a physician that most of his patients will be better off with the treatment.” <cit.> states “This allows us to make inference about the potential outcome-based δ through the estimable quantity ξ...", where δ and ξ are respectively the natural Mann-Whitney estimand and its RIA. And <cit.> names the Mann-Whitney RIA the “D-value” and argues that “The D-value has a clear interpretation as the proportion of patients who get worse after the treatment”, in the context where a smaller value of a continuous Y is desirable. Interestingly, despite recurrent confusion, the literature on Mann-Whitney estimands has been clarifying the important differences between the natural Mann-Whitney estimand and its RIA since decades before <cit.> pioneered an analogous inquiry in causal mediation analysis. The early work of <cit.> already notes the possibility of sign reversal in the relationship between the natural Mann-Whitney estimand and its RIA (when 1/2 is subtracted from both), which has been known as the Hand's paradox. Multiple work since has considered various DGPs under which the Hand's paradox is present or absent <cit.>. This line of work is in the same spirit as our theoretical analysis on the relationship between the natural mediation estimands and their RIAs. Lastly, we note that there is also a covariance representation for the difference between the natural Mann-Whitney estimand and its RIA. [(Y_1 ≥ Y_0)] - [(H_1 ≥ H_0)] = ∑_t ∈𝒯∑_s ∈𝒮(t ≥ s) [(Y_1 = t),(Y_0=s)], where 𝒯 and 𝒮 are respectively the supports of Y_1 and Y_0. When Y is binary with the support of {0,1 }, the expression simplifies to (Y_1, Y_0). Clearly, the natural Mann-Whitney estimand differs from its RIA to the extent that Y_1 and Y_0 are dependent on each other. This is in parallel to the natural mediation effects differing from their RIAs to the extent that M_a and Y_a',m are dependent. By redefining the estimands using random draws, RIAs in both cases miss a naturally occurring dependency. The thorny issue created by cross-world treatment assignments for identification cannot be magically waved away by redefining the estimand. §.§ Summary The dilemma facing researchers in all these three fields of causal inference (causal mediation analysis, instrumental variable, and Mann-Whitney estimands) is that a natural estimand is more interpretable but hard to identify while an alternative estimand is less interpretable but easier to identify. Going forward, we recommend four strategies to applied researchers. First, we join <cit.> to call for more clarity in interpreting estimates of the alternative estimands in all three areas. 
Second, with the addition of our two theoretical perspectives in this paper, now researchers in all three areas are able to reason about when the natural estimand coincides with at least does not have the opposite sign to the alternative estimand. Third, in all three areas, bounding methods have been developed to provide partial identification for the natural estimands <cit.>. Fourth, in causal mediation analysis, we uniquely also provide a falsification test for interpreting the RIAs as the natural mediation effects, which goes beyond theoretical reasoning and provides empirical guidance. § ACKNOWLEDGEMENT We are grateful for a comment from a reviewer at the Annals of Applied Statistics for <cit.>, which inspired us to start this project. We also thank Sameer Deshpande, Hyunseung Kang, Xinran Miao, Chan Park, and Michael Sobel for helpful suggestions. An earlier version of this paper was presented at the American Causal Inference Conference in 2024. We thank the audience for an engaging discussion. § APPENDICES §.§ A1. Proof of Proposition 1 The NIE and NDE are defined in terms of (Y_a,M_a') for two treatment values (a,a'). When M is binary and its support is {0,1}, we rewrite this quantity just using its definition: =(Y_a,M_a') = [Y_a,1M_a' + Y_a,0(1-M_a')] = (Y_a, 0) + [ M_a'(Y_a, 1-Y_a, 0) ] = (Y_a, 0) + {[ M_a'(Y_a, 1-Y_a, 0) | C] }. The NIE^R and NDE^R are defined in terms of (Y_a, G_a') for two treatment values (a,a'). When M is binary, we again rewrite this quantity using its definition: =(Y_a, G_a') = [(Y_a, G_a'| C)] =[(Y_a, 1| G_a'=1, C)(G_a'=1 | C) + (Y_a, 0| G_a'=0, C)(G_a'=0 | C) ] ={(Y_a, 1| C)(M_a'| C) + (Y_a, 0| C)[1-(M_a'| C) ] } = (Y_a, 0) + {(M_a'| C) [(Y_a, 1 - Y_a, 0| C)] } =(Y_a,M_a') - [(M_a', Y_a, 1-Y_a, 0| C)]. Then using the results above, we have the following representations: NIE = (Y_1,M_1-Y_1,M_0) = [(M_1 - M_0)(Y_1,1-Y_1,0)] NIE^R = (Y_1,G_1-Y_1,G_0) = [(M_1 - M_0 | C) (Y_1,1-Y_1,0| C) ] NDE = (Y_1,M_0-Y_0,M_0) = (Y_1,0-Y_0,0) + { M_0 [Y_1,1-Y_1,0-(Y_0,1-Y_0,0)] } NDE^R = (Y_1,G_0-Y_0,G_0) = (Y_1,0-Y_0,0) + {(M_0 | C) [Y_1,1-Y_1,0-(Y_0,1-Y_0,0) | C] }. Hence, NIE = NIE^R + [(M_1 - M_0, Y_1,1-Y_1,0| C)] NDE = NDE^R + {[M_0, Y_1,1-Y_1,0-(Y_0,1-Y_0,0) | C] } TE = TE^R + [(M_1, Y_1,1-Y_1,0| C) - (M_0, Y_0,1-Y_0,0| C)]. When the treatment is randomized, C becomes an empty set, and we obtain the results shown in Proposition 1. §.§ A2. Proof of Propositions 2 and 3 The NIE and NDE are still defined in terms of (Y_a,M_a') for two treatment values (a,a'). Treating M as a vector of continuous variable, we rewrite this quantity using its definition: =(Y_a,M_a') =[ ∫ Y_a,m(M_a'=m) m ] = ∫[Y_a,m(M_a'=m)] m = ∫{[Y_a,m(M_a'=m) | C] } m, where the first equality holds by treating the Dirac delta function (M_a'=m) as a limiting case of a probability density function concentrated at M_a'=m. This allows us to express a function of M_a' as an integral over the support of M_a'. The NIE^R and NDE^R are defined in terms of (Y_a, G_a') for two treatment values (a,a'). We rewrite this quantity as follows: =(Y_a, G_a') = [(Y_a, G_a'| C)] =∬(Y_a, m| G_a'=m, C=c) f_G_a'| c(m) f_C(c) m c =∬(Y_a, m| C=c ) f_M_a'| c(m) f_C(c) m c =∬(Y_a, m| C=c ) [(M_a'=m) | C=c] f_C(c) m c, where the last equality is by the property of the Dirac delta function (M_a'=m). 
Therefore, NIE = (Y_1,M_1-Y_1,M_0) = ∫{{ [(M_1=m)-(M_0=m)] Y_1,m}| C } m NIE^R = (Y_1,G_1-Y_1,G_0) = ∫{ [(M_1=m)-(M_0=m) | C] ( Y_1,m| C ) } m NDE = (Y_1,M_0-Y_0,M_0) = ∫{[ (Y_1,m-Y_0,m)(M_0=m) | C ]} m NDE^R = (Y_1,G_0-Y_0,G_0) = ∫{[ (Y_1,m-Y_0,m) | C][(M_0=m) | C] } m. And NIE = NIE^R + ∫{[(M_1=m)-(M_0=m), Y_1,m| C]} m NDE = NDE^R + ∫{[Y_1,m-Y_0,m, (M_0=m) | C] } m TE = TE^R + ∫{[Y_1,m, (M_1=m) | C]} - {[Y_0,m, (M_0=m) | C] } m. When M is a vector of discrete variables, we replace the integrals with summations to obtain the results in Proposition 2. Proposition 3 similarly follows from the expressions of (Y_a,M_a') and (Y_a,G_a') derived above. §.§ A3. Proof of Proposition 4 We let L_a denote the potential values of L under treatment assignment a. Under the structural equations of Proposition 4, Y_1 M_1 = γ_0 + γ_1 + (γ_2+γ_4) L_1 + (γ_3+γ_5) M_1 + (γ_6+γ_7) L_1 M_1 + ϵ_Y Y_1 M_0 = γ_0 + γ_1 + (γ_2+γ_4) L_1 + (γ_3+γ_5) M_0 + (γ_6+γ_7) L_1 M_0 + ϵ_Y Y_0 M_0 = γ_0 + γ_2 L_0 + γ_3 M_0 + γ_6L_0 M_0 + ϵ_Y Y_1 G_1 = γ_0 + γ_1 + (γ_2+γ_4) L_1 + (γ_3+γ_5) G_1 + (γ_6+γ_7) L_1 G_1 + ϵ_Y Y_1 G_0 = γ_0 + γ_1 + (γ_2+γ_4) L_1 + (γ_3+γ_5) G_0 + (γ_6+γ_7) L_1 G_0 + ϵ_Y Y_0 G_0 = γ_0 + γ_2 L_0 + γ_3 G_0 + γ_6L_0 G_0 + ϵ_Y. Hence, NDE = γ_1 + (γ_2 +γ_4)(L_1) -γ_2 (L_0)+ γ_5 (M_0) + (γ_6+γ_7)(L_1 M_0) - γ_6 (L_0M_0) NDE^R = γ_1 + (γ_2 +γ_4)(L_1) -γ_2 (L_0)+ γ_5 (G_0) + (γ_6+γ_7)(L_1 G_0) - γ_6 (L_0G_0) NIE = (γ_3+γ_5) (M_1-M_0) +(γ_6+γ_7)(L_1M_1 -L_1M_0) NIE^R = (γ_3+γ_5) (G_1-G_0) + (γ_6+γ_7)(L_1G_1 -L_1G_0). Noting that (M_a)=(G_a), and =(L_a M_a')-(L_a G_a') = (L_a M_a')-(L_a) ( G_a') = (L_a, M_a') = [α_0 + α_1 a +ϵ_L, β_0+β_1a'+β_2(α_0+α_1a'+ϵ_L)+β_3a'(α_0+α_1a'+ϵ_L)+ϵ_M] =(β_2+β_3a')(ϵ_L) + (ϵ_L, ϵ_M). we have NDE-NDE^R = (γ_6+γ_7)(L_1, M_0) - γ_6 (L_0, M_0) = γ_7β_2 (ϵ_L) + γ_7 (ϵ_L, ϵ_M) NIE-NIE^R = (γ_6+γ_7){(L_1, M_1 ) - (L_1, M_0 ) } = (γ_6 +γ_7)β_3 (ϵ_L). §.§ A4. Proof of Proposition 5 For the NDE part, our proof leverages an assumption in <cit.>: Y_1,m-Y_0,m is a random variable not dependent on m. Originally, this assumption was proposed to identify NDE in the presence of treatment-induced confounding. We first prove that this assumption is sufficient for NDE=NDE^R. Then we prove that the structural equation in Proposition 5 is, in turn, sufficient for this assumption to hold. According to our Proposition <ref>, we just need to show that under the assumption of <cit.>, ∫{[Y_1,m-Y_0,m, (M_0=m) | C] } m=0. Let Y_1,m-Y_0,m=B, then, =∫{[(M_0=m), Y_1,m-Y_0,m| C] } m = ∫{[(M_0=m) B | C] - [(M_0=m) | C] (B | C) } m = {[ ∫(M_0=m) m B | C ] - ∫ f_M_0(m | C) m (B | C) } = [ (B | C)- (B | C) ]=0. Next, we show that, if Y = g_Y1(C,A,L,ϵ_Y1) + g_Y2(C,M,ϵ_Y2), the assumption of <cit.> is satisfied. Under this structural equation for Y, = Y_1,m-Y_0,m = g_Y1(C,1,g_L(C,1,ϵ_L), ϵ_Y1) + g_Y2(C,m,ϵ_Y2) - g_Y1(C,0,g_L(C,0,ϵ_L), ϵ_Y1) - g_Y2(C,m,ϵ_Y2) = g_Y1(C,1,g_L(C,1,ϵ_L), ϵ_Y1) - g_Y1(C,0,g_L(C,0,ϵ_L), ϵ_Y1), which is not dependent on m. For the NIE part, we propose a novel condition that is analogous to the assumption of <cit.> used above: Y_1,m is a random variable not dependent on m. We refer to this condition as the analogous assumption. We first show that the analogous assumption is sufficient for NIE to be equal to NIE^R. According to Proposition <ref>, it suffices to show ∫{[(M_1=m)-(M_0=m), Y_1,m| C] } m=0. 
Let Y_1,m=B, then under the analogous assumption, =∫{[(M_1=m)-(M_0=m), Y_1,m| C] } m =∫{[(M_1=m)-(M_0=m), B | C] } m = ∫{[(M_1=m) B | C]-[(M_0=m) B | C] = - [(M_1=m)-(M_0=m) | C] (B | C) } m = {[∫(M_1=m) m B | C]-[∫(M_0=m) m B | C] =- [∫(M_1=m)-(M_0=m) m | C] (B | C) } = {[B | C] - [B | C] } =0. Then, we show that if Y = (1-A)g_Y1(C,L,M,ϵ_Y1) + Ag_Y2(C,L,ϵ_Y2), the analogous assumption is satisfied. Under this structural equation, Y_1,m=g_Y2(C,g_L(C,1,ϵ_L),ϵ_Y2), which clearly does not depend on m. §.§ A5. Proof of Proposition 6 =Wald =(Y_A=1-Y_A=0)/(M_A=1-M_A=0) =[(M_A=1-M_A=0)(Y_M=1-Y_M=0)]/(M_A=1-M_A=0) =(M_A=1-M_A=0)(Y_M=1-Y_M=0) + (M_A=1-M_A=0, Y_M=1-Y_M=0)/(M_A=1-M_A=0) = ATE + NIE-NIE^R/(M_A=1-M_A=0). The first equality is by the independence assumption, the second is by the exclusion assumption (equation 9 in <cit.>), the third is by the definition of covariance, the fourth is by Proposition <ref> and the exclusion assumption. The relevance assumption makes sure the denominator is nonzero. Finally, under assumptions of exclusion, independence, relevance, and monotonicity, the classic result of <cit.> equates Wald with LATE. §.§ A6. Proof of Proposition 7 =[(Y_1 ≥ Y_0)] - [(H_1 ≥ H_0)] = ∬(t ≥ s)f_Y_1,Y_0(t,s) t s - ∬(t ≥ s)f_H_1,H_0(t,s) t s = ∬(t ≥ s)f_Y_1,Y_0(t,s) t s - ∬(t ≥ s)f_H_1(t)f_H_0(s) t s = ∬(t ≥ s)f_Y_1,Y_0(t,s) t s - ∬(t ≥ s)f_Y_1(t)f_Y_0(s) t s = ∬(t ≥ s)[(Y_1=t)(Y_0=s)] t s - ∬(t ≥ s)[(Y_1=t)][(Y_0=s)] t s = ∬(t ≥ s) [(Y_1 = t),(Y_0=s)] t s. When Y is discrete, this becomes the expression in Proposition 7. Furthermore, when the support of Y is {0,1}, =∑_t ∈𝒯∑_s ∈𝒮(t ≥ s) [(Y_1 = t),(Y_0=s)] = [(Y_1=1),(Y_0=1)] + [(Y_1=1),(Y_0=0)] + [(Y_1=0),(Y_0=0)] = [(Y_1=1)(Y_0=1)] - [(Y_1=1)][(Y_0=1)] =+ [(Y_1=1)(Y_0=0)] - [(Y_1=1)][(Y_0=0)] =+ [(Y_1=0)(Y_0=0)] - [(Y_1=0)][(Y_0=0)] = (Y_1 Y_0) - (Y_1)(Y_0) =+ [Y_1 (1-Y_0)] - (Y_1)[1-(Y_0)] + [(1-Y_1) (1-Y_0)]-[(1-Y_1)][(1-Y_0)] = (Y_1 Y_0) - (Y_1)(Y_0) = (Y_1, Y_0). chicago
http://arxiv.org/abs/2407.02399v1
20240702161657
Vector-like Quark Stabilised Higgs Inflation: Implications for Particle Phenomenology, Primordial Gravitational Waves and the Hubble Tension
[ "John McDonald" ]
hep-ph
[ "hep-ph", "astro-ph.CO", "hep-th" ]
http://arxiv.org/abs/2407.01680v1
20240701180004
Sign changes of the thermoelectric transport coefficient across the metal-insulator crossover in the doped Fermi Hubbard model
[ "Sayantan Roy", "Abhisek Samanta", "Nandini Trivedi" ]
cond-mat.str-el
[ "cond-mat.str-el", "cond-mat.mtrl-sci" ]
Department of Physics, The Ohio State University, Columbus OH 43210, USA Department of Physics, The Ohio State University, Columbus OH 43210, USA Department of Physics, The Ohio State University, Columbus OH 43210, USA § ABSTRACT We investigate the doping-dependence of the Seebeck coefficient, as calculated from the Kelvin formula, for the Fermi Hubbard model using determinantal quantum Monte Carlo simulations. Our key findings are: (1) Besides the expected hole to electron-like behavior change around half filling, we show that the additional sign change at an electronic density n_s (and correspondingly a hole density p_s) is controlled by the opening of a charge gap in the thermodynamic density of states or compressibility and not by the pseudogap scale in the single particle density of states. (2) We find that n_s(T,U) depends strongly on the interaction U and shows an unusual non-monotonic dependence on temperature with a maximum at a temperature T≈ t, on the order of the hopping scale. (3) We identify local moment formation close to half filling as the main driver for the anomalous behavior of the thermoelectric transport coefficient. Sign changes of the thermoelectric transport coefficient across the metal insulator crossover in the doped Fermi Hubbard model Nandini Trivedi July 8, 2024 ================================================================================================================================= Introduction: Of particular interest in strongly correlated systems is the idea of emergence <cit.>, where collective behavior with markedly novel properties emerge due to interaction effects. When a Mott insulator (which has an odd number of electrons per unit cell, with strong interactions) is slightly doped, the system can behave very differently from a Landau Fermi liquid with well defined quasiparticles <cit.>. Experimental detection of such novel phases remains a considerable effort in understanding strongly correlated systems <cit.>. Transport is one of the first probes of quantum materials and can reveal striking puzzles about nature of excitations in these enigmatic phases. Thermopower or the Seebeck coefficient is an important transport quantity which measures the efficiency of direct conversion from thermal to electrical energy. In addition, it tracks the nature of carriers in the system. In weakly interacting systems, where Fermi liquid theory holds, the Seebeck coefficient is positive when the excitations are electron-like, but changes to negative when excitations are hole-like <cit.>. However, the presence of strong correlations in the system can lead to anomalous behavior, including a change in sign and magnitude of the Seebeck coefficient compared to Fermi liquid predictions. Earlier experiments found universal zero crossings of the Seebeck coefficient near the optimal doping of cuprates for a large class of materials <cit.>. More recent experiments on thermopower have offered a wealth of information; from sign changes in the Seebeck coefficient attributed to the Fermi surface reconstruction <cit.>, signatures of quantum fluctuations near critical points <cit.>, violation of Fermi liquid behavior in twisted bilayer graphene <cit.>, to novel flatband physics <cit.>. Numerical simulations of transport quantities serve as an important benchmark to understand experimental observations. Over the last decade, several calculations of the Seebeck coefficient have been reported for both model Hamiltonians <cit.> and realistic materials <cit.>. 
These calculations have established a host of phenomena, e.g. effect of particle hole symmetry on the doping dependence of Seebeck coefficient <cit.>, the relevance of an effective Hubbard-like model for describing cuprate physics <cit.>, skew scattering influencing the anomalous sign change of Seebeck coefficient in non-Fermi liquids <cit.>, and emergence of a “Hund's metal" like behavior when multiple orbital degrees of freedom contribute to transport <cit.>. Seebeck coefficient has provided information about quantum criticality in high-T_c superconductors <cit.> and heavy fermion systems <cit.>, in addition to identifying a crossover region between Fermi liquid behavior and Mott-Ioffe-Regel limit with resilient quasiparticles <cit.>. Several questions remain to be answered and form the focus of this Letter: i) What drives the anomalous sign change of the Seebeck coefficient in the vicinity of Mott insulating phase? ii) Can one find a footprint of the Seebeck anomaly in other thermodynamic observables? If so, what are the temperature and interaction dependencies of such behavior? iii) Since strong correlations tend to localize electrons and form local moments, what role do they play in the Seebeck anomaly? What are the probes to test such behavior? We derive insights about the Seebeck coefficient S in the strongly correlated regime using the Kelvin formula S_ Kelvin = -1/e(∂ s/∂ n)_T <cit.> represented in terms of the entropy density. This can be obtained from the Kubo formula in the slow (thermodynamic) limit instead of the fast (transport) limit while evaluating the Onsager coefficients. Our main results are the following: (1) At intermediate to high temperatures, the appearance of anomalous zero crossings n_s of the Seebeck coefficient S_ Kelvin is governed by the opening of a charge gap in the thermodynamic density of states (TDOS). (2) In presence of strong correlations, n_s(T) shows a non-monotonic behavior finally approaching the expected half filling point at a temperature set by U. (3) The anomalous phase where Seebeck coefficient has the opposite sign compared to Fermi liquid theory is primarily dominated by the formation of local moments. Model and method: We consider the single band Fermi Hubbard model on a square lattice with nearest neighbor hopping and onsite repulsive interaction, ℋ= -t∑_⟨ ij ⟩,σĉ^†_iσĉ_jσ-μ∑_in̂_i+U∑_i( n̂_i↑ - 1/2)( n̂_i↓ - 1/2). The operators ĉ_iσ and ĉ^†_iσ are fermionic annihilation and creation operators, respectively. The number operator is defined as n̂_i,σ≡ĉ^†_iσĉ_iσ, n̂_i = n̂_i ↑ + n̂_i↓, and the particle density per site n = ∑_i⟨n̂_i⟩/N_s, where N_s is the total number of sites. We define hopping amplitude t as the energy scale, μ is the chemical potential and U is the onsite Coulomb repulsion. We perform numerically exact Determinantal Quantum Monte Carlo (DQMC)  <cit.> at intermediate to high interaction strengths. We perform analytic continuation using the maximum entropy package CQMP-MaxEnt <cit.>, with default models chosen to optimize the sum rules <cit.>. We also construct a semi-analytic parton mean field theory in Appendix <ref> to capture the effect of charge gap on the Seebeck anomaly. The evolution of the Seebeck coefficient S_ Kelvin(n,U,T) as a function of density n, interaction strength U and temperature T, is shown in Fig. <ref>(a)-(b). 
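To make the Kelvin formula concrete, the sketch below evaluates S_Kelvin = -(1/e)(∂s/∂n)_T numerically in the non-interacting limit, using the U = 0 square-lattice tight-binding band, where the density and entropy per site follow from the Fermi function. This free-fermion reference point is for illustration only (units t = k_B = e = 1; the grid size and temperature are arbitrary choices); the interacting results discussed in the text require the DQMC entropy.

```python
# Kelvin-formula Seebeck coefficient S = -(1/e) (ds/dn)_T for the U = 0
# square-lattice tight-binding band (units: t = k_B = e = 1).
import numpy as np

def band_energies(L=200):
    k = 2.0 * np.pi * np.arange(L) / L
    kx, ky = np.meshgrid(k, k)
    return -2.0 * (np.cos(kx) + np.cos(ky))        # eps_k = -2t (cos kx + cos ky)

def density_and_entropy(mu, T, eps):
    """Density n and entropy s per site for free fermions (factor 2 for spin)."""
    f = 1.0 / (np.exp((eps - mu) / T) + 1.0)
    f = np.clip(f, 1e-12, 1.0 - 1e-12)
    n = 2.0 * f.mean()
    s = -2.0 * np.mean(f * np.log(f) + (1.0 - f) * np.log(1.0 - f))
    return n, s

def seebeck_kelvin(mu, T, eps, dmu=1e-3):
    n_m, s_m = density_and_entropy(mu - dmu, T, eps)
    n_p, s_p = density_and_entropy(mu + dmu, T, eps)
    return -(s_p - s_m) / (n_p - n_m)              # -(ds/dn)_T = -(ds/dmu) / (dn/dmu)

eps = band_energies()
T = 0.5
for mu in (-1.0, 0.0, 1.0):
    n, _ = density_and_entropy(mu, T, eps)
    print(f"mu = {mu:+.1f}   n = {n:.3f}   S_Kelvin = {seebeck_kelvin(mu, T, eps):+.3f}")
```

By particle-hole symmetry of this band, S_Kelvin vanishes at half filling and changes sign across it, which is the non-interacting counterpart of the behavior discussed above.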
With increasing interaction strength, there is a deviation of the Seebeck coefficient from Fermi liquid like behavior and appearance of an anomalous zero crossing at n_s(T,U) with a “wrong" sign of the Seebeck coefficient that develops near the Mott insulator at half filling. n_s (T,U) shows a striking behavior with temperature in Fig. <ref>(c). Contrary to Hall coefficient calculations, in which the anomaly vanishes monotonically with increasing temperature <cit.>, the Seebeck anomaly increases with increasing temperature, saturates at a temperature scale set by the interaction strength before eventually approaching the free particle behavior. As we discuss below, this anomalous behavior can be understood from the entropic origin of thermopower, and the formation of local moments that dominate transport in this regime. Onset of Seebeck anomaly: The Kelvin formula allows us to relate the Seebeck coefficient to the thermodynamic entropy (defined per unit area), s = 1/T(ϵ_k+ϵ_p-μ n), where ϵ_k, ϵ_p are the kinetic and potential energy densities respectively, and n is the number density. In the non-interacting limit, the entropy is maximum at half filling. For U<U_c(T) the entropy retains its maximum value at half filling. However, for stronger interactions U>U_c(T), the maximum in the entropy shifts to a finite doping, see Fig. <ref>(b); furthermore the peak value continues to shift to higher dopings with increasing temperature initially, shown in Fig. <ref>(a). It is evident from the Maxwell relation, ∂ ^2 s/∂μ^2 = ∂κ̃/∂ T that a maximum of the entropy at finite doping requires ∂κ̃/∂ T = 0 at some chemical potential μ, and increasing the interaction strength U shifts the entropy away from half filling <cit.>. At the critical strength U_c(T), ∂κ̃/∂ T=0 appears at the half filling point, signifying a crossover from a metal below U_c(T) to an insulator above U_c(T) <cit.>. The maximum in entropy as a function of n identifies the location of the sign changes n_s(T,U) of the Seebeck coefficient S_ Kelvin for a given T and U. We argue below that we can determine if a particular set of parameters T,U will have a sign change only at the expected half filling or will also have the anomalous sign change, by considering the behavior of the thermodynamic density of states (TDOS) κ̃=dn/dμ at half filling. Fig. <ref>(c) shows a phase diagram in the T-U plane separating two regions: Region M with ∂κ̃/∂ T <0, from Region I with ∂κ̃/∂ T >0, with a separatrix marking the boundary ∂κ̃/∂ T =0. Region M is a metallic state where there is no gap in the TDOS at the chosen T,U and Region I is an insulating state with a pseudogap or a thermally activated density fluctuations in the TDOS at the chosen T,U. This definition above allows us to extend the concepts of metal and Mott insulator to finite temperatures. Importantly, these regions also coincide with the absence of a Seebeck anomaly in the metallic regions of the phase diagram and conversely, the presence of a Seebeck anomaly in the insulating regions, due to the Maxwell construction stated above. In Appendix <ref>, we explicitly show this case by following the evolution of S_ kelvin with increasing U. Insulator-metal crossover, Seebeck anomaly and local moments: From the behavior of TDOS with temperature, one can identify a doping driven Mott Insulator to metal crossover, shown in Fig. <ref>(a). The Maxwell construction allows one to relate this to the temperature dependence of anomalous zero crossings of S_ kelvin. As shown in Fig. 
<ref>(b), the doping at which there is a insulator to metal crossover, n_c, closely mirrors the anomalous zero crossings of the Seebeck coefficient n_s. It increases initially with temperature upto T ∼ O(t), before turning around and disappearing with the Seebeck anomaly when the charge gap in TDOS closes. Given that the anomalous Seebeck coefficient is found for strong interaction U, we investigate its connection with local moment formation. The local moment is defined by m^2_i = (n_i↑-n_i↓)^2 and from that we define the connected moment-moment correlation as C_mm(i,j) = 1/N_s∑_ij[⟨ m^2_im^2_j⟩ - ⟨ m^2_i⟩⟨ m^2_j⟩]. C_mm(i,j) captures the conditional probability of having a local moment on site j, given that site i already has a local moment <cit.>. The non-local part of this correlator, C_mm^ nl = ∑_i≠ jC_mm(i,j) thus serves as a global probe of moment correlations. The U dependence of C_mm^ nl at half filling, shown in Fig. <ref>(c) helps identify two regimes: (i) In the weak coupling regime, moment correlations increase with increasing U, peaking at an interaction strength U_ max(T). The size of the local moments, m^2 also grows with U in this regime. (ii) In the strong coupling regime, increasing U beyond U_ max(T) results in decreasing moment correlations; the size of the local moment starts to saturate on crossover into this regime. Such behavior is also seen at finite doping <cit.>. n_s(T,U) in Fig. <ref>(d) differentiates the small U local moment forming regime where n_s is strongly U-dependent from the large U regime of well formed local moments where n_s shows minimal U dependence beyond U_ max(T). The difference between the weak and strong coupling regime, both in terms of U and T dependence can be understood as follows. In the weak coupling regime, increasing temperature washes out the Seebeck anomaly. Since the local moments are not well formed enough, the system tries to minimize free energy f = ε_k+ε_p-Ts by lowering the kinetic energy, which competes with moment formation; this effect increases with temperature. In the strong coupling regime, free energy can be minimized by accessing the larger ln 2 entropy from the well formed moments, hence the doping window over which moments can form increases with increasing temperature. At a fixed temperature, once the moments are well formed at U_ max(T), increasing U contributes weakly to the free energy through ε_p = UD (where D is the average doublon density). This results in minimal dependence of n_s(U;T) in the strong coupling regime. In Ref. <cit.> it is argued that the location of the sign changes of the Seebeck coefficent obtained from the Kubo and Kelvin formulae are similar. We therefore propose the saturation of the peaks of entropy, and hence n_s(U), with increasing U at a fixed temperature in the strong coupling regime, as a possible candidate for the universal doping dependence of the Seebeck anomaly seen in experiments <cit.>. To further demonstrate that local moments are indeed responsible for the increase of Seebeck anomaly with increasing temperature in the strong coupling regime, we turn to a local probe for moment formation. Fermionic anti-commutation relations enforce that each site can have either a local moment m^2_i, a doublon, d_i = ⟨ n_i↑n_i↓⟩ or a holon, h_i = ⟨ (1-n_i↑)(1-n_i↓) ⟩. In the particle doped side, holon occupation is minimal, and doublon and local moment occupation on a single site are hence anti-correlated. 
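The connected moment correlator and its non-local part can be estimated straightforwardly once site-resolved moments are available; a minimal sketch is given below. The occupation snapshots are placeholders (in practice the expectation values come from DQMC measurements), and the overall 1/N_s normalization follows the convention written above.

```python
# Connected moment-moment correlator C_mm(i, j) = <m_i^2 m_j^2> - <m_i^2><m_j^2>
# and its non-local sum over i != j, estimated from site-resolved snapshots.
import numpy as np

rng = np.random.default_rng(1)
n_snap, n_sites = 4000, 64
n_up = rng.integers(0, 2, size=(n_snap, n_sites))   # placeholder occupations
n_dn = rng.integers(0, 2, size=(n_snap, n_sites))

m2 = (n_up - n_dn) ** 2                             # local moment m_i^2, per snapshot

mean_m2 = m2.mean(axis=0)                           # <m_i^2>
corr = m2.T @ m2 / n_snap                           # <m_i^2 m_j^2>
C_mm = corr - np.outer(mean_m2, mean_m2)            # connected correlator

C_nl = (C_mm.sum() - np.trace(C_mm)) / n_sites      # non-local part, normalized by N_s
print(f"average local moment <m^2> = {mean_m2.mean():.3f},   C_mm^nl = {C_nl:+.4f}")
```

With the uncorrelated placeholder data, C_mm^nl is consistent with zero; real DQMC data would show the U- and T-dependence described in the text.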
The temperature variation of the local doublon number has been studied before in context of adiabatic cooling of fermions in optical traps, where a “Pomeranchuk" like effect can happen depending on the dimensionality of the system <cit.>. Here, we interpret the derivative of the average doublon number D = (1/N_s)∑_i⟨ n_i↑n_i↓⟩, ∂ D/∂ T as an indication of localization; decreasing doublon occupation with cooling indicates tendency of the system to pin down electrons to form local moments, while increasing doublon occupation with cooling indicates an itinerant nature of the electrons to form doubly occupied sites. Armed with this, we compare the Seebeck coefficient and ∂ D/∂ T in Fig. <ref>. The zero crossings of the S_ kelvin, in panel (a) is in very close agreement with the zero crossings of ∂ D/∂ T in panel (b), where the system goes from having localized electrons to more itinerant electrons. In the strong coupling regime, this is indeed the case irrespective of the interaction strength (Fig. <ref>(c)). Local moment formation thus dominates the doping window in which the sign of the Seebeck coefficient reverses, and is a key component in driving the anomaly. This was also seen recently in experiments with MATBG <cit.> due to emergent local moment from flatbands. Conclusion: We have analyzed the behavior of the anomalous thermopower in the repulsive Hubbard model, a prototype for strongly correlated systems such as cuprates and flatband systems. We have shown that in the incoherent regime, where an entropic representation of the thermopower is valid <cit.>, the anomalous sign change of the Seebeck coefficient is brought on by an opening of the charge gap in the thermodynamic density of states (TDOS). Remarkably, the approach to free particle behavior is highly non-monotonic as a function of temperature; the Seebeck anomaly increases with increasing temperature, before finally turning around at a temperature scale set by the interaction strength, and decreases to the free particle limit at a temperature scale where the charge gap in the TDOS closes. This behavior closely mirrors the doping driven Mott insulator to metallic crossover at the corresponding temperature. We identify the origin of the anomalous phase, where the Seebeck coefficient has a divergence and the “wrong" sign, is primarily dominated by local moment formation. Recently such behavior has also been observed in magic-angle twisted bilayer graphene (MATBLG) <cit.> due to emergent moment formation arising from flatbands, solidifying our interpretation of the role of local moments on anomalous transport behavior in strongly correlated systems. It would be interesting to test our prediction on the non-monotonic dependence of the density on temperature at which the Seebeck coefficient changes sign. Our study provides an understanding of behavior of Seebeck coefficient as seen in experiments on cuprates, and highlights the role of thermodynamic many-body quantities in capturing transport in the incoherent regime. It should be noted that since our results are based on thermodynamic arguments, they are quite general and should hold independently in any system with strong correlations exhibiting metal-insulator crossover and possessing particle-hole symmetry. Acknowledgments: S.R., A.S. and N.T. acknowledge support from NSF Materials Research Science and Engineering Center (MRSEC) Grant No. DMR-2011876 and NSF-DMR 2138905. Computations were performed at the Unity cluster of Arts and Science College, Ohio State University. 
§ PARTON MEAN FIELD THEORY In this section, we provide a parton mean field description for the calculation of the Seebeck coefficient. We use a canonical transformation, known as the Schrieffer-Wolff transformation, and derive a low energy effective Hamiltonian for the Hubbard model in the limit U/t≫ 1 by eliminating high energy processes order by order <cit.>. We start with the repulsive Hubbard model H = -t∑_⟨ ij ⟩,σ(ĉ^†_iσĉ_jσ+H.c) + U∑_in̂_in̂_i -μ∑_i n̂_i ≡ H_T + H_0 , where H_0 corresponds to the local interaction term which keeps the states within the same energy sector, and H_T is the hopping term which can connect the states between different energy sectors. H_T is separated into three pieces, H_T = T_0+T_1+T_-1 . T_n(ij) hops a fermion from site j to site i, where the total number of double occupancy due to this process increases by n, i.e. [H_0,T_n] = nUT_n . The effective low-energy Hamiltonian is formally obtained by eliminating terms coupling between the low-energy subspace and the high-energy subspace up to second order, H_ eff = H_0 + T_0+1/U[T_1,T_-1]+𝒪(t^3/U^2). Next we invoke a parton mean field theory, where the electron operators are decomposed into spinful fermionic operators (spinons) and spinless bosonic operators (doublons and holons) <cit.>: ĉ^†_iσ = f̂^†_iσĥ_i+σf̂_iσ̅d̂^†_i . The physical Hilbert space is restored by imposing the following constraint on every site: d̂^†_id̂_i+ĥ^†_iĥ_i+∑_σf̂^†_iσf̂_iσ = 1 , which is implemented in the Hamiltonian by introducing a Lagrange multiplier λ at the mean-field level (i.e. on average). Including all of these, the effective Hamiltonian (Eq. <ref>) using the parton operators can be written as H_ eff = U∑_i d̂^†_id̂_i -t∑_⟨ ij⟩,σ(d̂^†_i d̂_j f̂_iσf̂^†_jσ+ĥ_iĥ^†_j f̂^†_iσf̂_jσ) + J∑_⟨ ij⟩(2n̂^d_i n̂^h_j - ∑_σn̂_iσn̂_jσ̅ + ∑_σf̂^†_iσf̂_iσ̅f̂^†_jσ̅f̂_jσ) -μ∑_i(2d̂^†_id̂_i+∑_σf̂^†_iσf̂_iσ) -λ∑_i(d̂^†_id̂_i+ĥ^†_iĥ_i+∑_σf̂^†_iσf̂_iσ) where J=4t^2/U. Next, we consider the following mean field order parameters: n_d = 1/N_s∑_i⟨d̂^†_id̂_i⟩,    n_h=1/N_s∑_i⟨ĥ^†_iĥ_i⟩ n_f =1/2N_s∑_i,σ⟨f̂^†_iσf̂_iσ⟩,    χ_d =1/zN_s∑_⟨ ij⟩⟨d̂^†_id̂_j⟩ χ_h =1/zN_s∑_⟨ ij⟩⟨ĥ^†_iĥ_j⟩,    χ_f = 1/2zN_s∑_⟨ ij⟩,σ⟨f̂^†_iσf̂_jσ⟩ and obtain the mean field Hamiltonian H^ MF_ eff=∑_k (E_d(k)d̂^†_kd̂_k + E_h(k)ĥ^†_kĥ_k + ∑_σ E_f(k)f̂^†_kσf̂_kσ) with the bosonic and the fermionic energies given by E_d(k) = U-2μ-λ+2Jzn_h+2tγ(k)χ_f E_h(k) = -λ+2Jzn_d-2tγ(k)χ_f E_f(k) = -μ -λ+tγ(k)(χ_d-χ_h)-2Jγ(k)χ_f-2Jzn_f with γ(k) = 2[cos(k_x)+cos(k_y)] and z=4 is the coordination number in two dimensions. We first solve the mean field equations self-consistently on a square lattice and calculate the Seebeck coefficient using the Kelvin formula, as described in the Main text. The parton construction described above is sufficient to give the strong enhancement of the Seebeck coefficient near half filling and its anomalous sign change at intermediate doping, as shown in Fig. <ref>. The presence of only the density terms, n_d, n_h and n_f are sufficient to capture the formation of the Mott plateau in the equation of state curve, due to the presence of Ud̂^†_id̂_i term in the Hamiltonian <ref>. As seen in Fig. <ref>(a), the anomalous sign, as well as the large divergence of the Seebeck coefficient near half filling all follow the formation of the Mott gap in κ̃=∂ n/∂μ. We also note that the parton theory with only density terms capture the correct temperature dependence of the Seebeck divergence near half filling as found in the QMC simulation [see Fig. <ref>(b)]. 
However, the anomalous zero crossing at finite doping is "insensitive" to temperature, and always sits at n = 0.5, 1.5. This can be understood by noting that in this limit, entropy is strictly configurational in terms of holon, doublon and spinon occupation on the lattice, which peaks at n = 0.5, 1.5. The presence of coherent hopping of doublons, holons and spinons (the χ mean-field order parameters in Eq. <ref>) is enough to generate the temperature variation of n_s, as shown in Fig. <ref>(b). In the parton description, increasing T leads to an increase of n_s, in accordance with QMC; however, the exact values differ due to thermal fluctuations, which QMC captures more accurately than a mean-field description. Regardless, it highlights that charge physics is the sole determining factor in the Seebeck anomaly and that the temperature dependence of n_s arises from the motion of charge excitations. § SEEBECK COEFFICIENT ACROSS THE METAL TO INSULATOR CROSSOVER In Fig. <ref>, we proposed a phase diagram of the Seebeck anomaly through the temperature variation of the thermodynamic density of states (TDOS), κ̃ = ∂ n/∂μ, at half filling. To show that this is indeed the case, we take a constant-temperature cut across the phase diagram along T=1.0. The evolution of the Seebeck coefficient, as one moves to the right in the phase diagram, is shown in Fig. <ref>(a), and the evolution of the TDOS through its temperature derivative is shown in (b). At a critical U_c(T), there is a crossover from metallic to insulating behavior at and near half filling due to ∂κ̃/∂ T>0. Note that at this U_c(T), an anomalous sign change of the Seebeck coefficient also develops near half filling, showing that the anomalous sign change of the Seebeck coefficient near half filling is driven by a metal-to-insulator crossover in the TDOS.
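Since the metal/insulator assignment used throughout rests on finite differences of the equation of state, a small post-processing sketch of the criterion — κ̃ = ∂n/∂μ at half filling and the sign of ∂κ̃/∂T — is given below. The n(μ, T) array is a placeholder for the measured equation of state, and μ = 0 is taken as the half-filling point of the particle-hole-symmetric Hamiltonian used here.

```python
# Classify metal vs. insulator at half filling from the sign of d(kappa)/dT,
# where kappa = dn/dmu is the thermodynamic density of states (TDOS).
import numpy as np

mu = np.linspace(-2.0, 2.0, 81)        # chemical-potential grid
T = np.array([0.5, 0.75, 1.0, 1.5])    # temperature grid
# Placeholder equation of state n(mu, T); replace with the DQMC measurements.
n_grid = 1.0 + np.tanh(mu[:, None] / (0.5 + T[None, :]))

kappa = np.gradient(n_grid, mu, axis=0)      # TDOS: kappa(mu, T) = dn/dmu
dkappa_dT = np.gradient(kappa, T, axis=1)    # temperature derivative of the TDOS

i_half = np.argmin(np.abs(mu))               # mu = 0 corresponds to half filling here
for j, Tj in enumerate(T):
    label = "insulating (I)" if dkappa_dT[i_half, j] > 0 else "metallic (M)"
    print(f"T = {Tj:.2f}:  d(kappa)/dT at half filling = {dkappa_dT[i_half, j]:+.3f}  ->  {label}")
```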
http://arxiv.org/abs/2407.02641v1
20240702201432
Learning Graph Structures and Uncertainty for Accurate and Calibrated Time-series Forecasting
[ "Harshavardhan Kamarthi", "Lingkai Kong", "Alexander Rodriguez", "Chao Zhang", "B Aditya Prakash" ]
cs.LG
[ "cs.LG", "cs.AI" ]
College of Computing, Georgia Institute of Technology USA hkamarthi3@gatech.edu College of Computing, Georgia Institute of Technology USA lkkong@gatech.edu College of Computing, Georgia Institute of Technology USA arodriguezc@gatech.edu College of Computing, Georgia Institute of Technology USA chaozhang@gatech.edu College of Computing, Georgia Institute of Technology USA badityap@cc.gatech.edu § ABSTRACT Multi-variate time series forecasting is an important problem with a wide range of applications. Recent works model the relations between time-series as graphs and have shown that propagating information over the relation graph can improve time series forecasting. However, in many cases, relational information is not available or is noisy and reliable. Moreover, most works ignore the underlying uncertainty of time-series both for structure learning and deriving the forecasts resulting in the structure not capturing the uncertainty resulting in forecast distributions with poor uncertainty estimates. We tackle this challenge and introduce , that leverages stochastic correlations between time-series to learn underlying structure between time-series and to provide well-calibrated and accurate forecasts. Over a wide-range of benchmark datasets provides around 16% more accurate and 14% better-calibrated forecasts. also shows better adaptation to noise in data during inference and captures important and useful relational information in various benchmarks. Learning Graph Structures and Uncertainty for Accurate and Calibrated Time-series Forecasting B. Aditya Prakash Received ...; Accepted ... ============================================================================================= § INTRODUCTION While there has been a lot of work on modeling and forecasting univariate time-series <cit.>, the problem of multivariate time-series forecasting is more challenging. This is because modeling individual signals independently may not be sufficient to capture the underlying relationships between the signals which are essential for strong predictive performance. Therefore, many multivariate models model sparse correlations between signals based on prior knowledge of underlying structure using Convolutional networks <cit.> or Graph Neural networks <cit.>. However, in many real-world applications, the graph structure is not available or is unreliable. In such cases, the problem of learning underlying patterns <cit.> is an active area of research <cit.> in applications such as traffic prediction and energy forecasting. Most methods use a joint learning approach to train the parameters of both graph inference and forecasting modules. However, most previous works focus only on point forecasting and do not leverage uncertainty when modeling the structure. Systematically modeling this uncertainty into the modeling pipeline can help the model adapt to unseen patterns such as when modeling a novel pandemic <cit.>. Therefore, the learned structure from existing models may not be adapted to noise in data or to distributional shifts commonly encountered in real-world datasets. In this paper, we tackle the challenge of leveraging structure learning to provide accurate and calibrated probabilistic forecasts for all signals of a multivariate time-series. 
We introduce a novel probabilistic neural multivariate time-series model, (Stochastic Graph Inference for Calibrated Forecasting), that leverages functional neural process framework <cit.> to model uncertainty in temporal patterns of individual time-series as well as a joint structure learning module that leverages both pair-wise similarities of time-series and their uncertainty to model the graph distribution of the underlying structure. then leverages the distribution of learned structure to provide accurate and calibrated forecast distributions for all the time-series. Our contributions can be summarized as follows: (1) Deep probabilistic multivariate forecasting model using Joint Structure learning: We propose a Neural Process based probabilistic deep learning model that captures complex temporal and structural correlations and uncertainty over multivariate time-series. (2) State-of-art accuracy and calibration in multivariate forecasting: We evaluate against previous state-of-art models in a wide range of benchmarks and observe 16.5% more accurate and 14.7% better calibration performance. We also show that is significantly better adapted to provide consistent performance with the injection of varying degrees of noise into datasets due to modeling uncertainty. (3) Mining useful structural patterns: We provide multiple case studies to show that identifies useful domain-specific patterns based on the graphs learned such as modeling relations between stocks of the same sectors, location proximity in traffic sensors, and epidemic forecasting. § METHODOLOGY §.§.§ Problem Formulation Consider a multi-variate time-series dataset 𝒟 of N time-series 𝒟 ={𝐲_i}_i=1^N over T time-steps. Let 𝐲_i ∈ℝ^T denote time-series i and y_i^(t) be the value at time t. Further, let 𝐲^(t)∈ℝ^N be the vector of all time-series values at time t. Given time-series values from till current time t as 𝐲^(1:t), the goal of probabilistic multivariate forecasting is to train a model M that provides a forecast distribution: p_M( 𝐲^(t+1:t+τ) | 𝐲^(1:t); θ), which should be accurate, i.e, has mean close to ground truth as well as calibrated, i.e., the confidence intervals of the forecasts precisely mimic actual empirical probabilities <cit.>. Formally, the goal of joint-structure learning for probabilistic forecasting is to learn a global graph G from 𝐲^(1:t) and leverage it to provide accurate and well-calibrated forecast distributions: p_M( 𝐲^(t+1:t+τ) | G, 𝐲^(1:t); θ) p_M( G| 𝐲^(1:t); θ). §.§.§ Overview models stochasticity and uncertainty of time-series when generating structural relations across time-series. It also adaptively leverages relations and uncertainty from past data using the functional process framework <cit.>. 's generative process can be summarized as: (1) The input time-series values are encoded using a Probabilistic Time-series Encoder () that models a multivariate Gaussian Distribution to model each time-series capturing both time-series patterns and inherent uncertainty. (2) The similarity between the sampled stochastic encoding of each time-series from PTE is used to sample a graph via the Graph Generation Module (). (3) Recurrent Graph Neural Encoder () contains a series of Recurrent neural layers and Graph Convolutional Networks which derive the encoding of each time-series leveraging the learned graph. (4) We also model the similarity of encodings of input time-series with past data using a reference correlation network (). 
(5) Finally, uses the graph-refined embeddings and historical information from to learn the parameters of the output distribution. §.§.§ Probabilistic Time-series Encoder We first model both the information and uncertainty for each of the N time-series, by using deep sequential models to capture complex temporal patterns of the input time-series sequence 𝐲_i^(t':t). We use a GRU <cit.> followed by Self-attention <cit.> over the hidden embeddings of GRU: {𝐡̅_𝐢}_i=1^N = {Self-Atten(GRU(𝐲_i^t':t))}_i=1^N. We then model the final latent embeddings of univariate time-series as a multivariate Gaussian distribution similar to VAEs <cit.>: μ_𝐡_i, logσ_𝐡_i = NN_h(𝐡̅_i), 𝐡_i ∼𝒩(μ_𝐡_i, σ_𝐡_i). where NN_h is a single layer of feed-forward neural network. The output latent embeddings 𝐇 = {𝐡_i}_i are stochastic embeddings sampled from Gaussian parameterized by {μ(𝐡_i), σ(𝐡_i)}_i. which captures uncertainty of temporal patterns. §.§.§ Probabilistic Graph Generation Module Since it is computationally expensive to model all possible relations between time-series, we aim to mine sparse stochastic relations between time-series along with the uncertainty of the underlying relations. We generate a stochastic relational graph (SRG) G of N nodes that model the similarity across all time-series. We use a stochastic process to generate G leveraging stochastic latent embeddings 𝐇 from . We parametrize the adjacency matrix 𝐀(𝐇), ∈{0,1}^N × N of G by modelling existence of each edge A_i,j as a Bernoulli distribution parameterized by θ_ij derived as: θ_i,j = sig(NN_G_2( NN_G_1(𝐡_i)) + NN_G_1(𝐡_j)) where NN_G1 and NN_G2 are feed-forward networks. We sample the adjacency matrix which captures temporal uncertainty in 𝐡_i, 𝐡_j and relational uncertainty in θ_i,j. (A_i,j = A_j,i) ∼Bernoulli(θ_i,j); ∀ i ≤ j. §.§.§ Recurrent Graph Neural Encoder We combine the relational information from SRG with temporal information via a combination of Graph Neural Networks and recurrent networks: 𝐯_𝐢^(𝐭) = GRU-Cell(𝐮_i^(t-1), y_i^(t)) ; {𝐮_i^(t)}_i = GNN({𝐯_i^(t)}, A) We input h_i as the initial hidden embedding for GRU-Cell at initial time-step t' to impart temporal information from PTE. We finally combine the intermediate embeddings {𝐮_i^(t)}_t=t'^t using self-attention to get Graph-refined Embeddings 𝐔 = {𝐮_i}_i=1^N where: 𝐮_i = Self-Attention({𝐮_i^(t)}_t). §.§.§ Reference Correlation Network This historical similarity is useful since time-series shows similar patterns to past data. Therefore, we model relations with past historical data of all N time-series of datasets. We encode the past information of all time-series into reference embeddings 𝐆 = {𝐠_j}_j=1^K: μ_𝐠_j, σ_𝐠_j = (𝐲_j^(1:t)), 𝐠_j ∼𝒩(μ_𝐠_j, σ_𝐠_j). Then, similar to <cit.> we sample edges of a bipartite Reference Correlation Network S between reference embeddings 𝐆 and Graph-refined Embeddings 𝐔 based on their similarity as: κ(𝐮_i,𝐠_j) = exp(-γ||𝐮_i-𝐠_j||^2), S_i,j∼Bernoulli (κ(𝐮_i,𝐠_j)). where γ is a learnable parameter. To leverage the similar reference embeddings sampled for each time-series i, we aggregate the sampled reference embeddings to form the RCN embeddings 𝐙 = {𝐳_i}_i=1^N as: 𝐳_i ∼𝒩( ∑_j: S_ij = 1 NN_z1(𝐠_j), exp(∑_j: S_ij = 1 NN_z2(𝐠_j)) ) where NN_z1 and NN_z2 are single fully-connected layers. Therefore, the data from reference embeddings that show similar patterns to input time-series are more likely to be sampled. 
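A minimal PyTorch sketch of the stochastic graph-generation step is given below: pairwise edge probabilities θ_ij are computed from the sampled embeddings and a symmetric adjacency matrix is drawn from the corresponding Bernoulli distributions. The layer widths, the reading of the expression for θ_ij as sig(NN_G2(NN_G1(h_i) + NN_G1(h_j))), and the use of plain, non-differentiable Bernoulli sampling are illustrative assumptions rather than the authors' implementation, which would additionally need a relaxed or score-function estimator for end-to-end training.

```python
import torch
import torch.nn as nn

class GraphGenerator(nn.Module):
    """Sample a symmetric adjacency matrix from pairwise Bernoulli edge probabilities."""

    def __init__(self, d_emb=60, d_hidden=60):
        super().__init__()
        self.nn_g1 = nn.Sequential(nn.Linear(d_emb, d_hidden), nn.ReLU())  # NN_G1
        self.nn_g2 = nn.Linear(d_hidden, 1)                                # NN_G2

    def forward(self, h):                        # h: (N, d_emb) sampled stochastic embeddings
        g = self.nn_g1(h)                        # NN_G1(h_i) for every series
        pair = g.unsqueeze(0) + g.unsqueeze(1)   # (N, N, d_hidden): NN_G1(h_i) + NN_G1(h_j)
        theta = torch.sigmoid(self.nn_g2(pair)).squeeze(-1)      # (N, N) edge probabilities
        upper = torch.triu(torch.bernoulli(theta), diagonal=1)   # sample each pair once
        return upper + upper.T, theta            # symmetric adjacency with zero diagonal

# Example: 50 time-series with 60-dimensional embeddings sampled from the encoder.
with torch.no_grad():
    h = torch.randn(50, 60)
    A, theta = GraphGenerator()(h)
print(A.shape, int(A.sum().item()) // 2, "undirected edges sampled")
```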
§.§.§ Adaptive Distribution Decoder The decoder parameterizes the forecast distribution using multiple perspectives of information and uncertainty from previous modules: Graph-refined embeddings 𝐔, RCN embeddings 𝐙 and a global embedding of all reference embeddings g derived as: 𝐠 = Self-Attention({𝐠_j}_j). However, information from each of the modules may have varied importance based. Therefore, we use a weighted aggregation of these embeddings: 𝐤_i = l_1𝐮_i + l_2𝐳_i + l_3 𝐠 to get the input embedding 𝐤_i for the decoder where {l_i}_i=1^3 are learnable parameters. The final output forecast distribution is derived as: y_i^(t+1)∼𝒩(NN_y1(𝐤_i), exp(NN_y2(𝐤_i))) §.§.§ Training and Inference The full generative pipeline of is: P(𝐲^(𝐭+1)| 𝐲^t':t, 𝒟) = ∫P(𝐇|𝐲^t':t)_Time-series Encoder () P(A|𝐇)_Graph Generation()P(𝐔|𝐇,A)_Recurrent Graph Neural Encoder() P(S, {𝐠_j}_j|𝐔, 𝒟) P(𝐙|S,{𝐠_j}_j) _Reference Correlation Network()P(𝐲^(𝐭+1) | 𝐙, 𝐔, 𝐠)_Decoder d𝐇 dA d𝐔 d𝐠 dS d𝐙. We train the parameters of to increase the log-likelihood loss log P(𝐲^(𝐭+1)| 𝐲^t':t, 𝒟). Since integration over high-dimensional latent random variables is intractable, we use amortized variational inference like <cit.> and construct the variation distribution q(𝐇,𝐔,𝐙,S,A | 𝐲^t':t, 𝒟) = P(𝐇|𝐲^t':t) P(A|𝐇) P(S|𝐔, 𝒟) q_1(𝐔, 𝐙 | 𝐲^t':t) where q_1 is a fully connected network over {𝐡̅_i} that parameterizes the variational distributions for 𝐔 and 𝐙. The loss is optimized using stochastic gradient descent based Adam optimizer <cit.>. During inference, we generate Monte Carlo samples from the full distribution P(𝐲^(𝐭+1)| 𝐲^t':t, 𝒟) with discrete sampling. § EXPERIMENT SETUP §.§.§ Baselines We compare with general state-of-art forecasting models that include (1) statistical models like ARIMA <cit.>, (2) general forecasting models: <cit.>, <cit.> (3) Graph-learning based forecasting models: <cit.>, <cit.>, <cit.>, <cit.>. Note that we are performing probabilistic forecasting while most of the baselines are modeled for point forecasts. Therefore, for methods like ARIMA, , and , we leverage an ensemble of ten independently trained models to derive the forecast distribution <cit.>. §.§.§ Datasets We evaluate our models against eight multivariate time-series datasets from a wide range of applications that have been used in past works. The main statistics of the datasets are summarized in Table <ref>. (1) : We forecast a symptomatic measure of flu incidence rate based on wILI (weighted Influenza-like illness outpatients) that are provided by CDC for each of the 50 US states. We train on seasons from 2010 to 2015 and evaluate on seasons from 2015 to 2020. (2) : We forecast the weekly incidence of Covid-19 mortality from June 2020 to March 2021 for each of the 50 US states <cit.> using incidence data starting from April 2020. (3) Similar to <cit.>, we use the daily closing prices for stocks of companies in S& P 100 using the package <cit.> from July 2014 to June 2019. The last 400 trading days are used for testing. (4) : We use a popular multivariate time-series dataset for power consumption forecasting used in past works <cit.>. We forecast power consumption for 15-60 minutes. We train for 1 year and test on data from the next year. (5) Traffic prediction: We use 2 datasets related to traffic speed prediction. and <cit.> are datasets of traffic speed at various spots in Los Angeles and San Francisco. We use the last 10% of the time-series for testing. 
(6) Transit demand: and <cit.> measure bike sharing and taxi demand respectively in New York from April to June 2016. §.§.§ Evaluation Metrics We evaluate our model and baselines using carefully chosen metrics that are widely used in forecasting literature.[Code: <https://anonymous.4open.science/r/Stoic_KDD24-D5A8>. Supplementary: <https://anonymous.4open.science/r/Stoic_KDD24-D5A8/Struct_Supp.pdf>] They evaluate for both accuracy of the mean of forecast distributions as well as calibration of the distributions <cit.>. We use RMSE for point-prediction and CRPS as well as Confidence score <cit.> for measuring forecast calibrations. §.§.§ Forecast Accuracy and Calibration We evaluate the average performance of and all the baselines across 20 independent runs in Table <ref>. provides 8.8% more accurate forecasts (RMSE scores) across all benchmarks with an impressive 16.5% more accurate forecasts in Epidemic forecasting (and ) and is 9% better in traffic benchmarks. For calibration, we observe significantly better CS scores across all benchmarks. also provides 16% higher CRPS scores over the best-performing baseline in each task. has 34.5% and 11.5% better performance in epidemic forecasting and traffic forecasting respectively. In particular, we also observe that baselines like ARIMA, , and which do not learn a graph have 7-15% poorer performance in traffic benchmarks and over 180% poorer performance in benchmark compared to other baselines that learn a graph. §.§.§ Robustness to Noise We evaluate the efficacy of modeling the underlying structure of time-series and learning uncertainty of each time-series in helping models adapt to noise in the datasets. Learning a single global structure to model relations across time-series as well as modeling uncertainty in data can help the model to adapt to noise in datasets during inference. We therefore expect to be further resilient to noise in data during testing. We use the models trained on clean training time-series datasets and inject noise to time-series input during testing. Each input time-series is first independently normalized with 0 mean and unit standard deviation and then add a gaussian noise with standard deviation ρ. We plot the decrease in performance (measured using CRPS) with an increase in noise, measured by ρ in Figure <ref>. First, we observe that as we increase ρ the performance decreases for all models. We observe that the baselines that do not learn a graph structure on average show 45-60% larger decrease in performance compared to the rest of the baselines showing the efficacy of structure learning for robust forecasting. However, 's decrease in performance is significantly less compared to most of the baselines. At ρ=0.2 the average performance decrease in all baselines is at 19% - 27% whereas 's performance decrease is at 9-20%. Therefore, 's ability to model uncertainty of time-series as well as effectively capture structural patterns enables it to provide forecasts that are robust to noise. §.§.§ Ablation Studies We evaluate the efficacy of various modeling choices of . Specifically, we access the influence of graph generation, , and weighted aggregation (Equation <ref>) via ablation studies. on average outperforms all the ablation variants with 8.5% better accuracy and 3.2% better CS. Graph generation is the most impactful for performance followed by . For additional details see Appendix <ref>. 
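For reference, CRPS can be estimated directly from Monte Carlo forecast samples with the standard energy-form estimator sketched below; this is a generic estimator and is not claimed to be the exact scoring code used for the reported numbers.

```python
# Sample-based CRPS estimate for probabilistic forecasts:
# CRPS(F, y) ~ E|X - y| - 0.5 * E|X - X'|, with X, X' drawn independently from F.
import numpy as np

def crps_from_samples(samples, y):
    """samples: (n_samples,) Monte Carlo draws of the forecast; y: scalar observation."""
    samples = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(samples - y))
    term2 = 0.5 * np.mean(np.abs(samples[:, None] - samples[None, :]))
    return term1 - term2

rng = np.random.default_rng(0)
forecast = rng.normal(loc=10.0, scale=2.0, size=1000)   # e.g. draws from P(y^(t+1) | ...)
print(f"CRPS = {crps_from_samples(forecast, y=11.5):.3f}")
```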
§.§.§ Relations Captured by Inferred Graphs We consider various meaningful domain-specific relations for the time-series of these datasets and study how well 's inferred graphs capture them. We observed that inferred strong relations between time-series of stocks of the same sectors in . In the case of and , the graph inferred by between traffic sensors is most highly correlated to actual proximity of sensors with each other compared to other baselines. Finally, the graphs inferred in and are correlated with geographical adjacency of regions and road density of connecting regions. We provide more details in Appendix Section  <ref> § CONCLUSION We introduced , a probabilistic multivariate forecasting model that performs joint structure learning and forecasting leveraging uncertainty in time-series data to provide accurate and well-calibrated forecasts. We observed that performs 8.5% better in accuracy and 14.7% better in calibration over multivariate time-series benchmarks from various domains. Due to structure learning and uncertainty modeling, we also observed that better adapts to the injection of varying degrees of noise to data during inference with 's performance drop being almost half of the other state-of-art baselines. Finally, we observed that identifies useful patterns across time-series based on inferred graphs such as the correlation of stock from similar sectors, location proximity of traffic sensors and geographical adjacency and mobility patterns across US states for epidemic forecasting. While our work focuses on time-series with real values modeled as Gaussian distribution, our method can be extended to other distributions modeling different kinds of continuous signals. Further, only models a single global structure across time-series similar to most previous works. Therefore, extending our work to learn dynamic graphs that can adapt to changing underlying time-series relations or model multiple temporal scales could be an important direction for future research. ACM-Reference-Format Supplementary for the paper "Learning Latent Graph Structures for Accurate and Calibrated Time-series Forecasting" We run our models on an Nvidia Tesla V100 GPU and found that it takes less than 4 GB of memory for all benchmarks. The code for our model is available at an anonymized repository <https://anonymous.4open.science/r/Stoic_KDD24-D5A8> and will be released publicly when accepted. § RELATED WORK §.§.§ Multivariate forecasting using domain-based structural data Deep neural networks have achieved great success in probabilistic time series forecasting. <cit.> trains an auto-regressive recurrent network to predict the parameters of the forecast distributions. Other works including deep Markov models <cit.> and deep state space models <cit.> explicitly model the transition and emission components with neural networks. Recently, <cit.> leverages functional neural processes and achieves state-of-art performance in epidemic forecasting. However, all these methods treat each time series individually and suffer from limited forecasting performance. Leveraging the relation structure among time-series to improve forecasting performance is an emerging area. GCRN <cit.>, DCRNN <cit.>, STGCN <cit.> and T-GCN <cit.> adopt graph neural networks to capture the relationships among time series and provide better representations for each individual sequence. 
However, these methods all assume that the ground-truth graph structure is available in advance, which is often unknown in many real world applications. §.§.§ Structure learning for time-series forecasting When the underlying structure is unknown, we need to jointly perform graph structure learning and time-series forecasting. <cit.> and <cit.> parameterize the graph as a degree-k graph to promote sparsity but their training can be difficult since the top-K operation is not differentiable. <cit.> uses the Gumbel-softmax trick <cit.> for differentiable structure learning and uses prior knowledge to regularize the graph structure. The graph learned by GTS is a global graph shared by all the time series. Therefore, it is not flexible since it cannot adjust the graph structure for changing inputs at inference time. <cit.> employs a variational auto-encoder architecture and can produce different structures for different encoding inputs. It is more flexible than but needs more memory to store the individual graphs. However, as previously discussed, these works do not model the uncertainty of time-series during structure learning and forecasting and do not focus on the calibration of their forecast distribution. § TRAINING DETAILS The architecture of and is similar to <cit.> with GRU being bi-directional and having 60 hidden units. NN_h, NN_G1, NN_G2, NN_z1 and NN_z2 and GRU of also have 60 hidden units. Therefore, Graph-refined embeddings 𝐔, embeddings 𝐙 and global embedding 𝐠 are 60 dimensional vectors. We used Adam optimizer <cit.> with a learning rate of 0.001. We found that using a batch size of 64 or 128 provided good performance with stable training. We used early stopping with patience of 200 epochs to prevent overfitting. For each of the 20 independent runs, we initialized the random seeds of all packages to 1-20. In general, variance in performance across different random seeds was not significant for all models. § ABLATION STUDIES We evaluate the efficacy of various modeling choices of . Specifically, we access the influence of graph generation, , and weighted aggregation (Equation <ref>) via the following ablation variants of : * : We remove the and GCN modules of and therefore do not use Graph-refined embeddings 𝐔 in the decoder. * : We remove the module and do not use embeddings 𝐙 in the decoder. * : We replace weighted aggregation with concatenation in Equation <ref> as 𝐤_i = 𝐮_i ⊕𝐳_i ⊕𝐠 where ⊕ is the concatenation operator. §.§.§ Forecasting performance As shown in Table <ref>, on average outperforms all the ablation variants with 8.5% better accuracy and 3.2% better CS. On average, the worst performing variant is followed by and , showing the importance of graph generation and leveraging learned relations across time-series. §.§.§ Robustness to noise Similar to Section <ref>, we also test the ablation variants' robustness in the NFI task with performance decrease compared in Figure <ref>. Performance continuously decreases with an increase in ρ. is most susceptible to a decrease in performance with a 22-41% decrease in performance at ρ=0.2 which again shows the importance of structure to robustness. and 's performance degradation range from 15-32%. Finally, we observe that is again significantly more resilient to noise compared to all other variants. §.§ Relations Captured by Inferred Graphs As mentioned before, for all benchmarks there is no `ground-truth' structure to compare against. 
However, in line with previous works <cit.>, we consider various meaningful domain-specific relations for the time-series of these datasets and study how well 's inferred graphs capture them through following case studies. Note that, for and baselines that use a sampling strategy to construct the graph (, , , ), we calculate edge probability of each pair of nodes based on sampled graphs. For other graph generating baselines, we directly use the graphs inferred. §.§.§ Case Study 1: Sector-level correlations of stocks in Two stocks representing companies from the same sectors typically influence each other more than stocks from different sectors. Therefore, we measure the correlation of edge probabilities for stocks in the same sectors. We first construct a Sector-partion graph as follows. We partition the stock time-series into sectors and construct a set of fully connected components where each component contains nodes of a sector. There are no edges across different sectors. We then measure the correlation of the edge probability matrix with the adjacency matrix of Sector-partion graph. We observed a strong correlation score of 0.73 for graphs generated by . This was followed by graphs from and with correlation scores of 0.67 and 0.55. Other baselines' graphs provided poor correlation scores below 0.35. Interestingly, this trend also correlated with the performance in forecasting with , and being the top three best-performing models. We also observed that the correlation score of was similar to whereas also provided low correlation scores (0.39) though their forecasting performances are comparable. §.§.§ Case Study 2: Identifying location proximity in traffic forecasting benchmarks Since sensors that are located close to together may have a larger probability of showing similar or correlated signals, we study if the generated graphs capture information about the proximity of the location of the traffic sensors. We use the road location information of time-series in and datasets and construct the proximity graph based on pairwise road distances similar to <cit.>. Note that we did not feed any location-based information to the models during training. We again measure the similarity of generated graphs with the proximity graph. We observe that and provide the strongest correlation scores for both and datasets with average scores of 0.7 and 0.61 for and 0.27 and 0.23 for . Due to the lower correlation scores for , the proximity of sensors may not be useful in modeling relations across time-series. Comparing with ablation variants, we observed that both and showed similar correlation scores to . §.§.§ Case Study 3: Inferring geographical adjacency and mobility for Epidemic Forecasting We observe the most confident edges generated by for and tasks and find that most of the edges map to adjacent states or states with strong geographical proximity similar to past works <cit.>. Further, we observed that the specific states on which the graph relations are most confident are also connected by a higher density of roads with frequent commutes <cit.>. This shows that can go beyond simple patterns and infer complex mobility patterns across states leveraging past epidemic incidence data. Hence, exploits the useful relations pertaining to both geographical adjacency and mobility across these states to provide state-of-art forecasting performance.
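The sector-level comparison in Case Study 1 amounts to correlating an edge-probability matrix, averaged over sampled graphs, with the adjacency matrix of the Sector-partition graph. A small sketch of that computation follows; the sector labels, the sampled graphs, and the use of Pearson correlation over off-diagonal entries are illustrative assumptions rather than the exact evaluation code.

```python
import numpy as np

def sector_partition_adjacency(sectors):
    """Fully connected within each sector, no edges across sectors, zero diagonal."""
    sectors = np.asarray(sectors)
    A = (sectors[:, None] == sectors[None, :]).astype(float)
    np.fill_diagonal(A, 0.0)
    return A

def offdiagonal_correlation(P, A):
    """Pearson correlation between two N x N matrices over their off-diagonal entries."""
    mask = ~np.eye(P.shape[0], dtype=bool)
    return np.corrcoef(P[mask], A[mask])[0, 1]

rng = np.random.default_rng(0)
sectors = rng.integers(0, 10, size=100)                      # placeholder sector labels
sampled_graphs = rng.integers(0, 2, size=(200, 100, 100))    # placeholder sampled adjacencies
edge_prob = sampled_graphs.mean(axis=0)                      # empirical edge-probability matrix

r = offdiagonal_correlation(edge_prob, sector_partition_adjacency(sectors))
print(f"correlation with the sector-partition graph: {r:+.3f}")
```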
http://arxiv.org/abs/2407.01782v1
20240701202109
Addressing a fundamental limitation in deep vision models: lack of spatial attention
[ "Ali Borji" ]
cs.CV
[ "cs.CV", "cs.AI" ]
Draft version, July 8, 2024 This material is based upon work supported in part by the DMS-2110868 (JLG), by the Air Force Office of Scientific Research, USAF, under grant/contract number FA9550-23-1-0007 (JLG), the Army Research Office, under grant number W911NF-19-1-0431 (JLG), and the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contracts B640889, B641173 (JLG). Melvin Creff[2] Jean-Luc Guermond[2] July 8, 2024 ======================================================================================================================================================================================================================================================================================================================================================================================================================================= § ABSTRACT The primary aim of this manuscript is to underscore a significant limitation in current deep learning models, particularly vision models. Unlike human vision, which efficiently selects only the essential visual areas for further processing, leading to high speed and low energy consumption, deep vision models process the entire image. In this work, we examine this issue from a broader perspective and propose a solution that could pave the way for the next generation of more efficient vision models. Basically, convolution and pooling operations are selectively applied to altered regions, with a change map sent to subsequent layers. This map indicates which computations need to be repeated. The code is available at <https://github.com/aliborji/spatial_attention>. § MOTIVATION The visual world around us is dynamic, and we rarely see the exact same image twice due to variations in lighting and other factors. Similarly, neural activity is not identical even when exposed to the same input. However, not everything in the visual world changes, and often only a small portion of the input varies over short periods (Figure <ref>). Our visual system has evolved to efficiently address this by selectively focusing on and processing important regions of interest. In contrast, deep vision models lack this capability. While there have been some ad-hoc approaches to address this issue, they are not inherent to the models. The main problem lies in operations such as convolution (nn.Conv2d), which are applied to the entire image without the ability to selectively skip parts of it at the hardware level. We argue that this is a major limitation and propose potential solutions for researchers to explore in the future to address this problem. Convolutional neural networks and vision transformers lack this selective processing capability. Although various attention mechanisms have been proposed, they do not perform spatial attention. In transformers, attention operates more like feature-based attention, as described in the attention literature, rather than spatial attention. In the proposed approach, computation is performed on demand. One advantage of this method is that it can be applied solely during inference. The model can be trained using a GPU and then optimized using this approach to enhance inference efficiency. § RELATED WORK Visual attention is the cognitive process of selectively focusing on one aspect of the environment while ignoring others <cit.>. This is essential because the human brain cannot process all visual information simultaneously. 
There are two main types of attention: 1) Goal-Driven Top-Down Attention: This intentional type is controlled by an individual's goals and expectations. For example, searching for a friend in a crowded place, 2) Bottom-up Attention: This automatic type is triggered by sudden or prominent stimuli, such as a loud noise or a bright flash. The primary purpose of attention is to conserve computational resources by enabling an agent to focus on the most important, task-relevant items and relay them to higher visual areas that require more computational effort. In our visual system, several mechanisms support this function. These range from the hardware aspect of moving the fovea around to specialized mechanisms and circuitry that generate a saliency map to prioritize scene elements for further processing. The closest concept to this work is event cameras <cit.>, which aim to process only the content that has changed at the hardware level by using specially designed cameras. In video surveillance, some systems efficiently detect and focus on frames with notable movement or changes, enhancing surveillance effectiveness. However, while these approaches skip frames, they still process the entire image when they do choose to process a frame. In contrast, the proposed approach processes only specific regions of the image, providing finer granularity compared to previous methods, although it requires some memory to save previous outputs. Saliency methods, which highlight important regions behind model decisions, are primarily used for explainability purposes (e.g., <cit.>). These methods are different from saliency models that attempt to select a subset of image or video or predict eye movements <cit.>. So far, models of saliency (the latter type) and visual recognition have not been integrated to create a model that natively supports both. Additionally, there are approaches that attempt to compress models or prune weights to make them faster <cit.>. However, these methods are not directly related to this work, as our focus is on pruning irrelevant or less important content rather than weights. Other potentially related areas include spiking neural networks <cit.> and predictive coding <cit.>. § A POTENTIAL SOLUTION In the proposed approach, convolution and pooling operations are selectively applied to altered regions, with a change map sent to subsequent layers. This map informs those layers about which computations need to be repeated. Each layer communicates changes so the next layer knows what it needs to recompute, and this process continues until the final layer. To achieve this, each layer must remember its last computation to avoid redundant processing. The basic idea is illustrated in Figure <ref>. First, a change map is computed from subsequent frames (e.g., |I_t - I_t-1|). This change map is sent to the first convolution layer, which updates its previous output[For the very first image, the previous output of the first convolution layer is set to zeros. Please see the code.] values only for the changed regions[Note that there is no need to compare the current output with the previous output to calculate the change map, although this is an option as well.]. Knowing which regions are updated, it can compute a change map for itself and send it to the next layer, and so on. Each layer has its own memory, which means extra memory (in addition to model weights) is needed for housekeeping. 
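A minimal single-channel NumPy sketch of this change-gated convolution is given below: the layer caches its previous output, recomputes only the output positions whose receptive field accumulates enough change (L1 norm above a threshold τ), and emits a binary change map for the next layer. This is a simplified illustration of the mechanism, not the released implementation.

```python
import numpy as np

class ChangeGatedConv:
    """Single-channel, stride-1 'valid' convolution that recomputes only changed regions."""

    def __init__(self, kernel, tau=1e-3):
        self.kernel = kernel        # (k, k) filter weights
        self.tau = tau              # L1 change threshold per receptive field
        self.prev_out = None        # cached output from the previous frame

    def forward(self, x, change_in):
        k = self.kernel.shape[0]
        H, W = x.shape[0] - k + 1, x.shape[1] - k + 1
        out = np.zeros((H, W)) if self.prev_out is None else self.prev_out.copy()
        change_out = np.zeros((H, W), dtype=bool)
        for i in range(H):
            for j in range(W):
                # Recompute only where the receptive field saw enough change.
                if self.prev_out is None or np.abs(change_in[i:i + k, j:j + k]).sum() > self.tau:
                    out[i, j] = np.sum(x[i:i + k, j:j + k] * self.kernel)
                    change_out[i, j] = True
        self.prev_out = out
        return out, change_out      # change_out is passed on to the next layer

rng = np.random.default_rng(0)
layer = ChangeGatedConv(kernel=rng.standard_normal((3, 3)), tau=0.05)
frame_prev = rng.random((64, 64))
frame_next = frame_prev.copy()
frame_next[20:28, 20:28] += 0.5                       # only a small patch changes

out1, _ = layer.forward(frame_prev, np.ones_like(frame_prev))             # initial change map: all ones
out2, changed = layer.forward(frame_next, np.abs(frame_next - frame_prev))
print(f"frame 2: recomputed {int(changed.sum())} of {changed.size} output positions")
```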
The initial change map is set to all ones[A similar initialization can be done by adding a blank frame at the beginning of the sequence.]. At the frame level, frames are subtracted from each other (numpy.abs(I_t - I_t-1)). Convolution is implemented by looping over spatial locations. If there is enough change (determined by L1 or L2 norm greater than threshold τ) inside a receptive field (RF), that RF is processed; otherwise, it is discarded[In practice, a layer uses its previously computed output at time t-1 and only updates some elements within it.]. This results in significant computational savings, as the filters are not applied to unchanged locations. The conv layer keeps track of changes in its output map and generates a binary map where a 1 indicates a change. This map is sent to the subsequent pooling and convolution layers, and each layer saves its output for future use. Notice that our implementation here is even slower than using nn.conv2d on CPU. The main point here is that sequential implementation on CPUs can be modified to save energy and to increase speed. Therefore, further investigation is needed to determine how this concept can be adopted for parallel processing on multi-core CUPs and GPUs. § EXPERIMENTS AND RESULTS The 28 x 28 MNIST digits, both during training and inference, were placed at the center of a black 64 x 64 image. We trained a simple CNN, referred to as CNN1 as illustrated in Figure <ref>, on a GPU with a batch size of 32. Since our primary focus is on the inference stage, we then loaded the weights into a model residing on a CPU[We used a 3100 MHz Intel(R) Xeon(R) CPU with 8 cores and 64 GB RAM]. A single frame was processed at a time (i.e., batch size = 1). We conducted two experiments as detailed below. The results are presented in Table <ref>. §.§ Experiment I: Processing repeated versions of the same image In this experiment, we ran three models on 11 images. Each image is repeated 10 times (110 images in total). The CNN1 model proved to be the fastest because it uses `nn.Conv2d`, which is a parallel implementation on CPU cores. The aim here is to demonstrate that significant computation can be saved when there is no change in the image. Most of the processing is done on the first frame, which is then reused for subsequent frames. This is why the processing time for CNN3 is nearly 1/10th of CNN2. The inputs and results are illustrated in the first rows of Figure <ref> and Table <ref>, respectively. §.§ Experiment II: Processing shifted versions of the same image This experiment is similar to Experiment I, except each of the 11 digits is shifted rightward by one pixel at a time, resulting in 110 images in total. This method causes some regions of the image to remain the same while others change. As a result, CNN3 is slower here compared to its speed in Experiment I because it needs to recompute more information due to the increased amount of changed content. In terms of accuracy, CNN1 and CNN2 are equivalent since their implementations are essentially the same. CNN3, however, can exhibit different performance based on the amount of change (τ). Smaller values of τ lead to more computation and higher accuracy, and vice versa. Overall, the CNNs performed similarly to each other, although they were less accurate compared to Experiment I, due to pixel shifts. The inputs and results are illustrated in the second rows of Figure <ref> and Table <ref>, respectively. The change maps for the input images and the network layers are displayed in Figure <ref>. 
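The two experiments can be mocked up with a few lines around a layer like the sketch above. The snippet below only illustrates the timing protocol (repeated frames versus one-pixel shifts on a 64 x 64 canvas); the random patch stands in for an MNIST digit, and the layer width and kernel size are arbitrary.

import time
import numpy as np

rng = np.random.default_rng(0)
digit = rng.random((28, 28))                      # stand-in for a 28x28 MNIST digit

def place(shift):                                 # put the patch on a 64x64 black canvas
    canvas = np.zeros((64, 64))
    canvas[18:46, 18 + shift:46 + shift] = digit
    return canvas[None]                           # add a channel axis -> (1, 64, 64)

layer = ChangeGatedConv2d(weight=rng.standard_normal((8, 1, 3, 3)),
                          bias=np.zeros(8), tau=1e-3)

def run(frames):
    layer.prev_out, prev = None, None             # reset the layer's memory
    t0 = time.perf_counter()
    for f in frames:
        change = np.abs(f - prev).sum(0) if prev is not None else np.full(f.shape[1:], np.inf)
        layer.forward(f, change)
        prev = f
    return time.perf_counter() - t0

repeated = [place(0) for _ in range(10)]          # Experiment I: identical frames
shifted = [place(s) for s in range(10)]           # Experiment II: 1-pixel rightward shifts
print("repeated:", run(repeated), "s   shifted:", run(shifted), "s")

In the repeated case almost all work happens on the first frame, while the shifted case forces partial recomputation, mirroring the gap between CNN3's runtimes in the two experiments.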
As we increased the change threshold τ, the accuracy decreased while the speed increased. This trade-off is illustrated in Figure <ref>. § CONCLUSION We highlighted a key limitation of existing deep learning approaches and proposed a simple solution that can be integrated into current models or used to design new models with inherent attention and memory mechanisms. This work serves as a proof of concept and can be extended to other problems such as object detection, scene segmentation, and action recognition. It also has the potential to help address adversarial examples <cit.>. The method is particularly effective when the input resolution is high. Our visual system is far more sophisticated and energy-efficient than the most advanced deep learning models available today. Specifically, our early visual system performs extensive preprocessing such as saliency computation, foveation, gaze control, shape and texture processing <cit.>, and background subtraction. These mechanisms not only enhance processing speed and energy efficiency but also provide robustness against input distortions and improve generalizability across tasks. We should draw inspiration from our own visual system to develop better deep learning models. § APPENDIX The convolution and pooling modules of the CNNs are shown in Figures <ref>, <ref>, and <ref>.
http://arxiv.org/abs/2407.02978v1
20240703102223
Mast Kalandar at SemEval-2024 Task 8: On the Trail of Textual Origins: RoBERTa-BiLSTM Approach to Detect AI-Generated Text
[ "Jainit Sushil Bafna", "Hardik Mittal", "Suyash Sethia", "Manish Shrivastava", "Radhika Mamidi" ]
cs.CL
[ "cs.CL", "cs.AI" ]
Project Beyond: An Escape Room Game in Virtual Reality to Teach Building Energy Simulations This work was supported by a grant from the Austrian Research Promotion Agency (FFG) program Stadt der Zukunft, project number FO999887002. Georg Arbesser-Rastburg1, Saeed Safikhani2, Matej Gustin3, Christina Hopfe4, Gerald Schweiger5 Graz University of Technology, Graz, Austria Email: 1georg.arbesser-rastburg@tugraz.at, 2s.safikhani@tugraz.at, 3m.gustin@tugraz.at, 4c.j.hopfe@tugraz.at, 5gerald.schweiger@tugraz.at Johanna Pirker Ludwig-Maximilians-Universität München Munich, Germany Email: jpirker@iicm.edu July 8, 2024 ============================================================================================================================================================================================================================================================================================================================================================================================================= § ABSTRACT Large Language Models (LLMs) have showcased impressive abilities in generating fluent responses to diverse user queries. However, concerns regarding the potential misuse of such texts in journalism, educational, and academic contexts have surfaced. SemEval 2024 introduces the task of Multigenerator, Multidomain, and Multilingual Black-Box Machine-Generated Text Detection, aiming to develop automated systems for identifying machine-generated text and detecting potential misuse. In this paper, we i) propose a RoBERTa-BiLSTM based classifier designed to classify text into two categories: AI-generated or human ii) conduct a comparative study of our model with baseline approaches to evaluate its effectiveness. This paper contributes to the advancement of automatic text detection systems in addressing the challenges posed by machine-generated text misuse. Our architecture ranked 46th on the official leaderboard with an accuracy of 80.83 among 125. § INTRODUCTION The task of classifying text as either AI-generated or human-generated holds significant importance in the field of natural language processing (NLP). It addresses the growing need to distinguish between content created by artificial intelligence models and that generated by human authors, a distinction crucial for various applications such as content moderation, misinformation detection, and safeguarding against AI-generated malicious content. This task is outlined in the task overview paper by <cit.>, emphasizing its relevance and scope in the NLP community. Our system employs a hybrid approach combining deep learning techniques with feature engineering to tackle the classification task effectively. Specifically, we leverage a BiLSTM (Bidirectional Long Short-Term Memory) <cit.> neural network in conjunction with RoBERTa <cit.>, a pre-trained language representation model, to capture both sequential and contextual information from the input sentences. This hybrid architecture enables our system to effectively capture nuanced linguistic patterns and semantic cues for accurate classification. Participating in this task provided valuable insights into the capabilities and limitations of our system. Quantitatively, our system achieved competitive results, ranking 46 relative to other teams in terms of accuracy and F1 score. Qualitatively, we observed that our system struggled with distinguishing between sentences generated by AI models trained on specific domains or datasets with highly similar linguistic patterns. 
We have released the code for our system on GitHub[< https://github.com/Mast-Kalandar/SemEval2024-task8>], facilitating transparency and reproducibility in our approach. § RELATED WORKS In the field of detecting machine-generated text, numerous methodologies and models have been examined. A distinguished methodology is the application of the RoBERTa Classifier, which enhances the RoBERTa language model through fine-tuning for the specific purpose of identifying machine-generated text. The proficiency of pre-trained classifiers like RoBERTa in this domain has been affirmed through various studies, including those conducted by <cit.> and additional research by <cit.>. Concurrently, the XLM-R Classifier exploits the multilingual training of the XLM-RoBERTa model to effectively recognize machine-generated text in various languages, as demonstrated by <cit.>. Alternatively, the exploration of logistic regression models that incorporate GLTR (Giant Language model Test Room) features has been undertaken. These models strive to discern subtleties in text generation methodologies by analyzing token probabilities and distribution entropy, as investigated by <cit.>. Furthermore, detection efforts have utilized stylometric and NELA (News Landscape) features, which account for a broad spectrum of linguistic and structural characteristics, including syntactic, stylistic, affective, and moral dimensions, as reported by <cit.> and <cit.>. Additionally, proprietary frameworks like GPTZero, devised by Princeton University, focus on indicators such as perplexity and burstiness to analyze texts for machine-generated content identification. Although the specific technical details are sparingly disclosed, the reported effectiveness of GPTZero in identifying outputs from various AI language models highlights its significance in the ongoing development of machine-generated text detection strategies <cit.>. § BACKGROUND §.§ Dataset For the machine-generated text, the researchers used various multilingual language models like ChatGPT<cit.>, textdavinci-003<cit.>, LLaMa<cit.>, FlanT5<cit.>, Cohere<cit.>, Dolly-v2<cit.>, and BLOOMz<cit.>. These models were given different tasks like writing Wikipedia articles, summarizing abstracts from arXiv, providing peer reviews, answering questions from Reddit and Baike/Web QA, and creating news briefs. As evident from Table <ref>, the training set lacks any sentences generated by the Bloomz model, which stands as the sole model represented in the validation set. This deliberate choice ensures a robust assessment of our model's generalization capabilities across all machine-generated outputs, regardless of the specific model generating them. By exposing our model to diverse machine-generated sentences during training, including those from unseen models like Bloomz in the validation set, we aim to evaluate its ability to effectively generalize to novel inputs and make reliable predictions across the spectrum of machine-generated text. §.§ Task We focused on Subtask-A of the SemEval Task 8 which involves developing a classifier to differentiate between monolingual sentences generated by artificial intelligence (AI) systems and those generated by humans. This classification task is essential for distinguishing the origin of text and understanding whether it was produced by AI models or by human authors. §.§.§ Objective The primary objective is to build a robust classifier capable of accurately distinguishing between AI-generated and human-generated sentences. 
The classifier should generalize well across various AI models and domains, ensuring consistent performance regardless of the specific model or domain from which the text originates. The goal was to design a model that not only performs this task with high accuracy but also adapts to various AI models and domains. It's crucial for the classifier to accurately identify the origin of sentences, regardless of the technology used to generate them or their subject matter, ensuring broad applicability and effectiveness § SYSTEM OVERVIEW Based on our observation (See <ref>), we discovered that language modeling task encodes the various features required for detection of AI written text. So we used pretrained RoBERTa in most of our architectures so exploit this power of language models. §.§ Full RoBERTa Finetune The Full RoBERTa<cit.> Finetune model, chosen as our baseline, boasted an extensive architecture and possessed the highest parameter count among the models under evaluation. Serving as a comprehensive starting point, this model allowed us to assess the effectiveness of subsequent enhancements in comparison. §.§ LoRA with RoBERTa (Frozen) Incorporating Low Rank Adapters <cit.>, we applied fine-tuning techniques to the RoBERTa model while strategically freezing all layers. This approach enabled us to adapt the model to our specific task domain, leveraging pre-trained representations effectively. §.§ LoRA with LongFormer The limitation of RoBERTa's context length (max 512 tokens) posed challenges for handling lengthy sentences in our dataset. To address this, we investigated LongFormer <cit.>, a model designed to efficiently manage longer contexts. Despite employing LoRA for fine-tuning, the model's performance on the validation set fell short of expectations, indicating potential difficulties in generalization. §.§ RoBERTa (2 Layers unfreezed) + BiLSTM Expanding upon RoBERTa's capabilities, we introduced a hybrid architecture by unfreezing two layers and integrating a BiLSTM network <cit.>. RoBERTa served as the primary encoder for sentence representations, with the subsequent BiLSTM layer trained to classify based on the last hidden state. §.§ RoBERTa (Frozen) + GRU In our endeavor to augment RoBERTa's capabilities, we devised a hybrid architecture by integrating a Gated Recurrent Unit (GRU) <cit.> network with the frozen RoBERTa model. Within this framework, RoBERTa served as the encoder for generating sentence representations, while a subsequent GRU layer was incorporated for sequential processing and classification tasks. This amalgamation aimed to leverage the strengths of both RoBERTa's contextual understanding and GRU's recurrent dynamics, contributing to enhanced performance on our target task. §.§ RoBERTa (Frozen) + BiLSTM In our pursuit of enhancing RoBERTa's capabilities, we devised a hybrid architecture by coupling a Bidirectional Long Short-Term Memory (BiLSTM) network with the RoBERTa model <cit.>. In this setup, RoBERTa functioned as the encoder for sentence representations, while a subsequent BiLSTM layer was employed for classification, utilizing the last hidden state for decision-making. For a detailed visual representation of the model's architecture, please refer to the accompanying Figure <ref>. We explored various methodologies (refer to Table <ref> for detailed performance metrics) before selecting the optimal approach as our final model. Subsequently, we assessed the performance of the chosen model, RoBERTA (Freezed) + BiLSTM, on the test dataset. 
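A minimal sketch of the selected architecture (a frozen RoBERTa encoder followed by a BiLSTM whose final hidden states feed a linear classifier) is given below. The roberta-base checkpoint, the single-layer BiLSTM, and the 512-unit hidden size suggested by the 512*2 feed-forward weight reported in the experiments section are assumptions on our part, not the exact training configuration.

import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class RobertaBiLSTMClassifier(nn.Module):
    def __init__(self, lstm_hidden=512, num_classes=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained("roberta-base")
        for p in self.encoder.parameters():          # keep RoBERTa frozen
            p.requires_grad = False
        self.lstm = nn.LSTM(self.encoder.config.hidden_size, lstm_hidden,
                            batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * lstm_hidden, num_classes)

    def forward(self, input_ids, attention_mask):
        with torch.no_grad():                        # encoder weights never updated
            hidden = self.encoder(input_ids=input_ids,
                                  attention_mask=attention_mask).last_hidden_state
        _, (h_n, _) = self.lstm(hidden)              # h_n: (2, batch, lstm_hidden)
        final = torch.cat([h_n[0], h_n[1]], dim=-1)  # concatenate both directions
        return self.classifier(final)                # logits: human vs AI-generated

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = RobertaBiLSTMClassifier()
batch = tokenizer(["An example sentence to classify."], return_tensors="pt",
                  padding=True, truncation=True, max_length=512)
logits = model(batch["input_ids"], batch["attention_mask"])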
§ EXPERIMENTS §.§ Preprocessing All textual data underwent standard preprocessing steps, including tokenization, lowercasing, and punctuation marks. Additionally, specific domain-related preprocessing, such as handling special characters or domain-specific terms, was performed as necessary. §.§ Hyperparameter Tuning Hyperparameters were tuned using a combination of grid search and random search techniques. We explored various hyperparameter combinations to identify the optimal configuration for each model variant. The configuration for LSTM and GRU used in Table <ref> is , , , with has been found as the best configuration for the models. For RoBERTa+LSTM model's feedforward had a single weight matrix of dimension 512*2. § RESULTS We tested our models on various models on the test set. The results can be viewed in (Table: <ref>). Ranking: Our BiLSTM+RoBERTa model achieved a ranking of 46 out of 125 participants in the competition, demonstrating its competitive performance (as shown in Table <ref>). These results highlight the effectiveness of various models, including BiLSTM+RoBERTa and GRU+RoBERTa, in addressing the task objectives. We submitted BiLSTM+RoBERTa based on its strong performance on the validation set. However, after testing all models listed in Table <ref>, we found that GRU+RoBERTa achieved a significantly better result, with an accuracy increase of approximately 4%. § CONCLUSION In conclusion, our BiLSTM+RoBERTa model effectively tackled the task, achieving competitive results, thanks to its deep learning and pre-trained language model. While a similar model with unfrozen RoBERTa boasted higher precision, its complexity came at the cost of increased parameters. Impressively, our model ranked 46th out of 125 competition entries (Table <ref>), showcasing its potential alongside approaches like GRU+RoBERTa. Interestingly, post-competition analysis revealed GRU+RoBERTa's superior accuracy (by about 4%). This highlights the value of exploring diverse architectures and hyperparameter tuning for peak performance. Moving forward, there are several avenues for future work to explore. Firstly, further experimentation with different model architectures, including alternative combinations of encoders and classifiers, could potentially yield improvements in performance. Additionally, fine-tuning hyperparameters and exploring advanced techniques for model optimization may enhance the robustness and generalization capabilities of our system. Furthermore, incorporating additional contextual information or domain-specific knowledge could potentially augment the model's understanding and performance on specific tasks. Overall, our findings contribute to the ongoing research efforts in natural language processing and provide valuable insights for future developments in this domain. § APPENDIX A §.§ A. Setup In this study, we implemented a methodology aimed at distinguishing human-generated sentences from machine-generated ones within a training dataset. To achieve this, we initially segregated the dataset into two distinct subsets: one containing human-generated sentences and the other comprising machine-generated ones. Subsequently, we trained separate models utilizing these segregated datasets. Specifically, we employed two distinct models for this task : i) Bidirectional Long Short-Term Memory (BiLSTM) model, ii) RoBERTa model. Following the training phase, we proceeded to evaluate the performance of both models on a validation dataset. 
During this evaluation, we measured the loss incurred by each model when tasked with discerning between human-generated and machine-generated sentences. This step was crucial for assessing the efficacy and generalization capabilities of the trained models in accurately distinguishing between the two types of sentences. §.§ B. Results The results are shown as graphs in Figure <ref>. We noted a consistent pattern across both sets of models, those trained on human-generated sentences and those trained on machine-generated sentences: the losses incurred by human-generated sentences on the validation set exhibited a wider distribution with higher variance, while the losses associated with machine-generated sentences displayed a narrower distribution with lower variance. This observation supports an inference about the predictability of each type of data. The wider, higher-variance loss distribution for human-generated sentences suggests that these sentences are less predictable, whereas the narrower, lower-variance distribution for machine-generated sentences indicates that the models' behavior on them is more predictable. This finding sheds light on the inherent characteristics of human-generated versus machine-generated sentences, particularly regarding their predictability when processed by the trained models, and highlights the challenges posed by different types of data in natural language processing tasks.
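This analysis can be reproduced in spirit with any model that yields a per-sentence loss. In the sketch below, GPT-2's mean token negative log-likelihood is used purely as a stand-in for the BiLSTM and RoBERTa models trained in this appendix, and the example sentences are invented; the point of the sketch is only the comparison of the two loss distributions.

import numpy as np
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def sentence_loss(sentence):
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        out = lm(ids, labels=ids)        # mean token cross-entropy of the sentence
    return out.loss.item()

human = ["I finally fixed the leaking tap after three failed attempts.",
         "Honestly, the movie was longer than it needed to be."]
machine = ["The movie presents a compelling narrative with strong performances.",
           "The plumbing issue was resolved through a systematic repair process."]

h = np.array([sentence_loss(s) for s in human])
m = np.array([sentence_loss(s) for s in machine])
print("human   mean/var:", h.mean(), h.var())
print("machine mean/var:", m.mean(), m.var())   # the paper reports lower variance here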
http://arxiv.org/abs/2407.03214v1
20240703154042
Measuring cosmic expansion with diffractive gravitational scintillation of nanoHertz gravitational waves
[ "Dylan L. Jow", "Ue-Li Pen" ]
astro-ph.CO
[ "astro-ph.CO" ]
djow@physics.utoronto.ca Canadian Institute for Theoretical Astrophysics, University of Toronto, 60 St. George Street, Toronto, ON M5S 3H8, Canada Department of Physics, University of Toronto, 60 St. George Street, Toronto, ON M5S 1A7, Canada Dunlap Institute for Astronomy & Astrophysics, University of Toronto, AB 120-50 St. George Street, Toronto, ON M5S 3H4, Canada Institute of Astronomy and Astrophysics, Academia Sinica, Astronomy-Mathematics Building, No. 1, Section 4, Roosevelt Road, Taipei 10617, Taiwan Canadian Institute for Theoretical Astrophysics, University of Toronto, 60 St. George Street, Toronto, ON M5S 3H8, Canada Department of Physics, University of Toronto, 60 St. George Street, Toronto, ON M5S 1A7, Canada Dunlap Institute for Astronomy & Astrophysics, University of Toronto, AB 120-50 St. George Street, Toronto, ON M5S 3H4, Canada Perimeter Institute for Theoretical Physics, 31 Caroline St. North, Waterloo, ON, Canada N2L 2Y5 Canadian Institute for Advanced Research, CIFAR program in Gravitation and Cosmology § ABSTRACT The recent discovery of ultra-long wavelength gravitational waves through the advent of pulsar timing arrays (PTA) has opened up new avenues for fundamental science. Here we show that every PTA source will be diffractively lensed by potentially hundreds of galactic disks transverse to its line of sight, leading to modest modulations in the strain, Δ h / h ∼ 10^-3λ^-1_1 pc., due to wave lensing effects. The induced interference, or scintillation, pattern will be resolvable by coherent PTAs and may be leveraged, alongside fore-ground redshift information, to make precise measurements of cosmic expansion. If future PTA experiments can achieve enough signal-to-noise to detect these small modulations, hundreds of redshift-distance pairs may be inferred from the diffractive lensing of an individual PTA source. Measuring cosmic expansion with diffractive gravitational scintillation of nanoHertz gravitational waves Ue-Li Pen July 8, 2024 ======================================================================================================== § INTRODUCTION The discovery of gravitational waves in ground-based interferometers and, most recently, in pulsar timing arrays (PTAs) <cit.> will enable us to probe the sky on a wide range of wavelengths spanning many orders of magnitude. Gravitational waves are expected to exhibit wave lensing effects, providing new ways of probing dark matter structure on small-scales with millihertz gravitational waves <cit.>. In this letter, we argue nanohertz gravitational wave sources will be diffractively lensed by hundreds of edge-on galaxies towards any given line of sight. While the effect will be small (a sub-percent modulation in strain), it will be ubiquitous and the resulting interference, or scintillation, pattern will be resolvable by galactic DSA-2000 or SKA-era PTAs <cit.>. If detectable, one may effectively measure hundreds of redshift-distance pairs for a single PTA source, leading to measurements of the Hubble constant to one-part-in-ten-thousand. Notably, because of the large bending angles achieved by the diffractive gravitational lensing, the observed lensing time delays will be dominated by the geometric part, as opposed to the Shapiro delay. Thus, the proposed measurement does not rely on precise modeling of galaxy mass profiles, in contrast with previous strong lensing measurements <cit.>. 
While a detection of the cumulative diffractive effect is futuristic, it may enable us to directly probe changes in the rate of cosmic expansion, as well as anisotropies in cosmic expansion. § DIFFRACTIVE GRAVITATIONAL LENSING Here we will briefly describe the diffractive lensing theory for a simple Gaussian lenses. First, let us consider the case of an isolated Gaussian lens with lens potential: Ψ̂(ξ) = A e^-ξ^2_1/2 ℓ^2_1 e^-ξ^2_2/2 ℓ^2_2, where ξ = [ξ_1, ξ_2] is the physical coordinate in the lens plane, and ℓ_1 ≤ℓ_2 set the angular size of the lens along the semi-minor and -major axes. Here we define the lens potential to be the Shapiro delay of the lens, so that the amplitude, A, has units of time, and for a lens of mass M is of order A ∼ G M / c^3. We want to compute the Kirchhoff-Fresnel integral which determines the complex amplification factor for the observed radiation field due to the lens. It is often convenient to define dimensionless coordinates x = ξ / ℓ_l and y = η / ℓ_l, where η is the relative angular position between the source and lens, and we choose to normalize the angular coordinates by the smaller of the two scales, ℓ_1, ℓ_2. We also define the dimensionless lens potential Ψ(x) = ℓ_1^-2 D^-1Ψ̂(xℓ_1), where D = D_l D_ls / D_s is a combination of the angular diameter distance to the lens, to the source, and between the lens and source. The Kirchhoff-Fresnel integral is then given by F(y) = ν/2π i∫exp{ i ν[ 1/2 |x - y|^2 - Ψ(x) ]} d^2x where ν =(1 + z_l) ωℓ^2 _1/c d, where ω is the angular frequency of the radiation incident at the observer and z_l is the cosmological redshift of the lens. In the diffractive limit, ω→ 0, we can evaluate the integral perturbatively: F(y) = 1 + i ν∫Ψ(x) e^i ν/2 |x - y|^2 d^2x + 𝒪(ϵ^2) where the expansion parameter ϵ≡κν is a product between the dimensionless frequency ν and the convergence, κ≡ |1/2∇^2_ξΨ̂(0)|. The perturbative expansion is valid when ϵ≪ 1 <cit.>. For simplicity, consider an off-axis source centred in the the semi-major axis of the lens, y = [y,0]. The perturbative expansion is obtained analytically: F(y) = 1 + i νκ_r f(y, ν) f(0, ν s^2), f(y, ν) = √(ν/i + ν)exp{- ν^2 y^2/2 (1 + ν^2)}exp{i ν y^2/2 (1 + ν^2)}, where we have defined κ_1 = 1/2∂^2_ξ_1Ψ̂(0) and s = ℓ_2 / ℓ_1. Note that the total convergence is bounded, κ_1 ≤κ≤√(2)κ_1, so that as long as the convergence along the semi-minor axis satisfies κ_1 ν≪ 1, then the diffractive limit holds. Let us note a few things about Eq. <ref>. First, the diffractive modulation is exponentially suppressed whenever y > ν^-1, setting an effective cross-sectional radius, ξ_ max = ℓ_1 ν^-1. Namely, in order for the lens to significantly modulate the flux from the source, the angular separation between the source and lens must be less than ξ_ max. This is equivalent to the familiar relation that a diffractive lens can bend light up to a maximum bending angle of α∼λ / ℓ_1. Secondly, when the source is within this radius, the total flux from the lens – defined a μ = |F - 1|^2 – is of order μ∼κ^2_1 ν^3 √(ν^2 s^4 / (1 + ν^2 s^4)). In the limit that ν s^2 ≫ 1, we obtain μ∼κ^2_1 ν^3, which is simply the diffractive flux for a one-dimensional Gaussian lens. This condition is equivalent to ℓ_2 ≳ r_F, where r_F = √(λd / (1 + z_l)) is the Fresnel scale. Roughly speaking, the Fresnel scale sets the scale of the interference fringes that arise in wave optics. Thus, when one of the axes of the lens is large compared to the Fresnel scale, the lens behaves as a one-dimensional lens with convergence κ_1. 
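The perturbative amplification above is straightforward to evaluate numerically. The sketch below implements F(y) = 1 + i ν κ_1 f(y, ν) f(0, ν s^2) and checks the quoted scaling μ ∼ κ_1^2 ν^3 in the ν s^2 ≫ 1 regime; the parameter values are illustrative and chosen so that the perturbative condition ν κ_1 ≪ 1 holds.

import numpy as np

def f(y, nu):
    pref = np.sqrt(nu / (1j + nu))
    damp = np.exp(-nu**2 * y**2 / (2 * (1 + nu**2)))       # exponential suppression for y > 1/nu
    phase = np.exp(1j * nu * y**2 / (2 * (1 + nu**2)))
    return pref * damp * phase

def amplification(y, nu, kappa1, s):
    return 1 + 1j * nu * kappa1 * f(y, nu) * f(0.0, nu * s**2)

nu, kappa1, s = 0.3, 1e-3, 100.0        # elongated lens, nu*s^2 >> 1, nu*kappa1 << 1
for y in np.linspace(0.0, 10.0, 5):     # source offsets in units of ell_1
    F = amplification(y, nu, kappa1, s)
    print(f"y={y:4.1f}  mu=|F-1|^2={abs(F - 1)**2:.3e}")
print("kappa1^2 * nu^3 =", kappa1**2 * nu**3)   # quoted one-dimensional-lens scaling

The printed flux at small offsets agrees with κ_1^2 ν^3 to within the order-unity factor ν / √(1 + ν^2), and the suppression beyond y ∼ ν^{-1} shows the finite lensing cross-section discussed above.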
The diffractive flux of a one-dimensional lens is always larger than the flux of an axisymmetric lens of equivalent width. Relating the dimensionless quantities to physical scales, we obtain ν = 2 π( ℓ_1/r_F)^2, κ_r = ( r_E/ℓ_1)^2, ϵ ∼νκ_r = 4 π R_s (1 + z_l)/λ, where r_E = √(2 R_s d) is the Einstein radius for a lens with Schwarzschild radius R_s. It immediately follows that the diffractive regime is attained when the wavelength of the radiation is large compared to the Schwarzschild radius of the lens. In this work, we will be interested in the cumulative diffractive flux of many Gaussian lenses. Since the diffractive regime is defined by a perturbative expansion, the fluxes simply add linearly. Thus, to compute the total diffractive flux, we simply add the flux of all the lenses for which the source falls within the lensing cross-section, σ = 2 πξ^2_ max. In the next section we will perform this sum for nanohertz gravitational waves being diffracted by luminous galactic disks along the line of sight. § THE GALACTIC DIFFRACTION GRATING Consider a gravitational wave source observed in a pulsar timing array (PTA) with a typical wavelength of λ = 1 pc. Galactic disks with typical masses on the order of 10^10 M_⊙ have Schwarzschild radii well below 1 pc. Thus, galactic disks towards the direction of the gravitational wave will produce diffractive modulations. To obtain a quick estimate of the cumulative diffractive flux, we consider only the contribution from galaxies seen edge on. As noted in the previous section, highly asymmetric lenses produce larger diffractive fluxes. The Fresnel scale for a parsec-wavelength source at gigaparsec distances is r_F ∼ 10 kpc, which coincides with the typical diameter of a galactic disk. Thus, edge-on galaxies are typically in the regime where ℓ_2 ≳θ_F, so that they behave effectively as one-dimensional lenses. The individual diffractive flux from such a lens is given by μ∼κ^2_1 ν^3 = 32 π^3 ( R_s ℓ_1/λ r_F)^2. An edge-on galaxy with ℓ_1 ∼ 0.1 kpc yields μ∼ 10^-7. This is, of course, a small number. However, in general, there will be a large number of edge-on galaxies contributing to the diffractive flux. In order for an edge-on galaxy with width ℓ_1 to contribute, it needs to be within ξ_ max of the line-of-sight to the source. For an edge-on galaxy at a gigaparsec lensing a nanohertz gravitational wave, one obtains ξ_ max∼ 1 Mpc. In other words, edge-on galaxies within a megaparsec of the line-of-sight towards a PTA source will diffractively modulate the source. One arrives at the same conclusion by noting that a lens diffractively bends light up to an angle of α∼λ / ℓ_1 ∼ 10'. Typically, there will be over ∼ 10^4 galaxies within a megaparsec of any given line of sight out to a distance of D_s ∼ 1 Gpc. If only one in a hundred of these galaxies is edge on (assuming galaxies have a typical aspect ratio of 1:100), hundreds of such galaxies will contribute significantly to the flux, meaning the cumulative diffractive flux is μ_ tot.∼ 10^-5. However, as PTAs are sensitive directly to the strain of the gravitational radiation, the signal-to-noise depends on √(μ_ tot.)∼ 10^-3 - 10^-2. The size of this effect grows linearly with frequency so that at f = 10^-7 Hz, which is well within PTA sensitivities, the effect may grow above the percent level. §.§ Total integrated flux A more robust estimate of the cumulative diffractive effect can be obtained by integrating over the galaxy mass function, as well as possible orientations of the galaxies. 
Specifically, we will evaluate: μ_ tot.(z_s) = ∫c dz_l/(1+z_l) H(z_l) dM dι × 2 πξ_ max^2(λ, ι, z_l; z_s) μ(M, λ, ι, z_l; z_s) ϕ(M, z_l), where ϕ(M, z_l) is the number density of galaxies with luminous mass M at redshift z_l, and ι denotes the inclination angle of a galaxy relative to the radial direction from the line of sight to the source. The c / (1 + z) H(z) term is dR/dz_l where dR is the proper distance. In principle, we should also integrate over the possible diameters and scale-heights of the galactic disks, but for simplicity we assume a fixed relationship between the radius of the galaxy and the disk mass, R_ gal = a (M / 10^10 M_⊙)^b, where a = 5.5 kpc and b = 0.27 <cit.>. We will assume that every galaxy has a thin disk with a fixed aspect ratio, where the scale-height is one one-hundredth of the radius, h_ gal = 0.1 R_ gal, consistent with the observed value for the Milky Way <cit.>. Taking μ = κ^2_r ν^3 √(ν^2 s^4 / (1 + ν^2 s^4)), we obtain 2 πξ_ max^2 μ = 16 π^2 r_F^2 R_s^2/λ^2√(2 πℓ_r^2/r_F^2 + 2 πℓ_r^2), where ℓ_r is the projected width of the galaxy on the plane of the sky in the radial direction from the line of sight: ℓ_r = max{h_ gal, 2 R_ galcosι}. For the galaxy stellar mass function, we adopt a double Schechter function with empirical parameters given by <cit.>. Now, performing the integral we obtain μ(z_s = 2) = 6 × 10^-6 for λ = 1 pc. The diffractive modulation of the strain is √(μ(z_s =2)) = 2 × 10^-3. The top panel of figure <ref> shows the strain modulation as a function of wavelength. § MEASURING H_0 FROM DIFFRACTIVE LENSING So far we have given a simple order-of-magnitude estimation of the collective diffractive effect of the galactic disks towards any given sight-line for a PTA source. We found that hundreds of edge-on galaxies may lead to just-sub-percent level total modulations of the observed strain. Such a small signal is not observable with present PTA experiments; a gravitational wave background was only very recently reported to 4 σ <cit.>. However, let us consider what we might be able to learn by detecting this diffractive effect if and when it becomes possible in the future. We propose here that it is possible to measure cosmological parameters (specifically the Hubble constant) to a level of precision that would be difficult to achieve through any other means. The basic principle behind this idea is that if the cumulative lensing effect of hundreds of galaxies can be detected in a pulsar timing array, then, coupled with a wide-field redshift survey, one can, in principle, infer hundreds of redshift-distance pairs from a single detection of a PTA source. In other words, one would be able to make extremely precise measurements of cosmic expansion from a single source. With multiple sources, one could potentially probe anisotropies in cosmic expansion to high precision. Cosmological measurements from lensing require precise measurements of the difference in time of arrival between the lensed images. <cit.> show that the diffractive modulation resulting from a single lens can be regarded as a result from the interference between the primary image and a weak diffractive image with a well-defined time delay. It is critical to note that in the diffractive regime the time-delay associated with the diffractive image is dominated by the geometric part of the delay. A lens can form a diffractive image out to y ∼ν^-1. The total phase delay associated with the image is of order ν/2 y^2 - κ∼ν^-1 - κ. 
Thus, in the diffractive limit (νκ≪ 1) the geometric part of the delay dominates. Now, one can access this time delay directly by Fourier transforming the chromatic interference pattern as a function of frequency. However, PTA sources are effectively monochromatic, and, therefore, in order to extract time-delay information one must rely on the spatial variation of the flux. To sketch how such a measurement might proceed, we must think of the pulsar timing array as an interferometer, as, indeed, it is. For simplicity, we will consider a weakly modulated source composed of many dim diffractive images, each with individual flux, μ_j ∼ 10^-7, and time delays, τ_j. The response of the interferometer is given by V = 1 + ∑_j μ^1/2_j e^i ωτ_j, where we have normalized the intensity of the images so that the primary image has unit flux. If the diffractive lenses are located at some angular position θ_j relative to the source's line-of-sight, we can further re-write the time delay for each diffractive image as the geometric time delay: τ_j = D_j θ^2_j/2c. For simplicity, we will assume that each image forms a delta function in delay space. Typically, however, wave effects smear the delta function out so that the flux from each lens arrives with a broader distribution of time delays. Nevertheless, the peak of this distribution is determined by the geometric delay (Eq. <ref>). Now let us assume that we have knowledge of the foreground towards the gravitational wave source we are observing. In particular, say we know the redshift of the source, z_s, each of the lensing galaxies, z_j, and also the angular position of the lensing galaxies, θ_j, via some wide-field spectroscopic survey towards that line of sight. We can construct an estimate of the lensing time delays: τ̂_j (Ω̂) = D(z_j, z_s; Ω̂) θ^2_j/2c, where Ω̂ = {Ĥ_0, Ω̂_m, Ω̂_Λ} are the cosmological parameters. The hats denote that these quantities represent variable guesses for the true values of the cosmological parameters, which we represent as Ω = { H_0, Ω_m, Ω_Λ}. The estimated time delay is equal to the actual time delay, τ̂_j(Ω̂) = τ_j, when the chosen values of the cosmological parameters equal the true values, Ω̂ = Ω. The dependence of the time delay on Ω̂ comes through the fact that for a given set of cosmological parameters one can infer all of the lensing distances from the two redshifts, z_j, z_s. For low redshifts, the distance will most strongly depend on the Hubble constant, and so for simplicity we will consider the estimated time delay as a function of Ĥ_0 alone, fixing Ω̂_m = Ω_m, Ω̂_Λ = Ω_Λ. Our goal is to measure the time delays of the diffractive images from the PTA and compare this to our estimated values to infer the cosmological parameters. However, because each of the individual diffractive images will generally be extremely dim it will be practically impossible to resolve the individual lensed images on the sky. Instead, we propose a stacking analysis, treating the PTA as a phased array. By adding relative phase offsets to the different antennae, one can effectively point the telescope, selecting for the flux arriving at some particular angular location on the sky. Since we know, a priori, where all the lenses are, we can simply “point" our PTA at each lens, and measure the response. This requires that the angular resolution of the telescope is sufficient to resolve the individual lenses. Typical angular separations of the images will be ∼ 1'. 
For a galactic PTA with a baseline η∼ 10 kpc,the achievable angular resolution is set by θ∼λ / η∼ 10” for λ = 1 pc. The angular resolution decreases with wavelength, and so the proposed measurement will fail for λ > 10 pc. Note that this requirement is equivalent to stating that the spatial variations in the interference pattern are small relative to the size of the PTA. Figure <ref> shows a simulation of the expected diffractive modulation for λ = 1 pc, where the interference fringes are typically a kiloparsec in size. Now, because the diffractive flux from each image is so small, the signal from each pointing will be indistinguishable from noise. Thus, we must stack the responses from each pointing to recover a signal, multiplying each component by a phase, e^-i ωτ̂_j, determined by our estimate for the time delay for that lens. The idea is that if our guess for H_0 is correct, then our estimated delays will match the actual delays and, therefore, all of the components will be added in-phase with each other, leading to a large cumulative signal. If our estimated delays are incorrect, there will be no signal. The total response that results from this stacking procedure is given by V_ stack = 1 + ∑_j μ^1/2_j e^i ϕ̂_j(Ĥ_̂0̂), where each image is being added with a phase given by ϕ̂_j = ω( τ_j - τ̂_j(Ĥ_0) ). When Ĥ_0 = H_0, then every ϕ̂_j = 0 and we get that V_ stack = 1 + ∑_j μ^1/2_j. The point is that if we know the phase of the incoming images, then we can leverage a coherent PTA to directly sum the strain of the images. In principle, the maximum signal one can measure is the sum of strains, ∑_j μ^1/2_j, which, in general, is much larger than the diffractive modulation we estimated in the previous section. In that section, we estimated the diffractive modulation as the square-root of the total flux, √(∑_j μ_j). When there a hundred lenses, the coherent sum of strains may be an order of magnitude larger than √(∑_j μ_j), so that the actual signal could potentially be much larger than what we predicted. However, in realistic diffractive lensing, the flux from a given lens does not arrive with a single well-defined time delay, but rather a range of delays centred around some value. As a result of this smearing, the amplitude of the stacked signal will be less than ∑_j μ^1/2_j, and the square-root of the total flux is a better estimate of the signal strength. For a single lens, the response is periodic in Ĥ_0 , as the response is also maximized whenever ϕ̂_j = 2 n π. Thus, a single lens is insufficient to infer the Hubble constant unambiguously. In general, multiple lenses will contribute and the response will be a sum of oscillatory functions. In fact, Re [V_ stack - 1] will be a sum of cosines: Re[ V_ stack(Δ H_0) - 1 ] = ∑_j μ^1/2_j cos( 2 πΠ^-1_j Δ H_0 ), where Π_j = λ H_0 / c τ_j. This follows from the fact that at low red-shifts the angular diameter distances as scale as D_l, D_s, D_ls∝Ĥ_0^-1. It follows that the estimated delay (Eq. <ref>) can be written as τ̂_j = τ_j (H_0 / Ĥ_0), where τ_j and H_0 are the true values of the delay and the Hubble constant, respectively. Defining deviations from the fiducial value of the Hubble constant to be Δ H_0 = Ĥ_̂0̂ - H_0, we find that for small deviations, the phase offset is given by ϕ̂ = ω (τ - τ̂) ≈ωτ/H_0Δ H_0. Thus, for small deviations from H_0, the phase offset is linear in Δ H_0, from which Eq. <ref> follows. 
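The behavior of the stacked response can be illustrated with a toy calculation in which each diffractive image contributes a single delta-function delay, as assumed above. The delay range, per-lens flux, and number of lenses below follow the order-of-magnitude numbers quoted in the text (delays of order 10^3 years, μ_j ∼ 10^-7, λ = 1 pc) but are otherwise arbitrary.

import numpy as np

c_pc_s = 9.716e-9                 # speed of light in pc / s
lam = 1.0                         # gravitational wavelength in pc
omega = 2 * np.pi * c_pc_s / lam  # angular frequency in rad / s
H0_true = 70.0                    # fiducial Hubble constant in km/s/Mpc

rng = np.random.default_rng(1)
n_lens = 300
tau = rng.uniform(0.5e3, 2e3, n_lens) * 3.156e7     # geometric delays (0.5-2 kyr) in seconds
mu = np.full(n_lens, 1e-7)                          # per-lens diffractive flux

def stacked_response(H0_hat):
    # phase offset phi_j = omega * (tau_j - tau_j * H0_true / H0_hat)
    phi = omega * tau * (1.0 - H0_true / H0_hat)
    return np.sum(np.sqrt(mu) * np.cos(phi))

H0_grid = np.linspace(69.5, 70.5, 4001)
resp = np.array([stacked_response(h) for h in H0_grid])
sigma_H0 = lam * H0_true / (c_pc_s * tau.mean())    # expected peak width
print("peak at H0 =", H0_grid[np.argmax(resp)], " expected width ~", sigma_H0, "km/s/Mpc")

With a few hundred delays spread over different periods Π_j, only Ĥ_0 = H_0 adds every term in phase, and the width of the recovered peak matches the estimate σ_H_0 = λ H_0 / (c ⟨τ_j⟩) used below.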
When the number of lenses is large enough, the response loses its periodic structure and one can unambiguously infer the Hubble constant from the value of Ĥ_0 at which the signal peaks. The width of this peak, which sets the precision of our inference, is approximately σ_H_0 = λ H_0/c ⟨τ_j ⟩. In general, this can be extremely small. For a lens with an impact parameter of 1 Mpc from the line of sight, halfway between the observer and the source at a distance of a gigaparsec, the time delay is roughly ∼ 10^3 years. Thus, for a parsec-wavelength gravitational wave, one could potentially measure the Hubble constant to a precision of σ_H_0 / H_0 ∼ 10^-3; i.e. well below percent level. What is the optimal wavelength to attempt this measurement with? Pulsar timing arrays are sensitive to a large range of wavelengths. The precision on the Hubble constant increases linearly with wavelength; however, the cumulative flux decreases. While larger wavelengths mean larger lensing cross-sections, the actual flux from individual lenses decreases faster with wavelength. Figure <ref> shows the estimated diffractive modulation (Eq. <ref>) alongside the detection level for the Hubble constant, H_0 / σ_H_0 one would obtain at that wavelength. The grey region on the left represents an effective cut-off: for wavelengths below this, the number of galaxies that significantly contribute to the flux is too small to unambiguously determine H_0. The high wavelength cutoff is obtained when the PTA can no longer resolve the individual galaxies: λ / η = 1'. The bottom panel of figure <ref> shows the same information in a different way, plotting the H_0 precision curve in the right panel against the diffractive modulation curve. However, we have converted the diffractive modulation, √(μ_ tot.), to an effective signal-to-noise one would need to achieve with a PTA in order to detect such a small modulation. That is, if the modulation is ∼ 10^-3 then one would need to be able to detect a single gravitational wave source with a signal-to-noise of more than 1000 to detect such a modulation. However, if this signal-to-noise can be achieved, one could determine the Hubble constant to within a part in ten-thousand. This is an extreme level of precision that cannot be achieved with other probes. At this level of precision, the effect of the lens galaxies' peculiar velocities play an important role, as the observed redshifts deviate from the cosmological redshift: z^ obs_l = z_l + δ z_l. The contribution to the redshift from the peculiar velocity results in an error on the estimated phase of the lensed image: δϕ_j = δ z_l ϕ_j / z_l. When this error exceeds roughly 2π-radians, the lens no longer contributes to the signal-to-noise of the Hubble constant. For typical peculiar velocities δ z_l / z_l ∼ 10^-3 <cit.>, and since H_0 / σ_H_0∼⟨ϕ_j ⟩, the result is to place an effective upper-limit on the achievable precision of H_0 / σ_H_0 < 10^4. Now assuming that the signal for the recent 4σ detection of a gravitational wave background is dominated by only a handful of bright super massive black hole mergers (following expectations of galaxy merger rates <cit.>), then at best the current achievable signal-to-noise is S/N∼ 10. This is achieved with 67 pulsars. The signal-to-noise is expected to grow linearly with the number of pulsars. Given that there are tens-of-thousands of pulsars in the galaxy, the upper limit for the PTA signal-to-noise is larger than S/N ∼ 1000. 
However, achieving precise timing models for that number of pulsars is extremely futuristic. More near term, we may expect next-generation radio telescopes such as the square-kilometre array (SKA) to add hundreds of pulsars to pulsar timing arrays <cit.>. This would already be enough to perform precision cosmology measurements that are competitive with other probes. Achieving this signal-to-noise alone is insufficient for our proposed measurement. Galactic baselines between pulsar pairs are also needed, as well as precisely known distances to each pulsar in order to use the PTA as a coherent phased array. These conditions, while futuristic, are likely to be met by SKA or future SKA-like experiments. § CONCLUSION Unlike most strong lensing effects where one must hope a given source happens to lie close enough to a lens to detect anything, every ultra-long-wavelength gravitational wave source will exhibit diffractive modulation from the galactic diffraction grating. Moreover, since the time delay associated with diffractive images is dominated by the geometric delay, the effect is independent of the precise details of the potentials of the lensing galaxies. While the total effect will be small, by considering PTAs as phased arrays and utilizing foreground information to perform a stacking analysis, it may be detected. Such a detection can be used to make extremely precise measurements of the Hubble constant from a single source.
http://arxiv.org/abs/2407.02151v2
20240702105001
Labeling Sentences with Symbolic and Deictic Gestures via Semantic Similarity
[ "Ariel Gjaci", "Carmine Tommaso Recchiuto", "Antonio Sgorbissa" ]
cs.RO
[ "cs.RO" ]
plain Predicting correlations in superradiant emission from a cascaded quantum system Michael Fleischhauer July 8, 2024 =============================================================================== plain § ABSTRACT Co-speech gesture generation on artificial agents has gained attention recently, mainly when it is based on data-driven models. However, end-to-end methods often fail to generate co-speech gestures related to semantics with specific forms, i.e., Symbolic and Deictic gestures. In this work, we identify which words in a sentence are contextually related to Symbolic and Deictic gestures. Firstly, we appropriately chose 12 gestures recognized by people from the Italian culture, which different humanoid robots can reproduce. Then, we implemented two rule-based algorithms to label sentences with Symbolic and Deictic gestures. The rules depend on the semantic similarity scores computed with the RoBerta model between sentences that heuristically represent gestures and sub-sentences inside an objective sentence that artificial agents have to pronounce. We also implemented a baseline algorithm that assigns gestures without computing similarity scores. Finally, to validate the results, we asked 30 persons to label a set of sentences with Deictic and Symbolic gestures through a Graphical User Interface (GUI), and we compared the labels with the ones produced by our algorithms. For this scope, we computed Average Precision (AP) and Intersection Over Union (IOU) scores, and we evaluated the Average Computational Time (ACT). Our results show that semantic similarity scores are useful for finding Symbolic and Deictic gestures in utterances. § INTRODUCTION Co-speech gestures are crucial in communication between humans <cit.>. Indeed, when we speak, we unconsciously use body gestures to better express our intentions or to emphasize the verbal messages we want to convey. These gestures are also meaningful in the interaction between humans and artificial embodied agents <cit.>, and between humans and social robots <cit.>. With recent advancements in machine learning, there has been an increased research focus on generating co-speech gestures. According to <cit.>, co-speech gestures can be classified into six categories: 1) Iconic gestures visually represent concepts we are verbally communicating e.g., flapping arms to represent a bird, 2) Metaphoric gestures represent abstract content, e.g., feelings, 3) Beat gestures emphasize the speech content and are often unrelated to semantics, 4) Deictic gestures are used to indicate objects, 5) Adaptors are self-touching movements, and finally, 6) Emblematic or Symbolic gestures carry specific meanings and are strongly related to culture, e.g., creating a circle with index and thumb to indicate “OK". Previous research on generating co-speech gestures for artificial agents has primarily focused on two approaches: 1) rule-based methods, where experts handcraft rules to map gestures to speech content, and 2) data-driven methods, where machine learning algorithms learn the mapping rules from data. Rule-based methods produce smooth and human-like gestures but can only consider a limited set of gestures and mapping rules. Data-driven models overcome this limit by learning from data, but the generated gestures often lack natural forms and semantic dependency, particularly with Symbolic and Deictic gestures. Moreover, the meaning of Symbolic and Deictic gestures can vary significantly depending on the culture of people using them <cit.>. 
For instance, in India, the “Namaste" gesture with palms pressed together is used for greeting, while in Italy, the same gesture is more commonly associated with praying. To address these limitations, we propose two rule-based algorithms to identify and label the words within a given sentence that are semantically related to Symbolic and Deictic gestures. This approach can complement data-driven or other rule-based methods in hybrid configurations, allowing for the generation of various gesture types. For instance, data-driven methods can generate gestures for words not identified by the rule-based algorithms, while smooth transitions between methods can be achieved using interpolation techniques <cit.>. In this work, we focus only on Symbolic and Deictic gestures for three main reasons: 1) they depend on the speech semantics and the culture of people using them, 2) they have specific and easily recognizable shapes, 3) they are not frequently used, so they will not look repetitive when reproduced by a conversational agent. The proposed algorithms are based on semantic similarity scores <cit.> computed with the Cross-Encoder RoBerta model <cit.>. The core idea is to measure the similarity between a set of reference sentences designed by humans, which represent the contexts in which specific gestures are expected to be generated, and sub-sentences within an objective sentence that the artificial agent has to pronounce while using co-speech gestures. We chose gestures from <cit.> and <cit.> recognizable in Italian culture and reproducible using only the upper body. This limitation ensures easier execution by social robots which can control hands and finger movements, such as Tiago <cit.> or Alter-ego <cit.>, as shown in Figure <ref>. We then associated these gestures with a set of reference sentences chosen by human experts. To test our approach, we asked 30 Italian participants to label two sets of 300 sentences generated using a ChatGPT language model. We compared their labels with those produced by our labeling algorithms by computing Intersection Over Union (IOU) and Average Precision (AP). Additionally, we measured the Average Computational Time (ACT) required by each algorithm to label a sentence. Note that the developed algorithms only require a set of reference sentences and a pre-trained LLM model, such as RoBerta, to function. The collected data serves solely as ground truth to validate our methods. The paper is divided as follows: Section <ref> covers related works. Section <ref> details the methodology. Section <ref> describes the experimental setup. Section <ref> presents the results. Section <ref> provides conclusions and suggestions for future research. § RELATED WORK Co-speech gesture generation approaches can be divided into two main categories: rule-based and data-driven. §.§ Rule-based Rule-based methods are commonly used in commercial robots because the generated gestures are defined through clear rules from human behavior studies. For instance, BEAT <cit.> analyzes input text and generates an appropriate gesture sequence based on rules from research on human conversational behavior. In <cit.>, researchers created a library linking frequently used words to a set of handcrafted gestures. The approach in <cit.> involved generating gestures according to specific text styles, while <cit.> automatically identified and stored text-gesture mappings from videos without using learning models. 
At runtime, this method computes semantic similarity using GloVe embeddings <cit.> to find the most similar sequence of words in the learned text-gesture mapping and retrieve the corresponding gesture. However, the model required increasing memory with increasing mappings found from data, making it challenging to use in real-world applications. Additionally, the set of gestures considered is limited. §.§ Data-driven Data-driven methods aim to learn the mapping rules between gestures and speech features, allowing for the generation of a wide variety of gestures but often with less control over the learned mappings. These approaches can be further categorized into probabilistic, generative, and LLM-based. §.§.§ Probabilistic they rely on probabilistic models to map speech features to a set of animated gestures. For example, <cit.> estimated five expressive gesture parameters from speech-audio. At runtime, speech audio is used as input to generate a sequence of gestures by selecting, for each time frame, the gesture with the most similar parameters. The work in <cit.> used morphemic analysis on utterances to predict the most likely gesture sequence. The utterance is first segmented into expression units using a Random Forest model, and then another model associates each expression unit with a gesture. In <cit.>, a kNN-based algorithm retrieves gestures from an audio-gesture database using similarities computed on poses and audio features. The retrieved sequences are then refined using Generative Adversarial Networks (GANs) <cit.>. While this method overcomes the issue of limited gestures by using the generative network, its computational cost increases with the database size. §.§.§ Generative they use generative models like Transformer architectures <cit.> and Diffusion models <cit.> to overcome the problem of limited gestures. These methods learn end-to-end mappings from speech features to gestures, generating then new gestures from speech input. For example, in <cit.>, the authors extracted speech audio and poses from TED Talks videos of Indian speakers and learned mappings between poses and audio using GANs. Similarly, <cit.> extracted poses, audio, text, and speaker identity features from a TED Talks dataset and used an adversarial scheme to learn a mapping between gestures and the other features. The work in <cit.> used a motion-capture dataset to extract gestures, text, gender, handedness, intended emotion, and acting tasks (narration or conversation). The researchers learned a mapping between text embeddings generated using Transformers and these features, excluding gestures. This combined representation, along with past gestures, was then fed into a Transformer decoder to generate gestures for subsequent time steps. While these methods can generate a wide variety of gestures, they often lack control over the generation process, producing gestures with forms unrelated to speech semantics. §.§.§ LLM-based they use Large Language Models (LLMs) to enhance the semantical dependency of generated gestures. For instance, <cit.> used ChatGPT to determine the gesture types and timings of a sentence, retrieved the corresponding gestures from a database, and then combined them with rhythmic gestures generated by generative models. Similarly, in <cit.>, authors used ChatGPT prompts to retrieve gesture types for sentences from annotated examples and to suggest gesture types for sentences without annotated examples. 
While both <cit.> and <cit.> use LLMs as black boxes to predict gesture types, our approach employs heuristics on semantic similarity scores computed by the RoBerta model <cit.> to label only specific words related to a set of predefined gestures. This method allows for more precise labeling, greater control over the LLMs' behavior, and integration with other methods more suitable for generating different types of gestures. § METHODOLOGY In this study, we propose two rule-based labeling algorithms designed to annotate sentences with Symbolic and Deictic gestures based on semantic similarity scores produced by a recent Large Language Model (LLM), RoBerta. Additionally, we introduce a baseline labeling algorithm that uses a statistical approach to label sentences. LLMs have demonstrated their ability to capture text semantics by achieving high scores in the Semantic Textual Similarity (STS) benchmark <cit.>. This suggests that it is possible to associate a semantic-dependent gesture with a set of reference sentences that embed the contexts in which the gesture is often reproduced. The choice of gestures was informed by studies <cit.>, <cit.>, <cit.>, while the reference sentences were selected by four experts in human-robot interaction. More formally, we defined a set of sets of sentences D = {S_1,..., S_l,...S_k}, i.e., the reference sentences, that we heuristically assume are capable of capturing the contexts in which k gestures G = {g_1,...,g_l,...,g_k} are generated. Each S_l = {s_1,...,s_i,...,s_n_l} is a subset of D that contains a set of n_l sentences s_i representing the contexts in which the corresponding gesture g_l is supposed to be generated. Given an objective sentence S_obj composed of a sequence of words W_obj=⟨ w_1,...,w_n_obj⟩, we aim to find all sequences of words W_g ↪W_obj related to the semantics of the gestures G. Each sequence inside W_g is manually associated with a corresponding gesture by a human participant and called a real label, while W_p contains the labels predicted by our algorithms. The number of words inside a label will be called the window size. For example, W_g = ∅ if no words inside S_obj are related to the semantics of G. If, instead, S_obj contains a subsequence of three words ⟨ w_1,w_2,w_3 ⟩ related to g_1, and another subsequence of two words ⟨ w_5,w_6 ⟩ related to g_3, then W_g = ⟨⟨ w_1,w_2,w_3 ⟩_g_1,⟨ w_5,w_6 ⟩_g_3⟩. Ideally, our algorithms should predict W_p ≡ W_g. This approach is illustrated in Figure <ref> with a practical example. In this work, we assume that each word is assigned to at most one gesture. We also assume that each of the Symbolic and Deictic gestures we chose is used in different contexts from the others, so s_i ≠ s_j ∀ i ≠ j, given that each s_i represents a specific context through semantics. Identifying D and W_g is challenging since co-speech gestures are idiosyncratic <cit.>, leading different people to use different gestures even when the context and the pronounced words are the same. However, Symbolic and Deictic gestures are culturally and semantically encoded <cit.>,<cit.>, so we assume that people from the same culture will use similar gestures in similar contexts. In the following, we describe the algorithms we developed. §.§ Baseline algorithm The Baseline algorithm uses a statistical approach to label a sentence S_obj. 
If ground-truth labeled sentences are provided by human participants, the algorithm assigns labels using the following statistics for each gesture: P(label with gesture_l | gt_labels) = (total labels of gesture_l) / (total labels), and win_g_l | gt_labels ∼ 𝒩(μ_g_l, σ_g_l^2), where gt_labels represents the ground truth labels, 𝒩 represents a normal distribution, and μ_g_l and σ_g_l^2 are the mean and the variance of the window size for the gesture g_l, indicated as win_g_l, extracted from the ground truth labels. Note that the window size win_g_l is rounded to the nearest integer since it cannot have floating values. We also limited the maximum value of win_g_l to a constant w_max, which is the same for the following algorithms as well. This choice enables labeling sentences according to the ground truth distribution. In the absence of ground truth data, alternative distributions can be utilized. We finally assign to each label a similarity score with a random value above a certain threshold th_0, the same used for the following algorithms. A practical example of this algorithm is shown in Figure <ref>. §.§ Fixed Window algorithm The Fixed Window algorithm uses a fixed window size for each gesture g_l, with l = 1,...,k. This means that every time we compute the semantic similarity between a sequence of words inside a sentence and s_i ∈ S_l, the window size depends on the g_l related to S_l, and this dependency is indicated with win_g_l as previously shown. For example, if g_1 is assigned a window size win_g_1 = 3, then ∀ s_i ∈ S_1, semantic similarity is computed as SemSim(s_i,S_obj[w_j,...,w_j+3]). Given S_obj, starting from the word having index j=1, we compute v_i = SemSim(s_i, S_obj[w_j,...,w_j+win_g_l]) for every i=1,...,n_l, with s_i ∈ S_l, and for every l = 1,...,k. At the end, we have a set Z with k subsets V_l ∈ Z, each subset V_l containing n_l values v_i ∈ V_l. We then find: v^* = max{{v_i ∈ V_l : i = 1,...,n_l} | V_l ∈ Z, l = 1,...,k}, that is, the highest value among all the v_i computed. If v^* is not above a given threshold th_0, we update j to j+1 (i.e., we move the window forward), and restart the computations. Otherwise, we extract the sentence s^*, the gesture g^*, and the window size win_g^* that yielded v^*. We finally assign the label g^* to the words ⟨ w_j,...,w_j+win_g^*⟩_g^*, add the label to W_p, and update the index j to j+win_g^*. The set of all v^* computed will be called V^* in the following. If j+win_g_l exceeds the sentence length n_obj, g_l is skipped and not considered for the computation of v^*. We continue these steps until we reach the end of the sentence. To determine the window size for each gesture, we run this algorithm over two sets of 300 sentences produced with an OpenAI language model, repeating the process for all possible window sizes (win = 1,...,w_max) and gestures G. We associate each gesture with the window size that returns the maximum V^* - σ_V^*, where V^* and σ_V^* are the average and the standard deviation of all the v^* values computed across all the sentences. We considered as valid only the window sizes that obtained at least 10 v^* values, which must be above the threshold th_0. While the computational cost of finding the best window size increases with the number of gestures and sentences, this process can be computed offline. This significantly reduces the computations needed to label S_obj and makes the method more suitable for real-time applications. 
It is important to note that data is not strictly required: if a set of sentences is provided, then the algorithm computes the best window size for each gesture by using statistics and similarity scores; otherwise, values can be assigned randomly or given according to preferences. A practical example of the Fixed Window algorithm is shown in Figure <ref>. §.§ Moving Window algorithm The Moving Window algorithm does not use a fixed window size for each gesture g_l, with l = 1,...,k. For this reason, we also aim to find the win^* that yields the maximum value v^*. Given a sentence S_obj with n_obj words, starting from index j=1 (j = 1,...,n_obj), we compute v_i,win = SemSim(s_i,S_obj[w_j,...,w_j+win]) with s_i∈ S_l, for each i=1,...,n_l, win = 1,...,w_max, and for every l = 1,...,k. We then compute: v^* = max{{v_i,win ∈ V_l : i = 1,...,n_l, win = 1,...,w_max} | V_l ∈ Z, l = 1,...,k}. If v^* is not above a given threshold th_0, we update j to j+1 and repeat the operations above. Labeling sentences using this method is computationally heavy and may not be suitable for real-time applications. Here, we computed all the possible semantic similarity scores in advance to avoid further complications. Given that we use a variable window size, we added two controls after computing v^*. First, we check if the gesture g^* associated with v^* is consistent by considering additional words in S_obj. This check is performed by calculating, when it is possible, v_check = SemSim(s^*, S_obj[w_j,...,w_j+win^*+1]), where s^* and win^* are respectively the sentence and window size related to the context of g^* that returned v^*. If v^* - v_check > th_1, where th_1 is a threshold chosen heuristically by us, we do not accept the result and repeat the computation of v^* for different gestures, excluding g^*. In short, we try to expand the moving window from win^* to win^*+1 to check if the context changes. This process is repeated p times, where p is a hyperparameter: if we do not find a valid v^* after p tries, we update j to j+1 and move the window forward. To show the necessity of this process, consider the example where v^* = 0.9, s^* = "I love", S_obj = "I love fighting". Even if v^* is high, using a gesture related to "love", such as forming a heart shape with fingers, may not be the best choice. Indeed, by increasing the window size by one, it becomes clear that a gesture to represent "fight", like forming punches in front of the chest, would be more appropriate. In this case, we expect that v_check computed with words ⟨ w_j,...,w_j+win^*+1⟩ and s^* would return a much lower value with respect to v^*, indicating a context change. Secondly, we check if there is another index j_check between j and j+win^* that returns a better result v^*_check > v^*, with s^*_check, win^*_check relative to v^*_check, and that also meets the first check. If such an index is found, we replace the result previously computed with the newer one, and we update j to j_check + win^*_check. Otherwise, we take the previous result and update j to j + win^*. The process is repeated until we reach the end of the sentence. While the computational cost of this algorithm is higher than that of the others, as it increases with the number of reference sentences and with the length of the processed S_obj, it should be noted that each semantic similarity score can be computed independently before finding v^*, making the algorithm highly parallelizable. A practical example of the Moving Window algorithm is shown in Figure <ref>. 
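To make the similarity-based labeling concrete, the following is a minimal sketch of the Fixed Window loop, assuming the sentence-transformers CrossEncoder interface with an STS-tuned RoBerta checkpoint; the reference sentences, window sizes, and threshold below are hypothetical placeholders, and the sketch is an illustration under these assumptions, not the authors' implementation.

```python
from sentence_transformers import CrossEncoder

# Hypothetical reference sets D and precomputed window sizes win_g_l
REFERENCES = {
    "greeting": ["hello, it is nice to see you", "hi, how are you doing"],
    "pointing": ["look at that thing over there", "it is right there"],
}
WINDOW = {"greeting": 2, "pointing": 4}   # assumed found offline
TH_0 = 0.3                                # similarity threshold th_0

model = CrossEncoder("cross-encoder/stsb-roberta-base")  # scores roughly in [0, 1]

def fixed_window_labels(sentence):
    words = sentence.split()
    labels, j = [], 0
    while j < len(words):
        candidates = []                    # (v, gesture, window)
        for g, refs in REFERENCES.items():
            win = WINDOW[g]
            if j + win > len(words):
                continue                   # skip gestures whose window exceeds the sentence
            chunk = " ".join(words[j:j + win])
            v = float(model.predict([(s, chunk) for s in refs]).max())
            candidates.append((v, g, win))
        if candidates:
            v_star, g_star, win_star = max(candidates)
            if v_star > TH_0:
                labels.append((j, j + win_star, g_star, v_star))
                j += win_star
                continue
        j += 1                             # no valid label: move the window forward
    return labels

print(fixed_window_labels("hello my friend look at that thing over there"))
```

The Moving Window variant would additionally iterate over win = 1,...,w_max inside the candidate search and apply the two consistency checks described above before accepting v^*.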
All three algorithms use a maximum window size of w_max = 10 words. This limitation avoids capturing long contexts that are unlikely to represent the gestures. Additionally, we did not address the problem of synchronizing gestures with robot speech and movements, as we did not generate them in real-world scenarios. This problem may require further theoretical considerations, e.g., maintaining or repeating the gesture if its execution time is faster than the time needed to pronounce the labeled words <cit.>. § EXPERIMENTAL SETUP We tested the three algorithms by considering 11 Symbolic gestures from <cit.>,<cit.>, and one Deictic gesture [Code and data are available on the following repository: <https://github.com/arielgj95/Gestures_labeling.git>]. We selected all the gestures common to different cultures from <cit.> and gestures typical of the Italian culture from <cit.>. We then carefully picked a subset of gestures that can be easily reproduced by various social robots with movable hands and fingers, such as Tiago or Alter-ego. We recruited three Italian participants to validate the gestures before the experiments. We showed them all the gestures and the contexts in which they are used, and asked if they recognized and often used the gestures in the described contexts. Finally, we retained only the 12 gestures that received unanimous positive feedback and named them according to the conventions of the reference works <cit.> and <cit.>. The set of the chosen gestures is briefly described in Figure <ref>. It consists of: Greeting, I don't know, No, Yes, Run, Stop <cit.>, I am exulting, I apologize, I beg you, C'mon it's late, I praise you <cit.>, and a Deictic pointing gesture <cit.>. Note that in this experiment we are only interested in the pointing gesture's relation to speech context, regardless of its direction. For each gesture, we also asked four experts in human-robot interaction to write four different sentences that reproduce the contexts in which the gestures are generated, thus obtaining four sets of reference sentences D. These sentences are needed to compute the semantic similarity scores with sub-sentences inside objective sentences. To produce a set of objective sentences S_obj we adopted the following approach. We generated two sets of 300 first-person sentences each by creating specific prompts for OpenAI's "gpt-3.5-turbo-16k" model. The first set, Sentences A, was generated using a single prompt that required the model to produce 300 different sentences, each containing approximately 15 words and one or more contexts related to the considered gestures in various scenarios (school, cinema, etc.). The sentences were designed to be easy to understand, even by non-native speakers. The second set, Sentences B, was generated similarly but used multiple prompts, each related to a specific scenario (work, school, hobbies, etc.). For this set, the model was asked to generate sentences that may or may not include words related to gestures, making the task more challenging and better representative of real-world utterances. We ensured that each sentence was unique. We then recruited 30 native Italian speakers with at least B1 English proficiency to label Sentences A and Sentences B. Participants used a Graphical User Interface (GUI) we created for the task, shown in Figure <ref>, and labeled 30 sentences randomly picked from one set. 
Participants were informed that each sentence could contain no gesture, one gesture, or multiple gestures, and were asked to imagine using appropriate gestures while speaking the sentence. Finally, we asked participants to label words even if the Symbolic or Deictic gestures are not the most representative of the chosen sequence of words. While displaying the GUI, another monitor showed a gif iteratively displaying the gestures and their names, along with Google Translate for translating unknown words if needed. We provided Google Translate because the sentences were generated in English to work better with RoBerta, while all the participants were native Italian speakers. In practice, participants did not use the translation tool, suggesting that the ChatGPT-generated sentences were easy to understand even by non-native speakers. We assume that using English instead of Italian does not significantly affect the results, given that Symbolic and Deictic gestures are strongly related to speech context, which is not language-dependent. We aim to verify this assumption in future work by repeating the experiments with the native language spoken by participants. We finally ran the three algorithms by using only the reference sentences D defined by one expert each time. For this process, we utilized the "base" version of the Cross-Encoder RoBerta model, which we chose over Bi-encoder models <cit.> due to its superior performance on sentence similarity tasks. When testing the algorithms, the "large" version of RoBerta was the best-performing pre-trained model on the STS benchmark, with the "base" version showing similar performance while being significantly smaller and faster in inference. After running the algorithms, we compared the Average Precision (AP), Intersection Over Union (IOU), and Average Computational Time (ACT) for each gesture and each algorithm. These metrics are defined as follows: AP = ∑_i=1^n (R(t_i) - R(t_i-1)) P(t_i), where t_i is the i-th prediction threshold, R(t_i) is the recall at threshold t_i, P(t_i) is the precision at threshold t_i, and R(t_i-1) is the recall at the previous threshold t_i-1; IOU = Area of Overlap / Area of Union; and ACT = (∑_i=1^N T_i) / N, where T_i is the time taken to process the i-th sentence and N is the number of processed sentences. We ran all the algorithms on the same PC under identical conditions. We repeated the previous steps three times by selecting each time a different th_0 value among {0.3, 0.6, 0.9}. We also tested how the algorithms perform when only one sentence for each gesture is randomly selected for each expert. To compute AP, we set the minimum IOU threshold to consider a prediction valid to 0.5. For computing the total IOU for each gesture, we set the minimum acceptable similarity score for each predicted label to 0.5. § RESULTS We computed AP (in %), IOU (in %), and ACT (in seconds) for all the possible th_0 values in the set {0.3, 0.6, 0.9}, using the four D provided by experts, the two sets of sentences generated by exploiting an OpenAI model, and the "base" version of the Cross-Encoder RoBerta model. We show in Table <ref> the AP, IOU, Mean AP, Mean IOU, and ACT values obtained by averaging the results of the four experts with th_0 = 0.3. 
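For clarity, the span-level IOU used in this evaluation can be illustrated with a small sketch on word-index ranges; the helper below is hypothetical and only mirrors the definition above, it is not the evaluation code used in the experiments.

```python
def span_iou(pred, gt):
    """IOU between two labels given as (start, end) word indices, end exclusive."""
    overlap = max(0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - overlap
    return overlap / union if union else 0.0

# A prediction counts as a true positive for AP only if its IOU with a
# ground-truth label of the same gesture is at least 0.5.
print(span_iou((2, 5), (3, 6)))  # 0.5
```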
Mean AP and Mean IOU, represented in the last column of the table, are defined as the average AP and IOU, respectively, over all the possible gestures, and then averaged again over the results of the four D sets. From Table <ref>, we observe significant differences in ACT values among the three algorithms: the Baseline algorithm labels sentences almost instantly, the Fixed Window algorithm requires about 3s, while the Moving Window algorithm needs more than 80s for both Sentences A and Sentences B. The higher computational cost of the Moving Window algorithm is due to the need to compute all possible semantic similarity scores before labeling, whereas the Fixed Window algorithm computes only the necessary scores at runtime. The Baseline algorithm, instead, assigns labels with a statistical approach, with similarity scores assigned randomly, without requiring any computation by RoBerta. By comparing the Mean IOU and Mean AP of the Baseline algorithm with the others, we show that semantic information is meaningful to identify when Symbolic and Deictic gestures may be generated. We also obtained higher Mean IOU and Mean AP values when labeling Sentences A instead of Sentences B with the Moving Window and the Fixed Window algorithms, although AP and IOU for individual gestures are not always higher. There is also a noticeable variability in AP and IOU scores related to different gestures using the same algorithm. For example, the Greet gesture achieved 70.96 AP between ground truth and predicted labels of Sentences A when using the Moving Window algorithm, while the No gesture obtained only 0.30. This discrepancy is not due to different label frequency, as volunteers labeled No and Greet a similar number of times (45 vs. 42). This suggests that there is significant variability and possible ambiguity in how people associate Symbolic and Deictic gestures with contexts, even when they belong to the same culture. To verify this fact, we combined all Sentences A and Sentences B labeled multiple times by different participants, and recomputed AP and IOU scores to check for consistency. The results, shown in Table <ref>, are similar to those obtained with the Fixed Window and Moving Window algorithms. Although AP and IOU scores are generally slightly higher, their values remain low, especially when compared to the Greet gesture, which achieved the highest IOU (71.43) and AP (25.11) scores. The Mean IOU was 22.88 while the Mean AP was 4.68. This outcome supports the need for additional metrics to better validate our findings. Increasing th_0 to 0.6 slightly worsened the results. For instance, using the Fixed Window algorithm with Sentences A, Mean AP dropped to 6.32, Mean IOU to 5.36, and ACT increased to 6.14s. With the Moving Window algorithm and Sentences A, Mean AP decreased to 8.57 and Mean IOU to 9.92, while ACT remained the same due to the need to compute all the possible semantic similarity scores. The Fixed Window algorithm's increasing ACT and decreasing Mean AP and Mean IOU values indicate that it often fails to find labels with a similarity score greater than th_0, leading to more computations to label the sentence. This is confirmed by evaluating the same algorithm with th_0 = 0.9: all scores decreased except ACT, which increased to 7.32s. Finally, when selecting a random sentence to represent the gesture's context, both algorithms showed a decrease in ACT values. For Sentences A with the Moving Window algorithm and th_0 = 0.3, ACT dropped from an average of 83.78s to 20.84s. 
With the Fixed Window algorithm, ACT dropped to 1.06s. However, the Fixed Window algorithm also led to a significant decrease in Mean AP, going from 7.63 to 0.82, and Mean IOU, going from 7.42 to 3.57. The Moving Window algorithm showed a less significant drop, with Mean AP decreasing from 9.5 to 7.29 and Mean IOU from 11.73 to 6.49. This indicates that the deeper analysis performed by the Moving Window algorithm allows it to better recognize similarities, even when using a single sentence to represent the context in which a gesture is generated. § CONCLUSIONS In this paper, we presented a rule-based approach leveraging semantic similarity scores to link Symbolic and Deictic gestures with word sequences inside objective sentences. We developed three algorithms: a baseline that labels sentences without semantic consideration, one that uses precomputed fixed windows for fast labeling, and another that tries different windows to better identify semantic dependency. These algorithms require no training data, can be easily controlled and expanded with additional gestures, integrated with other methods, and are suitable for parallel processing to meet real-time constraints. Our approach also has several limitations. Firstly, while the comparison with ground truth labels revealed that semantics plays a key role in identifying Symbolic and Deictic gestures, the association between speech context and these gestures varies significantly across different gestures. This suggests the need to consider additional factors for more accurate identification. Secondly, although AP and IOU metrics provide some performance insights, they do not fully validate the quality of the produced labels. The complexity of the considered problem and the idiosyncratic nature of gestures make AP and IOU values highly variable, even when comparing labels produced by people of the same culture. We plan to explore better subjective and objective metrics in future work. Additionally, our methods rely heavily on the reference sentences provided by the designer, assuming complete and non-overlapping context representation. More advanced LLMs and different rule strategies may enhance performance and better manage ambiguities. Scalability is another challenge; our current methods are suited for a limited number of gestures and reference sentences. In future work, we aim to use Bi-encoders to store embeddings of reference sentences and commonly used sentences for better scaling, and eventually compare the results. Finally, we plan to test our algorithms with other cultures for better validation and implement them within hybrid rule-based and data-driven architectures to generate a wide variety of gestures in artificial agents. We will eventually measure performance using more commonly used metrics to facilitate comparison with other state-of-the-art works. 
§ ACKNOWLEDGMENT This work was carried out within the framework of the project "RAISE - Robotics and AI for Socio-economic Empowerment" Spoke 2 and has been supported by the European Union – NextGenerationEU. 
c1 R. M. Krauss, Y. Chen, P. Chawla, Nonverbal behavior and nonverbal communication: What do conversational hand gestures tell us?, Advances in experimental social psychology, Academic Press, 1996, p. 389-450. c2 S. Buisine, S. Abrilian, J. C. Martin, Evaluation of Multimodal Behaviour of Embodied Agents: Cooperation between Speech and Gestures. From brows to trust: evaluating embodied conversational agents, 2004, p. 217-238. c3 N. C. Krämer, N. Simons, S. Kopp, The effects of an embodied conversational agent's nonverbal behavior on user's evaluation and behavioral mimicry, Intelligent Virtual Agents: 7th International Conference, IVA 2007 Paris, France, September 17-19, 2007 Proceedings 7, Springer Berlin Heidelberg, 2007, p. 238-251. c4 M. Salem, F. Eyssel, K. Rohlfing, S. Kopp, F. Joublin, To err is human (-like): Effects of robot gesture on perceived anthropomorphism and likability, International Journal of Social Robotics, 5, 2013, p. 313-323. c5 P. Bremner, A. G. Pipe, C. Melhuish, M. Fraser, S. Subramanian, The effects of robot-performed co-verbal gesture on listener behaviour, 2011 11th IEEE-RAS International Conference on Humanoid Robots, 2011, p. 458-465. c6 J. R. Wilson, N. Y. Lee, A. Saechao, S. Hershenson, M. Scheutz, L. Tickle-Degnen, Hand gestures and verbal acknowledgments improve human-robot rapport, Social Robotics: 9th International Conference, ICSR 2017, Tsukuba, Japan, November 22-24, 2017, pp. 
334-344. c7 D. McNeill, Hand and mind: What gestures reveal about thought, University of Chicago press, 1992. c8 P. Ekman, W. V. Friesen, The repertoire of nonverbal behavior: Categories, origins, usage, and coding. semiotica, 1969, 1.1: 49-98. c9 S. Kita, Cross-cultural variation of speech-accompanying gesture: A review, Language and cognitive processes, 2009, 24.2: 145-167. c10 D. Matsumoto; H. C. Hwang, Cultural similarities and differences in emblematic gestures, Journal of Nonverbal Behavior, 2013, 37: 1-27. c11 H. Zhang, C. Yu, and A. Tapus, Towards a Framework for Social Robot Co-speech Gesture Generation with Semantic Expression, International Conference on Social Robotics, 2022, pp. 110-119. c12 D. Cer, M. Diab, E. Agirre, I. Lopez-Gazpio, L. Specia, Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation, arXiv preprint arXiv:1708.00055, 2017. c13 Y. Liu, et al., Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019. c14 I. Poggi, Symbolic gestures: The case of the Italian gestionary, Gesture, 2002, 2.1: 71-98. c15 J. Pages, L. Marchionni, F. Ferro, Tiago: the modular robot that adapts to different research needs, International workshop on robot modularity, IROS, 2016. c16 G. Lentini, et al., Alter-ego: a mobile robot with a functionally anthropomorphic upper body designed for physical interaction, IEEE Robotics & Automation Magazine, 2019, 26.4: 94-107. c17 J. Cassell, H. H. Vilhjálmsson, T. Bickmore, Beat: the behavior expression animation toolkit, Proceedings of the 28th annual conference on Computer graphics and interactive techniques, 2001, pp. 477-486. c18 A. K. Pandey, R. Gelin, A mass-produced sociable humanoid robot: Pepper: The first machine of its kind, IEEE Robotics & Automation Magazine, 2018, 25.3: 40-48. c19 V. Ng-Thow-Hing, P. Luo, S. Okita, Synchronized gesture and speech production for humanoid robots, 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 2010, pp. 4617-4624. c20 G. Ali, M. Lee, J. I. Hwang, Automatic text‐to‐gesture rule generation for embodied conversational agents, Computer Animation and Virtual Worlds, 2020, 31.4-5: e1944. c21 J. Pennington, R. Socher, C. D. Manning, Glove: Global vectors for word representation, Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), 2014, p. 1532-1543. c22 Y. Ferstl, M. Neff, R. Mcdonnel, ExpressGesture: Expressive gesture generation from speech through database matching, Computer Animation and Virtual Worlds, 2021, 32.3-4: e2016. c23 Y.J. Chae, C. Nam, D. Yang, H. Sin, C. Kim, S. K. Park, Generation of co-speech gestures of robot based on morphemic analysis, Robotics and Autonomous Systems, 2022, 155: 104154. c24 I. Habibie, M. Elgharib, K. Sarkar, A. Abdullah, S. Nyatsanga, M. Neff, C. Theobalt, A motion matching-based framework for controllable gesture synthesis from speech, ACM SIGGRAPH 2022 Conference Proceedings, 2022, pp. 1-9. c25 I. Goodfellow, et al., Generative adversarial nets, Advances in neural information processing systems, 2014, 27. c26 A. Vaswani, et al., Attention is all you need, Advances in neural information processing systems, 2017, 30. c27 J. Ho, A. Jain, P. Abbel, Denoising diffusion probabilistic models, Advances in neural information processing systems, 2020, 33: 6840-6851. c28 A. Gjaci, C. T. Recchiuto, A. 
Sgorbissa, Towards Culture-Aware Co-Speech Gestures for Social Robots, International Journal of Social Robotics, 2022, 14.6: 1493-1506. c29 Y. Yoon, B. Cha, J. H. Lee, M. Jang, J. Lee, J. Kim, G. Lee, Speech gesture generation from the trimodal context of text, audio, and speaker identity, ACM Transactions on Graphics (TOG), 2020, 39.6: 1-16. c30 U. Bhattacharya, N. Rewkowski, A. Banerjee, P. Guhan, A. Bera, D. Manocha, Text2gestures: A transformer-based network for generating emotive body gestures for virtual agents, 2021 IEEE virtual reality and 3D user interfaces (VR), IEEE, 2021. p. 1-10. c31 N. Gao, Z. Zhao, Z. Zeng, S. Zhang, D. Weng, Gesgpt: Speech gesture synthesis with text parsing from gpt. arXiv preprint arXiv:2303.13013, 2023. c32 L. B. Hensel, N. Yongsatianchot, P. Torshizi, E.Minucci, S. Marsella, Large language models in textual analysis for gesture selection, Proceedings of the 25th International Conference on Multimodal Interaction, 2023, pp. 378-387. c33 N. Reimers, I. Gurevych, Sentence-bert: Sentence embeddings using siamese bert-networks, arXiv preprint arXiv:1908.10084, 2019.
Thermal Properties of Current Sheet Plasmas in Solar Flares
Tingyu Gou and Katharine K. Reeves
Center for Astrophysics | Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
§ ABSTRACT The current sheet is an essential feature in solar flares and is the primary site for magnetic reconnection and energy release. Imaging observations feature a long linear structure above the candle-flame-shaped flare loops, which resembles the standard flare model with the current sheet viewed edge-on. We investigate the thermal properties of plasmas surrounding the linear sheet during flares, using EUV observations from the Atmospheric Imaging Assembly (AIA) onboard the Solar Dynamics Observatory (SDO). The differential emission measure (DEM) analyses show evidence of high temperatures in the plasma sheets (PSs), containing hot emissions from only a narrow temperature range, suggestive of an isothermal feature. The sheet's temperature remains constant at different heights above the flare arcade, peaking at around log T = 7.0–7.1; the well-studied 2017 September 10 X8.2 flare is an exception in that the temperature decreases with increasing height and peaks higher (log T = 7.25) during the gradual phase. Most PS cases also hold similar emission measures and thicknesses; while the PS's emissions drop exponentially above the flare arcade, the sheet thicknesses show no significant height dependence across all the measurements. The characteristics of isothermal and steady temperatures suggest balanced heating and cooling processes along the current sheet; in particular, additional heating may exist to compensate for the conductive and radiative cooling away from the reconnection site. Our results suggest a steady and uniform sheet structure on the macroscopic scale that results from flare reconnection. § INTRODUCTION Solar flares are explosive and energetic phenomena in the solar atmosphere, which can release a huge amount of magnetic energy within a short time. The energy release process is usually attributed to magnetic reconnection that occurs at the current sheet in the wake of the eruption <cit.>. The classic flare model in two dimensions <cit.> features a linear structure connecting the bottom of an erupting magnetic flux rope and the tip of candle-flame-shaped flare loops underneath, which shows an eruptive picture with a current sheet viewed from an edge-on perspective <cit.>. During the eruption, magnetic inflows are continuously brought into the current sheet, where magnetic reconnection occurs and produces bi-directional outflows moving upward or downward along the sheet <cit.>. Meanwhile, magnetic free energy stored in the coronal magnetic field is rapidly released and converted into thermal and kinetic energies that are used for plasma heating and particle acceleration in the flare. Thus, the current sheet is an essential feature to power flares, and its properties are important to understand the magnetic reconnection and associated energy release processes. In remote-sensing observations of solar flares, it is currently not possible to observe the electric currents in the corona, but the plasmas surrounding the current layer can give a sense of the current structure itself and thus are directly relevant to the reconnection process. During eruptive flares, imaging observations sometimes feature a thin, linear structure in white light, EUV, or X-ray passbands <cit.>, inside which upward or downward moving outflows are also detected <cit.>. 
These observations resemble the standard flare model and provide clear indications for the existence of a reconnection current sheet viewed edge-on. When an eruption is viewed face-on, the current sheet region appears as a broad fan of diffusive plasmas above the post-flare arcade, where dark voided structures known as supra-arcade downflows are observed to intermittently move toward the flare arcade <cit.>. These observations provide a complementary insight into the flare current sheet. Thermal diagnostics from multi-wavelength observations reveal that the plasmas surrounding the flare current sheet, appearing as either a long thin sheet or a supra-arcade fan, are associated with high temperatures <cit.>. These structures are mostly visible in hot EUV channels such as the 131 Å (primarily from the Fe XXI emission line, log T = 7.05) and/or the 193 Å (with contributions from Fe XXIV, log T = 7.25) channels of the Atmospheric Imaging Assembly <cit.> onboard the Solar Dynamics Observatory <cit.>. Spectroscopic observations of the current sheet region show excess broadening in hot spectral lines associated with non-thermal velocities <cit.>. Although observations of current sheet cases are still rare, these results suggest the existence of heating and turbulent processes within the current sheet, which are immediately driven by magnetic reconnection. Magnetic reconnection in flares can occur in a bursty or fractal fashion with highly time-dependent characteristics. Theoretical and numerical studies demonstrate that the current sheet during fast reconnection can be highly fragmented and turbulent, with continuous formation and injection of small magnetic islands, also termed plasmoids <cit.>. These plasmoids may play an important role in modulating the rate of magnetic reconnection and energy transfer as multiple reconnection sites are generated during the thinning of the current sheet <cit.>. In observations, efforts are made to understand the fast reconnection process in solar flares. Studies using high-resolution imaging from SDO/AIA reveal sub-structures of a flaring plasma sheet where a linear structure breaks up into multiple plasmoids <cit.>, and the ejection and coalescence of plasmoids on various scales suggest a fractal fashion of the current sheet <cit.>. However, direct imaging of plasmoids in flaring current sheets is very rare, much rarer than the cases of current sheet candidates in EUV, although the tearing mode instability is expected to occur as the critical value of the length-to-thickness ratio has been reached <cit.>. On the other hand, the apparent thickness of the current sheet in observations is much greater than that in theoretical and numerical studies <cit.>, which require a microscopic plasma scale to produce the anomalous resistivity. These different features between models and observations call for more detailed investigations of the flare current sheet. In this study, we focus on the long, thin plasma sheet structures observed in EUV during eruptive solar flares. High-resolution imaging by SDO/AIA features a linear plasma sheet (PS) above the flare loops in some flare events, especially for those occurring near the solar limb. Simultaneous multi-wavelength observations of AIA allow us to study the thermal properties of plasmas surrounding the flare current sheet. 
A small number of individual case studies have been reported before, and most of them focus on the extremely long and dense PS feature observed during the 2017 September 10 X8.2 flare owing to its favourable viewing angle and rich observational data from various instruments. In this work, we search for and investigate PSs observed in a collection of several flares that show prominent bright emission features. Our results show that most PS features exhibit similar thermal properties, while the 2017 flare is an exception. Our analyses suggest an isothermal and uniform sheet structure at macroscopic scales in spite of its possible turbulent and fragmented characteristics at the micro level. We present the method and detailed analyses in Sections <ref> & <ref> and discuss the results in Section <ref>. § EVENTS AND METHODS We study the properties of flare PS structures using EUV observations from SDO/AIA. Favorable flare events are selected that show a long, linear feature above post-flare loops in hot AIA passbands such as 131 Å. Such events require an almost edge-on perspective of the AIA instrument where the line of sight is largely parallel to the axis of the post-flare arcade. Favorable observations showing long PS structures are limited. Here we investigate sheet structures nicely observed in five solar flares, which are shown in Figure <ref> and listed in Table <ref>. All of these events are intense X-class, long-duration flares occurring near the solar limb, and they all are eruptive events associated with fast CMEs. During the flares under study, the long PS features are most evident shortly after the soft X-ray (SXR) flux peaks, i.e., during the early decay phase of the long-duration flares (PSs #1–5; Figure <ref>, Table <ref>). The linear structure is sometimes also visible during the flare impulsive phase, connecting the upper tip of cusp-shaped flare loops and the bottom of an erupting flux rope. From two of the five flares, we select two PS cases during the impulsive phase, which are clearly identifiable in AIA 131 Å images and have little overlap with either leg of the erupting flux rope (PSs #6 & 7; Figure <ref>). We also carefully check the EUV images to avoid any overlap between the PSs and the diffraction patterns from the bright flare loops off the mesh of the AIA instrument, the latter of which is always inevitable in such intense flares. The seven PS cases under study are shown in Figure <ref>. SDO/AIA provides high-spatial-resolution (with a pixel size of 0.6″, ∼0.4 Mm) and simultaneous multi-wavelength EUV observations, which makes it possible to study the thermal properties of plasmas surrounding the flare current sheets. We adopt the differential emission measure (DEM) method to diagnose the temperature structure of flare plasmas, using data from six AIA EUV channels, i.e., 131 Å, 94 Å, 335 Å, 193 Å, 211 Å, and 171 Å. To reduce the diffraction pattern of the AIA telescopes and the effects of other stray light, the AIA data are further processed to level 1.6 by deconvolving images with the instrument point spread function (PSF) before the DEM calculation (see, e.g., Figure <ref>). We apply the modified sparse inversion DEM code <cit.>, which effectively constrains hot flare emissions, and calculate emission measures (EMs) in the temperature range of log T = 5.5–7.6 with an interval of Δlog T = 0.05. We generate a temperature map by deriving the EM-weighted mean temperature over the whole temperature range (log T = 5.5–7.6): ⟨T⟩ = ∑_i EM(T_i) × T_i / ∑_i EM(T_i). 
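As a minimal numerical illustration of this weighting (not the authors' code), assuming a DEM inversion has already returned the EM(T_i) values on the log T grid and that the weighting is applied to the linear bin temperatures:

```python
import numpy as np

# Hypothetical DEM output for one pixel: EM per temperature bin [cm^-5]
logT = np.arange(5.5, 7.6 + 0.05, 0.05)
em = np.random.rand(logT.size) * 1e26

T = 10.0 ** logT                          # bin temperatures [K]
T_mean = np.sum(em * T) / np.sum(em)      # EM-weighted mean temperature <T>

# Restricting the sums to a hot sub-range (e.g. logT = 6.9-7.3) yields the
# background-reduced quantities <T>_PS and EM_PS defined in the next paragraph.
hot = (logT >= 6.9) & (logT <= 7.3)
T_ps = np.sum(em[hot] * T[hot]) / np.sum(em[hot])
em_ps = np.sum(em[hot])
print(T_mean, T_ps, em_ps)
```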
The DEM distribution of optically thin plasmas during a flare usually contains more than one component, where the cooler one is mostly contributed by the foreground and background and the hotter one is from the flaring plasma itself <cit.>. To reduce the effects of background contributions, we define a mean temperature of the PS structure, ⟨T⟩_PS, by focusing only on the hot component (T=a–b) of each PS case <cit.>: ⟨T⟩_PS = ∑_i=a^b EM(T_i) × T_i / ∑_i=a^b EM(T_i). Similarly, we calculate the clean emissions from the PS, EM_PS, by summing up EMs from only the hot component: EM_PS = ∑_i=a^b EM(T_i). The specific temperature range of the hot DEM component (T=a–b) for each PS case differs, and they are determined in Section <ref> accordingly. The emissions from the PS structure decrease greatly with increasing height above the flare arcade. To investigate the height distribution, we use exponential functions to fit both the AIA intensity and EM_PS profiles, based on an exponential distribution of the density n_e with height h in the solar corona, n_e(h, T) = n_e0 exp[-h/H(T)], where H(T) is the density scale height. Since the observed emissions sum up all density contributions along the line of sight l, assuming a fully ionized plasma in the corona, EM(T) = ∫ n_e^2(T,l) dl, we obtain an exponential distribution of the observed AIA intensity (I_AIA) and EM of the form I_AIA ∼ EM ∼ exp[-2h/H]. We investigate the height distribution of PS emissions in terms of the EM scale height H. The flare loop-top region is usually associated with very dense plasmas around the flare peak, where the DEM inversion would fail to obtain reasonable solutions owing to its assumption of an optically thin corona. In addition, to better resolve the long PS structures, we use AIA 131 Å data observed with long exposures, which are often saturated in flare loops with diffraction patterns (e.g., Figure <ref>b). The DEM results from these regions should also be ignored. Here in our study, we only focus on the PS region above the flare loop-top. For better visualization, we rotate the AIA maps to place the PS structures either horizontally or vertically in our analyses in the following sections. § ANALYSES AND RESULTS §.§ 2013 May 13 X2.8 flare (PS#1) The 2013 May 13 X2.8 flare occurs in NOAA AR 11748 at the northeast solar limb. The event starts at 15:48 UT and peaks at 16:05 UT, followed by a long-duration (>4 hrs) gradual phase. The impulsive phase of the flare is associated with the eruption of a magnetic flux rope, which is observed in AIA 131 Å as a hollow, elliptical feature with its bottom connecting to the cusp-shaped flare loops underneath by a linear structure <cit.>. The eruptive picture is in good agreement with the standard flare model, and the AIA observation provides a nice edge-on view of the limb flare. Shortly after the flare SXR peak, a thin PS appears above the post-flare loops. The PS feature is observable in AIA 131 Å for more than two hours, where a number of supra-arcade downflowing loops are detected as a signature of ongoing magnetic reconnection during the long-duration flare gradual phase <cit.>. §.§.§ Temperature Range We study properties of the PS at 16:32 UT when it is most evident in AIA (Figure <ref>). The long thin feature is most visible in AIA 131 Å and partially in 193 Å, indicative of hot properties. 
This characteristic is evidenced by the DEM results in Figure <ref>, where the linear PS feature is absent in EM maps with temperatures log T < 6.9 and only visible at log T = 6.9–7.3, while only the cusp tip of flare loops appears at temperatures of log T > 7.3. We further compare the DEM distributions sampled from the PS and nearby reference locations (Figure <ref>g,h). The DEM profiles show that the PS samples have cool EM components at log T < 6.7 similar to the references, but contain a much higher EM peak at log T = 6.9–7.3, providing further evidence that the PS feature in the flare contains mostly hot plasmas, and the cool DEM components are mainly due to the foreground and background emissions. Thus, the DEM results evidence a narrow temperature range of hot plasmas associated with the long sheet, i.e., log T = 6.9–7.3 for this case (PS#1), which suggests a relatively isothermal feature. §.§.§ PS Length Based on the PS's temperatures determined from the DEM analysis, i.e., the narrow temperature range of the hot DEM component, we calculate the pure emission EM_PS and the mean temperature ⟨T⟩_PS for the PS; for the whole region we calculate the conventional mean temperature (Figure <ref>). The EM and mean temperature maps show a linear structure above the post-flare loops similar to the AIA 131 Å observation. To investigate the PS properties at different heights, we plot the AIA intensity, EM_PS, and ⟨T⟩_PS at a virtual slit along the PS structure; another parallel slit is placed nearby as a reference (the two dotted lines in Figure <ref>a–c). The AIA 131 Å intensity is strong at the bottom of the PS with emissions from overlapping flare loops, and then decreases smoothly with increasing height, giving a scale height of H=44 Mm. The EM_PS profile decreases sharply at first, but reaches a plateau after x≈35 Mm with an EM of ∼2×10^27 cm^-5, and finally drops to the background level of about 10^26 cm^-5. We use two exponential functions to fit the two-episode EM variation separately, which exhibit scale heights of ∼20 and 60 Mm, respectively, suggestive of different density distributions at different heights. The PS mean temperature ⟨T⟩_PS (the black curve in Figure <ref>f) is about 12–14 MK along the PS and is significantly higher than the conventional mean temperature (about 6–8 MK) in Figure <ref>c, the latter of which underestimates the PS temperature by including significant cool foreground and background emissions along the line of sight <cit.>. For the reference BG, the missing emissions in AIA 131 Å at around x=10 Mm, and the resultant zeros in EM and mean temperature (gray curves in Figure <ref>d–f; also Figure <ref>c), result from the removal of diffraction patterns by PSF deconvolution (Figure <ref>). To investigate the detailed temperature structure at different heights, we plot the DEM distribution over temperature as a function of distance along the PS (Figure <ref>). Only the temperature range of log T = 5.8–7.6 is plotted in the figure, as cool EMs from log T < 5.8 are much fewer (Figure <ref>g,h). Comparing the DEM distributions from the PS and nearby BG, the cool EM components at log T = 6.1–6.7 are similar in magnitude, but the hot EM component only appears along the PS. Below the PS (x≲12 Mm), the hot component contains EMs from a wider temperature range (Figure <ref>a), which includes contributions from overlying flare loops and the cusp tip (Figure <ref>c,f). 
The most interesting feature is that the temperature remains almost the same along the PS, i.e., EMs are all from the narrow temperature range of log T = 6.9–7.3 (between the two horizontal dashed lines in Figure <ref>a), although at x≳70 Mm the temperature drops a little where the AIA intensity is low at the upper tip. According to the well-defined temperature range of hot plasmas, we obtain the length of the PS, which is about 70 Mm as indicated by the two vertical lines in Figure <ref>a. The measured length from the DEM distribution agrees with the variations of the AIA intensity and EM_PS along the PS, which are distinctly higher than the BG level (Figure <ref>d,e). Considering that the upper part could be too faint to observe, the real length of the PS is likely longer. By summing up the total EMs along the whole length of the PS structure (within the two vertical lines in Figure <ref>a), we obtain a peak temperature of log T = 7.1 (about 12.6 MK; Figure <ref>b) for the PS, in agreement with ⟨T⟩_PS in Figure <ref>f. The EM_PS decreases from about 3×10^28 to 2×10^26 cm^-5 along the PS, which corresponds to a plasma density of about 1.6×10^9–1.3×10^8 cm^-3, assuming a line-of-sight depth of ∼120 Mm based on stereoscopic observations from two satellites <cit.>. §.§.§ PS Thickness We also investigate the apparent thickness of the PS structure. We place four parallel slits at different heights across the PS and plot the profiles of the AIA 131 Å intensity, EM_PS, and mean temperature (Figure <ref>). The PS is clearly associated with enhanced emissions and temperatures similar to a Gaussian shape. To measure the PS thickness observed in AIA 131 Å, we use a Gaussian function plus a second-order polynomial to fit the intensity profile, the latter of which indicates background emissions off the PS <cit.>. We use the 2σ width of the Gaussian fit as the thickness of the PS (blue shaded regions in Figure <ref>), where the intensity is significantly higher than the background. The PS thickness observed in AIA 131 Å is about 3–5 Mm in this case, and the value is generally smaller at a larger height. We plot the DEM distribution over temperature as a function of distance across the PS (Figure <ref>). We can always see the background EM component at temperatures of log T = 6.1–6.7, which changes little along the slits. The most prominent feature is the narrow EM component in the temperature range of the PS, log T = 6.9–7.3, whose two edges give a good estimation of the PS thickness by containing only hot plasmas. We similarly use a Gaussian function plus a second-order polynomial to fit the EM_PS curve and obtain the 2σ width of the Gaussian fit (in green in Figure <ref>), where the fitted width is consistent with the location of the narrow, hot EM component from the PS (Figure <ref>a–d). The thickness measured from the DEM results is about 3–4 Mm, slightly smaller than that from the AIA 131 Å intensity, and shows small differences at different heights. §.§ 2017 September 10 X8.2 flare (PS#2) The X8.2 flare on 2017 September 10 occurs in AR 12673 above the solar west limb, and exhibits a `textbook' eruption bearing a striking resemblance to the standard picture. A long sheet structure forms in the wake of an erupting flux rope, and the formation and dynamics of the PS feature have been studied in a number of papers using various observations from different instruments, including multi-wavelength imaging and EUV and radio spectroscopy <cit.>. 
The PS structure during the gradual phase is observed to extend beyond the AIA's FOV <cit.>, and it is visible in all six EUV channels of AIA due to continuum emissions <cit.>. We use DEM analysis to study the PS structure observed at 16:41 UT and show the results in Figures <ref> & <ref>. We similarly place two parallel virtual slits on the PS structure and its nearby location, respectively, to examine the temperature structure and its distribution with height (Figure <ref>). We skip the saturated flare loops underneath, and the slit is completely on the PS structure itself, which has a length of >120 Mm in AIA's FOV. The DEM distribution of the PS shows a single hot component from temperatures of log T = 6.85–7.5, which we use as the temperature range of PS#2, and contains few emissions from lower temperatures (Figure <ref>g,h); the nearby BG instead shows a cooler EM component from log T = 6.7–7.1 (the gray curve in Figure <ref>h). The DEM distribution is consistent with the results in <cit.> (e.g., Figure 8) using a different DEM inversion method. The narrow temperature range of EM suggests an isothermal feature of the plasmas surrounding the flare current sheet. By examining the DEM distribution at different heights along the PS, one can see that the plasma temperatures decrease slightly from log T = 7.1–7.5 to 6.85–7.2 with increasing heights above the flare arcade (Figure <ref>g). This result is different from the 2013 May 13 flare, which shows constant temperatures at different heights (PS#1 in Figure <ref>). The integration of EMs along the PS gives a peak temperature of log T = 7.25 for the plasmas, which is generally hotter than in the previous case (PS#1). The EM_PS varies between 2×10^29 and 10^27 cm^-5 (Figure <ref>e), which is one order of magnitude denser than PS#1. The EM_PS distribution at different heights first decreases sharply following an exponential scale height of H=32 Mm, while the second part (e.g., x>70 Mm) shows a much slower descent with increasing height (H=102 Mm). The fast-to-slow descending trend in EM agrees with the results in <cit.> for the same event, but differs from that of the AIA 131 Å intensity, which decreases smoothly all the way along the PS (Figure <ref>d). We further measure the PS's thickness by placing four parallel slits at different heights across the PS and fitting the AIA 131 Å and EM_PS profiles (Figure <ref>). The DEM distribution shows constant background emissions at log T = 6.75–7.0, and hotter emissions only from the PS feature; the latter shows decreasing temperatures with increasing heights. The thickness of the PS is measured as about 6–9 Mm in AIA 131 Å intensity, and about 6–7 Mm in DEM, which is twice the value in the previous case. For comparison, <cit.> measured the PS's thickness as ∼10 Mm based on the total EMs and <cit.> measured it as 7–11 Mm from the spectral line intensity and non-thermal broadening for the same flare at different times. §.§ 2023 February 17 X2.3 Flare (PS#3) The X2.3 flare on 2023 February 17 occurs in AR 13229 near the solar northeastern limb. The observational perspective of AIA is not perfectly along the axis of the flare arcade but has a slight tilt, thus giving a morphology of the PS different from an ideal linear feature. During the flare gradual phase, the emissions above the flare arcade in AIA 131 Å exhibit several bright spikes surrounded by less bright plasmas (PS#3 in Figure <ref>), forming a supra-arcade fan similar to those viewed face-on. Therefore, this event provides a complement to the other edge-on cases. 
We focus on one of the longest spikes observed at 20:45 UT, which serves as a good representative of the PS feature above the flare loops. We perform DEM analysis for this PS case and plot the results in Figures <ref> & <ref>. Since the structure curves slightly as it extends upward, we use two parallel curved slits to sample the PS and its nearby reference location. The DEM distribution from the PS shows two EM components from temperatures of log T = 6.1–6.6 and 6.9–7.4, respectively, which generally agrees with that from the BG but shows a much stronger hot EM component than the BG (Figure <ref>g,h). This high-temperature enhancement is also evidenced in the DEM distributions from the sample locations across the PS (Figure <ref>d), which show two EM components with similar temperatures all across the PS but with more hot emission in the middle. The DEM distributions suggest that the plasma structure above the flare arcade holds similar temperatures, and the bright spike contains dense hot plasmas at temperatures of log T = 6.9–7.4. Another notable feature is that the temperatures are similar at different heights along the PS (Figure <ref>g), which is consistent with the 2013 flare case (PS#1; Figure <ref>) but not the 2017 one (PS#2; Figure <ref>). For the PS's length in this case, the lower end of the structure is difficult to determine, as it overlaps the flare loop-top under an oblique viewing angle and the temperatures are also similar. We generally select a sudden drop in the curve as the start of the PS and measure the observed length in AIA as about 118 Mm (Figure <ref>). The total EM along the PS exhibits a peak temperature of log T = 7.1, in agreement with the mean temperature (Figure <ref>f,h). Unlike the temperature, the AIA 131 Å intensity and EM descend steeply along the PS. For this case, the EM falls off with height in a smooth trend similar to the AIA 131 Å intensity, different from the fast-to-slow decline in PSs #1&2. The hot EMs from the PS are of the same magnitude as those of PS#1. The PS's thickness, measured by placing four parallel slits across the structure, is about 3–6 Mm (Figure <ref>). §.§ 2014 February 25 and 2013 May 14 Flares We study two more flares which show a long PS feature during the gradual phase. The X4.9 flare on 2014 February 25 occurs near the solar southeast limb (PS#4; Figure <ref>). Due to a slightly oblique view angle, the sheet structure above the flare arcade appears as a three-dimensional (3D) feature with some faint emissions on either side of a linear structure. The formation and development of the PS in this flare were studied in detail in <cit.>. We investigate the long PS feature observed at 01:08 UT shortly after the flare peak. The DEM analysis shows that the emissions of the PS come from temperatures of log T = 6.8–7.25 with a peak at log T = 7.0, and the temperatures generally remain constant at different heights along the PS (Figure <ref>). The EM and AIA intensity fall off with height on similar scales. The measured length is about 162 Mm (Figure <ref>) and the thickness is about 2–6 Mm (we do not show the thickness-measurement figures for this case and the following ones, but all measurements are presented in Section <ref>). The X3.2 flare on 2013 May 14 occurs near the solar northeast limb in AR 11748 (PS#5; Figure <ref>), in the same AR as the 2013 May 13 flare (PS#1) but several hours later. The flare also exhibits a linear PS feature above the post-flare arcade during the gradual phase, although the view angle is slightly more oblique than for PS#1.
For this case, the diffraction pattern and overlapping loops have an impact on the PS feature, but its properties are still accessible. The DEM analysis shows that the temperature of PS is in the narrow range of =6.9–7.3 and peaks at =7.05 (Figure <ref>). The temperature is also similar at different heights along the PS, while the upper part seems too faint to be resolved. Since the bottom of PS overlaps largely on flare loops, we only fit the upper portion of the curve, which holds a scale height of ∼70 Mm. We obtain the PS's length of about 99 Mm and the thickness of about 2–5 Mm. §.§ PSs During Flare Impulsive Phase We study the PS feature observed during the impulsive phase of flares and compare with that during the gradual phase. The 2017 September 10 flare during the impulsive eruption exhibits a thin, linear PS feature in AIA 131 Å, which connects the bottom of an erupting flux rope and the tip of cusp-shaped flare loops underneath, resembling the standard flare model (PS#6; Figure <ref>). We analyze the PS structure observed at 15:55 UT when the trailing part of the erupting flux rope, an inverted-Y shape, is still visible in the AIA's FOV. We obtain a narrow temperature range of =6.9–7.25 and a peak temperature of =7.05 for the PS (Figure <ref>), which is generally cooler than that during the gradual phase of the same flare (PS#2; Figure <ref>). The temperatures for this case remain almost constant at different heights along the PS, which is also different from PS#2 during the gradual phase showing decreasing temperatures with increasing heights. The PS's length measured between the cusp tip of flare loops and the bottom of inverted-Y shape is about 104 Mm, which gives a complete length of the current sheet region. The PS's thickness is about 1–4 Mm, significantly thinner than during the gradual phase. The EMs for the PS are between ∼4×10^25 and 2×10^27 cm^-5, which is about two orders smaller in magnitude than PS#2. Unlike the gradual phase cases, the EM and AIA 131 Å intensity during impulsive phase do not show an exponential drop-off over height, but exhibit fluctuations while descending slowly, suggestive of nonuniform plasma densities along the sheet (Figure <ref>d,e). During the impulsive phase of the 2023 February 17 flare, a thin PS feature is observed in AIA 131 Å in the wake of the eruption of a plasma cloud and connects to the cusp-shaped flare loop system underneath (PS#7; Figure <ref>). We study the PS structure observed at 20:06 UT before the peak in SXR flux of the flare. The DEM distribution shows PS emissions from temperatures at =6.75–7.3 with a peak of at =7.0 (Figure <ref>), which is slightly cooler than that during the gradual phase of the same flare (PS#3; Figure <ref>). The surrounding locations also contain a hot EM component from temperatures of =6.7–7.1 (the gray curve in Figure <ref>h). The temperatures are generally similar at different heights along the PS, with a slight enhancement near the bottom. We measure the length as about 77 Mm and a thickness of about 1–3 Mm, showing that the plasma sheet is thinner than during the gradual phase. The EMs are about 10^26–10^28 cm^-5, which is similar to the gradual phase case but could also include the contribution from a cloud of hot plasmas surrounding the PS feature (see e.g., Figure <ref>b). Fluctuations are clearly seen in and AIA intensity profiles. 
For example, a notable peak occurs at the slit position of x=15-30 Mm in both AIA and (Figure <ref>d,e), which is also associated with an enhancement in temperature (Figure <ref>f,g). § DISCUSSION We investigate seven PS features observed during five solar limb flares and analyze their physical properties, including the temperature structure, emission measure, length, and thickness. By isolating cool emissions from the foreground and background, we are able to obtain the accurate temperature and emissions from the PS itself, and measure its parameters based on the pure emissions. We list a summary of the measurements in Table <ref>. The long, thin PS features are basically only prominent in hot AIA channels, and our DEM analyses provide evidence of high-temperature plasma emissions associated with PSs. All the PS structures under study almost contain only a narrow temperature range of hot emissions, suggestive of an isothermal feature for the plasmas surrounding the flare current sheet. Most of the PS features (6 out of 7 cases) exhibit similar properties, which are associated with a high temperature of log T_ peak=7.0–7.1, a EM range of 10^26-10^28 cm^-5, and a mean thickness of 2–4 Mm. The only different case is PS#2, the one observed during the gradual phase of the 2017 September 10 flare, which is distinctly hotter (log T_ peak=7.25), denser (one magnitude higher in ), and thicker (∼7 Mm) than the others. We note the 2017 September flare also holds a much higher level of peak SXR flux (GOES-class X8.2) and a much faster CME (>3000 km/s) than other cases (Table <ref>). Another notable feature is that the PS structures exhibit almost constant temperatures at different heights, except for, again, the PS#2, whose temperature declines slightly with an increasing height above the post-flare loops. The flares under study include those having slight different view angles from the edge-on perspective (e.g., PSs #3, #4, #5), which do not show a perfect linear feature like PS#1 and PS#2 but exhibit as an interesting 3D structure surrounded by some faint emissions in the bottom (Figure <ref>). Our results show PS features with similar temperatures in spite of the differences in view angle and line-of-sight depth. Our study also includes two homologous flares from similar locations on the Sun (PSs#1&5, both from AR 11748), which hold very similar properties including temperatures, EMs and thicknesses, in the context that they share similar magnetic configurations. Comparing the PSs observed during different phases of flares (PSs in the 2017 and 2023 flares), one can see that the sheet structure during the impulsive phase is generally cooler and thinner than that during the gradual phase of the same flare. We discuss the results in the following sections. §.§ Temperature Structure and Plasma Heating By performing DEM diagnosis on the flare plasmas and comparing PS with the nearby locations, we find that the PS feature mostly contains hot emissions from a narrow temperature range. These results suggest that the plasmas surrounding the flare current sheet exhibit an isothermal temperature with all plasmas being heated up into above 10 MK. <cit.> presented similar isothermal feature for the PS during the 2017 September 10 flare (PS#2 in our study) by examining the EM loci curves for background-excluded intensities of each AIA channels and EIS spectral lines (their Figure 8). 
Since the 2017 flare (PS#2) is the only case in which the PS is observable in all six AIA EUV channels, while most PSs are only evident in 131 Å, the EM-loci approach is difficult to apply to the other cases. On the other hand, it is observationally difficult to extract the exact PS emissions by excluding the overlapping foreground and background along the line of sight. In our study, by carefully comparing the DEM distribution of the PS with its nearby locations (we placed slits both parallel and perpendicular to the PSs for comparison), we obtain the same results. Our analyses provide evidence of an isothermal temperature for all the PS cases under study. By examining the DEM distribution of PSs with height, we find that most of the PS structures show almost constant temperatures at different heights above the post-flare loops. This result suggests a long, uniform structure, with a balance of plasma heating and cooling at different heights along the PS. The 2017 September 10 flare (PS#2) is the exception in that its temperature declines smoothly with increasing height, which agrees with previous results <cit.>. Considering the existence of significant conductive and radiative cooling processes, the isothermal or even increasing temperature in the plasma sheets suggests additional plasma heating processes occurring in the current sheet region, such as retracting magnetic fluxes associated with reconnection downflows <cit.>, global compression from reconnection inflows <cit.>, local heating from supra-arcade downflows within the plasma sheet <cit.>, or suppression of conductive cooling due to turbulence <cit.>. We also find that in some cases the tip of the cusp-shaped flare loops underneath contains hotter emissions than the PS (PSs #1, #4, #6, and #7), which could be attributed to heating by a pair of slow-mode shocks attached to the reconnection region <cit.>. This difference in temperature distribution can help distinguish the PS feature from the cusp of the flare loops, which are often mixed together in imaging observations. By isolating the PS emissions from the foreground and background contributions, we can better characterize the temperature of the PS feature, using either the mean temperature weighted only by the hot DEM component or T_peak of the hot component; the conventional mean temperature weighted by all DEM components can significantly underestimate the PS's temperature when the cool background emissions dominate (shown in the temperature maps). Since we obtain almost the same temperature range of hot emissions along the whole PS (except PS#2), it is reasonable to use the peak temperature of this hot component integrated along the whole length as the PS's temperature (in Table <ref>). All the temperatures we obtained are about log T_peak=7.0–7.1, which is very close to the formation temperature of the Fe XXI emission line and agrees with the AIA 131 Å observations. PS#2 exhibits decreasing temperatures with height, so T_peak of all emissions along it gives a rough estimate that we can use to compare with the other cases. The peak temperature of PS#2 is significantly higher than in the other cases and is similar to the formation temperature of the Fe XXIV emission line, which dominates the AIA 193 Å channel during a flare.
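For reference, one common convention (our notation here; the exact definition used for the figures may differ) for the hot-component-weighted mean temperature mentioned above is the DEM-weighted average restricted to an interval [T_1, T_2],

\bar{T} \;=\; \frac{\int_{T_1}^{T_2} \mathrm{DEM}(T)\, T \, dT}{\int_{T_1}^{T_2} \mathrm{DEM}(T)\, dT},

so that taking [T_1, T_2] to span only the hot PS component gives the hot-component-weighted value, while letting the integrals run over all temperatures recovers the conventional mean temperature that is biased low when the cool background DEM dominates.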
The PSs observed during the flare impulsive phase are slightly cooler than those during the gradual phase (the 2017 and 2023 flare cases), suggesting that the plasma heating in the current sheet region is more intense after the flare SXR peak, while during the impulsive phase a large portion of the magnetic free energy is converted into kinetic energy. The temperature characteristics during different flare phases can be addressed in future detailed studies by examining the temporal evolution of PS temperatures <cit.>. §.§ Emission Measure and Plasma Density The bright PS feature observed in AIA demonstrates that it is higher in emission and density than the surroundings. We measured that the pure EMs from the hot PSs are in the range of 10^26-10^28 cm^-5 for most cases (4 out of 5 flares), which corresponds to electron densities of 10^8-10^9 cm^-3 assuming a line-of-sight depth of 100 Mm for these X-class flares (taking the 2013 May 13 flare as a reference, which is ∼120 Mm). The PS structures during the 2017 September 10 flare show emissions one order of magnitude lower (higher) in the impulsive (gradual) phase than the other cases, suggestive of varying plasma densities adhering to the reconnection current sheet, which is unusually dense during the gradual phase. The PSs during the flare impulsive phase hold much smaller EMs than those during the gradual phase (Table <ref>; if we consider the superposed hot plasma cloud for PS#7 in Figure <ref>b), which can result from a shorter line-of-sight depth, a smaller electron density, or both. The weak emission of the PS during the impulsive eruption makes it more difficult to detect in remote-sensing observations. The emissions of PSs during the flare gradual phase decrease roughly exponentially with height, by about two orders of magnitude. By fitting the profiles with exponential functions, it is clearly seen that the EM scale heights are much smaller than the scale height expected under hydrostatic equilibrium (about 300 Mm for an isothermal temperature of log T = 7.1). The EMs of PSs #3, #4, and #5 generally drop smoothly with height, showing trends similar to the AIA 131 Å intensities, which can contain significant contributions from the height-dependent background emissions; the EM–height distributions of PSs #1 and #2 are different. We note that PSs #1&2 are viewed almost perfectly edge-on compared with the cases in the other three flares (Figure <ref>); therefore their EMs provide a better estimation of the hot current sheet plasmas. The hot EMs of PSs #1&2 show a clear two-part evolution with height, where the bottom part drops much more steeply than the upper part. In particular, PS#2 (the 2017 September 10 flare) has a scale height for the upper part that is about 70 Mm larger than that of the bottom part, although the temperature of the former (log T = 6.85–7.2) is significantly lower than that of the latter (log T = 7.1–7.5). The small scale height at the bottom suggests an extremely fast increase in plasma density near the bottom of the flare current sheet, which can be caused by downward-moving reconnection outflows that decelerate faster and faster and pile up toward the bottom, considering that the main reconnection site rises above the visible PS in AIA during the flare gradual phase. The emissions of PSs during the flare impulsive phase do not show a smooth distribution with height but exhibit multiple distinct enhancements, in both the AIA 131 Å intensity and EM, and even in temperature (Figures <ref>, <ref>).
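As an illustration of the exponential fitting referred to above, the following minimal Python sketch (synthetic numbers, not the measured profiles) estimates a scale height H by fitting EM(h) = EM_0 exp(-h/H) separately to a lower and an upper height segment, mimicking the two-part behavior of PSs #1 and #2.

import numpy as np

# hypothetical column-EM samples along the sheet: h in Mm, EM in cm^-5
h = np.linspace(0, 120, 25)
em = 2e29 * np.exp(-h / 32.0) + 1e27 * np.exp(-h / 102.0)

def fit_scale_height(h_seg, em_seg):
    # EM(h) = EM0 * exp(-h/H)  <=>  log EM = log EM0 - h/H (linear in h)
    slope, intercept = np.polyfit(h_seg, np.log(em_seg), 1)
    return np.exp(intercept), -1.0 / slope

for name, mask in [("lower part", h <= 70), ("upper part", h > 70)]:
    em0, H = fit_scale_height(h[mask], em[mask])
    print(f"{name}: EM0 ~ {em0:.1e} cm^-5, scale height H ~ {H:.0f} Mm")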
This nonuniformity can be attributed to the existence of multiple plasmoids in the current sheet during fast magnetic reconnection, which is the right case occurring during flare impulsive phase. The plasmoids can have different sizes, experiencing coalescence or further tearing <cit.>. The most notable one in PS#7 shows a length of about 10 Mm (Figure <ref>), which is larger than the PS's thickness and can be the result of a combination of multiple sub-structures. The associated enhancement in temperature (Figure <ref>f,g) addresses an interesting aspect for future studies. §.§ Sheet Thickness and Magnetic Reconnection We measure the thickness of each sheet structure by taking samples at four different heights evenly distributed on the PS. The results are shown in Figure <ref>, where the height of each case is calculated with respect to the center of two footpoints of the post-flare loops below. The scatter plot of measured thickness shows no significant height association overall, suggestive of a uniform long sheet structure. The thinnest case is PS#6 observed during the impulsive phase of the 2017 September 10 flare (cyan symbols; can be thinner than 1 Mm), and the thickest case is PS#2 observed during the gradual phase of the same flare (yellow; up to ∼9 Mm). The thicknesses measured in the impulsive phase of the 2023 flare (PS#7) are smaller than those in the gradual phase (PS#3) as well. This result suggests a higher reconnection rate during flare impulsive phase when the current sheet is thinning into smaller scales which may result in multiple reconnection sites. For each individual case, the thickness measured in DEM (hollow symbols) is mostly smaller than that from AIA intensity (solid symbols), where the former gives a better constraint for the hot plasmas. The thickness is generally consistent with previous results measured in EUV and X-ray observations <cit.>. We note it is the apparent thickness of the hot plasmas surrounding the flare current sheet but not the electric current sheet itself, while the latter is expected to be even thinner. The length-to-thickness ratios of PS features (Table <ref>) are much larger than the threshold of tearing mode instability <cit.>, although the length in the study is an underestimation containing only the bottom, visible part of PSs in AIA (except for PS#6 which shows a full length). § SUMMARY We study the thermal properties of PS structures observed in AIA EUV channels, which form in the wake of flare eruptions connecting the post-flare loops and serve as the primary site for magnetic reconnection. The PS features contain only a narrow temperature range of hot plasmas at around =7.0-7.1, and the temperature remains constant at different heights along the PS. The PSs observed during the flare impulsive phase are generally cooler, thinner, and less dense than those during the gradual phase, indicative of different thermal properties in the context of high reconnection rate during the impulsive eruption. Our results show a long, uniform structure with an isothermal temperature, suggesting balanced heating and cooling processes along the sheet, particularly associated with additional plasma heating. The 2017 September 10 flare of the highest GOES-class shows an exceptional distribution that the PS's temperature increases toward the flare looptop during the gradual phase; while it is hotter, denser, thicker than other events, and it also extends high beyond the AIA's FOV. 
When a magnetic flux rope erupts into the high corona, the EUV sheet structure observed in AIA is only the bottom portion of a long current sheet connecting to the runaway eruption, so it is also worth investigating how the properties of the complete structure are distributed. Such studies can benefit from observations of the extended corona, for example, by GOES/SUVI and future missions such as ECCCO. Although it is well accepted that the reconnection current sheet is highly dynamic and fragmented with numerous substructures on different scales, its surrounding plasmas behave as a steady and uniform structure in terms of thermal properties on macroscopic scales. To complement the thermal characteristics, spectroscopic diagnostics from hot coronal emission lines are particularly advantageous for understanding the nonthermal and turbulent processes in the flare current sheet. We are grateful to the NASA SDO/AIA science team for the science data and analysis tools. TG acknowledges support by contract SP02H1701R from Lockheed-Martin to SAO. KKR acknowledges support from NASA grant 80NSSC19K0853.
http://arxiv.org/abs/2407.02359v1
20240702152504
The Poisson transport map
[ "Pablo López-Rivera", "Yair Shenfeld" ]
math.PR
[ "math.PR", "39B62, 60G55, 60H07" ]
Diffusion Models for Tabular Data Imputation and Synthetic Data Generation Ioannis Arapakis July 8, 2024 ========================================================================== § ABSTRACT We construct a transport map from Poisson point processes onto ultra-log-concave measures over the natural numbers, and show that this map is a contraction. Our approach overcomes the known obstacles to transferring functional inequalities using transport maps in discrete settings, and allows us to deduce a number of functional inequalities for ultra-log-concave measures. In particular, we provide the currently best known constant in modified logarithmic Sobolev inequalities for ultra-log-concave measures. § INTRODUCTION §.§ The Poisson transport map A classical way to establish functional inequalities for a given probability measure is to find a Lipschitz transport map from a source measure, for which the inequality is known, onto the target measure of interest. For example, suppose we want to prove a logarithmic Sobolev inequality for a probability measure over ^d. Suppose further that we can find an L-Lipschitz map :^n→^d, where n≥ d, which transports the standard Gaussian γ_n on ^n to . Since γ_n satisfies a logarithmic Sobolev inequality with constant 2, we can write, for g:^d→, _(g^2)=_γ_n(g^2∘)≤ 2∫_^n|∇(g∘)|^2γ_n≤ 2L^2∫_^n|(∇ g)∘)|^2γ_n=2L^2∫_^d|∇ g|^2. Thus, satisfies a logarithmic Sobolev inequality with constant 2L^2. If we wish to apply this method for discrete measures, we face a number of obstacles. Consider for instance the problem of constructing a Lipschitz transport map :→ between the Poisson measure (with intensity 1) _1 on , and another probability measure on . Since the domain and range of are both , the map cannot split the mass of _1 at any position in , which severely restricts the type of measures that can arise as the pushforward of _1 under . In addition, even if we can construct a Lipschitz transport map between _1 and , the lack of chain rule in the discrete setting hinders the argument in (<ref>). In this work we show that these obstacles can be overcome by transporting the Poisson point processes onto probability measures on . In the notation above, d=1 and n= ∞. In addition, we will show that in the setting considered in this work, the chain rule issue can be avoided. Let us describe informally our transport map. Fix a time >0 and >0, and consider a Poisson point process over [0,]× [0,]: * The numbers of points that fall in disjoint regions of [0,]× [0,] are independent. * Given B⊆ [0,]× [0,], the number of points that fall into B is distributed like a Poisson measure on with intensity (B). Now let :[0,]→ [0,] be a curve, and define the counting process (_t^)_t∈ [0,] by letting X_t^:=number of points in [0,t]× [0,] that fall below the curve , (Figure <ref>). Given a measure on , we can choose in a stochastic way so that _^∼. We call _^ the Poisson transport map as it transports the Poisson point process onto . The Poisson transport map can be viewed as the discrete analog of the Brownian transport map of Mikulincer and the second author <cit.>, which transports the Wiener measure on path space onto probability measures over ^d. The Brownian transport map is based on the Föllmer process, and, analogously, the Poisson transport map is based on the process (_t^)_t∈ [0,T], which is the discrete analogue of the Föllmer process. 
The process (_t^)_t∈ [0,T] was constructed by Klartag and Lehec <cit.> (specializing and elaborating on earlier work of Budhiraja, Dupuis, and Maroulas <cit.>), who used it to prove functional inequalities. In Section <ref>, we discuss the similarities and differences between the Brownian transport map and the Poisson transport map. §.§ Ultra-log-concave measures Just as in the continuous case, we cannot expect to have a Lipschitz transport map (with good constants) from onto any probability measure on , since the existence of such map will imply functional inequalities for . The classical result on the existence of Lipschitz transport maps in the continuous setting is due to Caffarelli <cit.>, who showed that if n=d, and =γ_d with f:^d→_≥ 0 log-concave, then there exits a 1-Lipschitz transport map between γ_d and . Closer to our setting, it was shown in <cit.> that the Brownian transport map is 1-Lipschitz when the target measure over ^d is of the form =γ_d, with f log-concave. In the discrete setting, the analogue of a measure being “more log-concave than the Gaussian" is that the measure is ultra-log-concave. To define this notion we recall that a positive function :→_> 0 is log-concave if ^2()≥(-1)(+1), ∀ ∈{1,2,…}. A probability measure on is ultra-log-concave if =_, where _ is the Poisson measure with intensity , and :→_>0 is a positive bounded log-concave function. (In Section <ref> we recall equivalent definitions of ultra-log-concave measures.) Ultra-log-concave measures form an important class of discrete probability measures as it possesses desirable properties such as closure under convolution <cit.>. They are also ubiquitous and show up in fields outside of probability such as combinatorics and convex geometry. We refer to the introduction of <cit.> for more information. Our first main result is that the Poisson transport map from the Poisson point process onto ultra-log-concave measures is 1-Lipschitz. We will formulate this condition in terms of the Malliavin derivative _(t,z) of _ (see Section <ref>). Fix a real number >0, and let =_ be an ultra-log-concave probability measure over . Let _ be the Poisson transport map from to . Then, -almost-surely, _(t,z)_∈{0,1} ∀  (t,z)∈ [0,]× [0,], (=(1)/(0)). The fact that _(t,z)_ is integer-valued follows from the definition of the Malliavin derivative _(t,z), and since _ is integer-valued. However, a priori, saying that _ is 1-Lipschitz could have implied _(t,z)_∈{-1,0,1}. Theorem <ref> shows that _(t,z)_≥ 0, which will be important to tackle the chain rule issue when transporting functional inequalities from to . §.§ Functional inequalities for ultra-log-concave measures The absence of the chain rule in the discrete setting complicates the study of functional inequalities for measures on . For example, Poisson measures _ over , the discrete analogues of Gaussians, do not satisfy logarithmic Sobolev inequalities. Rather, they satisfy modified logarithmic Sobolev inequalities, the strongest of which is due to Wu. To introduce Wu's inequality let be the discrete derivative of a function g:→, g():=g(+1)-g() for ∈. <cit.>. Let _ be the Poisson measure over with intensity . Then, for any positive g∈ L^2(,_), __(g)≤ __[Ψ(g, g)], where Ψ(u,v):=(u+v)log(u+v)-ulog u-(log u+1)v. In the continuous setting, as a consequence of the existence of 1-Lipschitz transport maps, measures which are more log-concave than Gaussians satisfy logarithmic Sobolev inequalities. 
Thus, in the discrete setting, we can expect ultra-log-concave measures to satisfy (<ref>). Indeed, Johnson showed the following: <cit.>. Let be an ultra-log-concave probability measure over . Then, for any positive g∈ L^2(,), _(g)≤(1)/(0) _[Ψ(g, g)], where Ψ(u,v):=(u+v)log(u+v)-ulog u-(log u+1)v. Note that (1)/(0)= when =_, so (<ref>) and (<ref>) agree in this case. Our second main result shows that we can in fact improve the constant in the modified logarithmic Sobolev inequalities for ultra-log-concave measures. Let be an ultra-log-concave probability measure over . Then, for any positive g∈ L^2(,), _(g)≤ _[Ψ(g, g)], where Ψ(u,v):=(u+v)log(u+v)-ulog u-(log u+1)v. It will follow from our work (Corollary <ref>) that ≤(1)/(0), so that (<ref>) improves on (<ref>). (Note however that (<ref>) holds, with constant 1/c, for the larger class of c-log-concave measures <cit.>.) Again, when =_, we have =. Theorem <ref> raises the question of what is the optimal constant in modified logarithmic Sobolev inequalities for ultra-log-concave measures. Fraser and Johnson <cit.> showed that the Poincaré inequality for ultra-log-concave measures holds with a constant at least as good as []:=_Z∼[Z]. On the other hand, it will follow from our work (Corollary <ref>) that []≤≤(1)/(0), which begs the question of whether (<ref>) holds with constant []. As evidence for an affirmative answer, it was shown by Aravinda, Marsiglietti, and Melbourne <cit.> that ultra-log-concave measures satisfy concentration inequalities with Poisson tail bounds. On the other hand, if the modified logarithmic Sobolev inequalities were to hold for ultra-log-concave measures with constant [], the result <cit.> could be deduced from the usual Herbst argument. Theorem <ref> is in fact a corollary of the following more general result, namely, the validity of Chafaï's Φ-Sobolev inequalities for ultra-log-concave measures; see Section <ref> for the precise definitions. Let be an ultra-log-concave probability measure over . Let ℐ⊆ be a closed interval, not necessarily bounded, and let Φ:ℐ→ be a smooth convex function. Suppose that the function {(u,v)∈^2:(u,u+v)∈ℐ×ℐ}∋ (u,v) ↦ Ψ(u,v):=Φ(u+v)-Φ(u)-Φ'(u)v is nonnegative and convex. Then, for any g∈ L^2(,), such that -a.s. g, g∈ℐ, ^Φ_(g)≤ _[Ψ(g, g)]. We conclude with the following transport-entropy inequality for ultra-log-concave measures; see Section <ref> for the precise definitions. Let =_ be an ultra-log-concave probability measure on , and let :=(1)/(0). Then, for any probability measure ν on which is absolutely continuous with respect to , and has a finite first moment, we have α_(W_1,|·|(ν,))≤ H(ν|), where α_c(r):=c[(1+r/c)log(1+r/c)-r/c]. The constant in (<ref>) can in fact be improved; cf. Remark <ref>. §.§ Organization of paper In Section <ref> we review some of the basics of ultra-log-concave measures, as well as the basics of the Poisson semigroup. Section <ref> provides the construction of the Poisson transport map, as well as some of its properties. In Section <ref> we prove our contraction theorem (Theorem <ref>). In addition, in Section <ref>, we compare and contrast the Brownian transport map and the Poisson transport map. Finally, in Section <ref> we prove our functional inequalities (Theorem <ref>, Theorem <ref>, and Theorem <ref>). §.§ Acknowledgments We are grateful to Joseph Lehec, Arnaud Marsiglietti, and Avelio Sepúlveda for their valuable comments. 
We would like to extend special thanks to Max Fathi for his many helpful remarks on this manuscript This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 945332. This work has also received support under the program “Investissement d'Avenir" launched by the French Government and implemented by ANR, with the reference “ANR-18-IdEx-0001" as part of its program “Emergence". This work received funding from the Agence Nationale de la Recherche (ANR) Grant ANR-23-CE40-0003 (Project CONVIVIALITY), as well as funding from the Institut Universitaire de France. This material is based upon work supported by the National Science Foundation under Award DMS-2331920. § ULTRA-LOG-CONCAVE MEASURES In this section we establish some of the properties of ultra-log-concave measures that will be used throughout the paper. We will denote :={0,1,2,…} the set of nonnegative integers, _+:={1,2,…}, and by _ the Poisson measure on with intensity >0, _():=e^-^/!, ∈. We say that a positive function :→_> 0 is log-concave if ^2()≥(-1)(+1), ∀ ∈_+. Equivalently, :→_> 0 is log-concave if the function _+∋↦()/(-1) is non-increasing. The following definition captures the intuition of a probability measure being more log-concave than a Poisson measure. A probability on is ultra-log-concave if there exists >0, and a positive bounded log-concave function , such that ()=()_() for all ∈. The intensity in Definition <ref> does not in fact play any role. It is readily verified from the definition that is ultra-log-concave, with respect to any intensity >0, if and only if, ^2()≥+1/(+1)(-1), ∀ ∈_+. In other words, once is more log-concave than _ for some , it is in fact more log-concave than _ for all . The Poisson semigroup (_t)_t≥ 0 will play an important role in our work. Given a function g:→ we define, for t≥ 0, _0g:=g, and _tg():=∑_n=0^∞g(+n)_t(n), ∀ ∈, t>0. The Poisson semigroup satisfies the identity ∂_t(_tg)()=(_tg)(), ∀ ∈, where h():=h(+1)-h(), for any h:→. Fix a time T>0. For future reference, given nonnegative :→, we set (t,):=log_-t() which satisfies ∂_t(t,)=-e^(t,)+1, ∀ t∈ [0,],  ∈. Our next result shows that the Poisson semigroup preserves log-concavity. While a number of proofs are available, our proof will mimic the proof of the fact that the heat semigroup preserves log-concavity. The latter is a consequence of the Prékopa-Leindler inequality, so we will use a discrete analogue of the Prékopa-Leindler inequality proven by Klartag and Lehec. :→_>0 be a log-concave function. Then, for any t≥ 0, _t is a log-concave function. Let V:=log. The log-concavity of implies that V is concave: 2V()≥ V(+1)+V(-1) ∀ ∈_+. Note that the condition (<ref>) is equivalent to V()+V(m)≥ V(⌊+m/2⌋)+V(⌈+m/2⌉) ∀ ,m∈. Our goal is to show that (_t e^V())^2≥_t e^V(+1)_t e^V(-1) ∀ ∈_+, which, by definition, is equivalent to (∑_n=0^∞e^V(+n)_t(n))^2≥(∑_n=0^∞e^V(+1+n)_t(n))(∑_n=0^∞e^V(-1+n)_t(n)). By <cit.>, (<ref>) implies (<ref>). Let :→_> 0 be a log-concave function. Fix >0 and ∈. The map [0,]∋ t↦_-t(+1)/_-t() is non-decreasing. Define θ: [0,]→ by θ(t):=_-t(+1)/_-t(), so we need to show that θ'(t)≥ 0. Indeed, by (<ref>), θ'(t)=-∂_t(_-t)(+1)/_-t()-_-t(+1)(-∂_t(_-t)())/(_-t())^2 =_-t(+1)((_-t)())/(_-t())^2-(_-t)(+1)/_-t() =1/(_-t())^2{_-t(+1)[_-t(+1)-_-t()]-_-t()[_-t(+2)-_-t(+1)]} =1/(_-t())^2{(_-t)^2(+1)-_-t()_-t(+2)}≥ 0, where the last inequality holds by Proposition <ref>. 
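As an aside (not part of the paper), the ultra-log-concavity condition above can be checked numerically. The small Python sketch below verifies that a Binomial(20, 0.3) law satisfies π(k)^2 ≥ ((k+1)/k) π(k+1) π(k-1), a standard example of an ultra-log-concave measure, whereas a geometric law on the nonnegative integers is log-concave but fails the condition.

import numpy as np
from scipy.stats import binom, geom

def is_ultra_log_concave(pmf, kmax):
    # check pi(k)^2 >= ((k+1)/k) * pi(k+1) * pi(k-1) for k = 1..kmax (with a float tolerance)
    return all(
        pmf(k) ** 2 >= (k + 1) / k * pmf(k + 1) * pmf(k - 1) - 1e-15
        for k in range(1, kmax + 1)
    )

binom_pmf = lambda k: binom.pmf(k, 20, 0.3)     # Binomial(20, 0.3)
geom_pmf = lambda k: geom.pmf(k + 1, 0.4)       # Geometric shifted to {0,1,2,...}, p = 0.4
print("Binomial(20, 0.3) ultra-log-concave:", is_ultra_log_concave(binom_pmf, 30))
print("Geometric(0.4)    ultra-log-concave:", is_ultra_log_concave(geom_pmf, 30))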
§ THE POISSON TRANSPORT MAP In this section we construct the Poisson transport map. In Section <ref> we recall the construction of the canonical space for the Poisson point process, as well as the basics of the Malliavin calculus on this space. We will use <cit.> as our references. In Section <ref> we describe the process (_t^)_t∈ [0,] constructed by Klartag and Lehec <cit.>, which we interpret as a transport map from the Poisson measure on the canonical space onto probability measures over . §.§ The Poisson space Fix a real number >0. Let =_ be an ultra-log-concave probability measure on , where :→_> 0 is a positive bounded log-concave function. Set :=(1)/(0), and let :=[0,]× [0,]. We let be the sigma-algebra generated by the Borel sets of endowed with the product topology, and we let be the Lebesgue measure on . We define the Poisson space (,,) over (,,) by letting the probability space be :={ω:ω=∑_iδ_(t_i,z_i),  (t_i,z_i)∈ (at most countable)}, the sigma-algebra be :=σ(∋ω↦ω(B): B∈), and defining the probability measure by ∀  B∈, ∀ ∈, [{ω(B)=}]=_(B)(), ∀ n∈, ω(B_1),…,ω(B_n) are -independent if B_1,…, B_n∈ are disjoint. Given a measurable function G:→, we define the Malliavin derivative of G as the function G:×→ given by _(t,z) G(ω):= G(ω+δ_(t,z))-G(ω), ∀  (t,z)∈, ∀ ω∈. Of particular importance to us will be binary Malliavin derivatives, for which one has the following chain rule. Let G:→ be a measurable function such that _(t,z) G∈{0,1} for all (t,z)∈. Then, for any g:→, _(t,z)(g∘ G)= g(G)·_(t,z)G ∀  (t,z)∈. Fix ω∈ and (t,z)∈. If _(t,z)G(ω)=G(ω+δ_(t,z))-G(ω)=0, then _(t,z)(g∘ G(ω))=g(G(ω+δ_(t,z)))-g(G(ω))=0, since G(ω+δ_(t,z))=G(ω), which establishes (<ref>). Suppose then that _(t,z)G(ω)=1, so that G(ω+δ_(t,z))=G(ω)+1. Then, _(t,z)(g∘ G(ω)) =g(G(ω+δ_(t,z)))-g(G(ω))=g(G(ω)+1)-g(G(ω))= g(G(ω)) = g(G(ω))_(t,z) G(ω), where in the last equality we used _(t,z) G(ω)=1. §.§ The Poisson transport map Our construction of the Poisson transport map is based on the stochastic process used by Klartag and Lehec in <cit.> (whose origin can be found in Budhiraja, Dupuis, and Maroulas <cit.>). Let the notation and assumptions of Section <ref> hold. Given t∈ [0,] let _t be the sigma-algebra generated by the Borel sets of [0,t]× [0,] endowed with the product topology, and define the sigma-algebra _t on by _t:=σ(∋ω↦ω(B): B∈_t). We say that a stochastic process (_t)_t∈ [0,], where _t:→, is predictable if the function (t,ω)↦_t(ω) is measurable with respect to σ({(s,t]× B: s≤ t≤,  B∈_t}). Given a predictable nonnegative stochastic process :=(_t)_t∈ [0,], such that _t≤ for all t∈ [0,], we define the stochastic counting process (_t^)_t∈ [0,] by _t^(ω)=ω({(s,x)∈: s< t,  x≤_s(ω)}), (see Figure <ref>). Note that (_t^)_t∈ [0,] is a non-decreasing integer-valued left-continuous process such that _t^ is _t-measurable for all t∈ [0,], and hence (_t^)_t∈ [0,] is predictable. In addition, almost-surely, there are only finitely many jumps of (_t^)_t∈ [0,], each of which is of size 1. Thus, the process (_t^)_t∈ [0,] is a Poisson process with stochastic intensity . We will work with a specific stochastic intensity , namely, we will take the stochastic intensity ^* defined by the equation ^*_t(ω)=_-t(_t^^*(ω)(ω)+1)/_-t(_t^^*(ω)(ω))=e^(t,_t^^*(ω)(ω)), where we recall (<ref>). The existence of a solution to (<ref>) was given in <cit.> via a fixed-point argument. Note that by Proposition <ref>, (<ref>), and Corollary <ref>, _t^*≤_-t(1)/_-t(0)≤(1)/(0)=. 
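To make the construction concrete, here is a small, self-contained Python simulation sketch (illustrative only; the choice of f below is a hypothetical example, not one used in the paper). It realizes a Poisson point process on [0,T]×[0,K], accepts an atom (t,z) when z lies below the stochastic intensity Λ*_t = P_{T-t}f(X_t+1)/P_{T-t}f(X_t), and compares the empirical law of the resulting count X_T with the target ν ∝ f·π_T, consistent with the time-marginal lemma below.

import numpy as np
from math import exp, factorial

rng = np.random.default_rng(1)

T = 2.0                                     # time horizon; the reference Poisson measure is pi_T
f = lambda k: float(np.exp(-0.1 * k * k))   # hypothetical positive, bounded, log-concave f
K = f(1) / f(0)                             # upper bound on the stochastic intensity

def P(s, k, nmax=40):
    # Poisson semigroup applied to f: (P_s f)(k) = sum_n f(k+n) e^{-s} s^n / n!
    return sum(f(k + n) * exp(-s) * s ** n / factorial(n) for n in range(nmax))

def sample_X_T():
    n_pts = rng.poisson(T * K)                    # Poisson point process on [0,T] x [0,K]
    t = rng.uniform(0.0, T, n_pts)
    z = rng.uniform(0.0, K, n_pts)
    order = np.argsort(t)
    X = 0
    for ti, zi in zip(t[order], z[order]):        # sweep the atoms in time order
        lam = P(T - ti, X + 1) / P(T - ti, X)     # stochastic intensity, evaluated at X_{t-}
        if zi <= lam:                             # count atoms falling below the curve
            X += 1
    return X

samples = np.array([sample_X_T() for _ in range(10000)])
ks = np.arange(0, 10)
w = np.array([f(k) * T ** k / factorial(k) for k in ks])   # nu(k) proportional to f(k) * pi_T(k)
target = w / w.sum()
for k in ks:
    print(f"k={k}: empirical {np.mean(samples == k):.3f}  target {target[k]:.3f}")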
To ease the notation, from here on, we will denote :=(_t)_t∈ [0,]:=(_t^^*)_t∈ [0,] and :=(_t)_t∈ [0,]:=(_t^*)_t∈ [0,]. The next lemma provides the time marginals of . Let be the process defined by (<ref>). For every t∈ [0,], the law of _t is (_-t)_t. Let h:→ be any function and fix ω∈. By construction, [0,]∋ t↦ h(_t(ω)) is -a.s. piecewise constant with jumps of size 1 at t_1<t_2<⋯, where ω=∑_iδ_(t_i,z_i). Hence, for each t∈ [0,], h(_t(ω))=h(0)+∫_0^t h(_s(ω))_s:=h(0)+∑_t_i≤ t h(_t_i(ω)). Taking expectation in (<ref>), and applying <cit.>, we get [ h(_t)]=h(0)+[∫_0^t h(_s)_s s]. Let _t be the law of _t. Differentiating (<ref>) in t, and using (<ref>), we can apply summation by parts to get ∑_j=0^∞h(j)∂_t_t(j)=∑_j=0^∞ h(j) e^(t,j)_t(j)=-h(0)e^(t,0)_t(0)-∑_j=0^∞h(j+1)[e^(t,j)_t(j)]. Equation (<ref>) holds for all h, so fix a non-zero ∈ and let h(j)=1_j= to get the discrete Fokker-Planck equation, ∂_t_t()=-[e^(t,-1)_t(-1)] ∀  k∈_+. To get an equation at =0, take h(j)=1_j=0 and use (<ref>) to deduce ∂_t_t(0)=-e^(t,0)_t(0). Using the convention e^(t,-1)=_t(-1)=0, we can combine (<ref>) and (<ref>) to get ∂_t_t()=-[e^(t,-1)_t(-1)] ∀  k∈. One can check that (<ref>) is uniquely solved by _t(k)=_-t(k)_t(k) ∀  k∈. An immediate corollary of Lemma <ref> is that _ is distributed as . We call the map _:→ the Poisson transport map as it transports to . §.§ Properties of the Poisson transport map and ultra-log-concave measures Let us prove a number of properties of the processes ,, which we will use later. Let be the process defined by (<ref>). Then is a -martingale, i.e., the process [0,]∋ t↦_-t(_t+1)/_-t(_t) is a -martingale. Further, the common mean of is _(1). Let h:[0,]×→ be such that the function [0,]∋ t↦ h(t,k) is continuous for all ∈. Then the function [0,]∋ t↦ h(t,_t) is piecewise absolutely-continuous function in t, so h(t,_t)=h(0,0)+∫_0^t h(s,_s)_s+∫_0^t∂_s h(s,_s) s. Take h(t,):=_-t(+1)/_-t(), and note that it satisfies the continuity condition. Then, using (<ref>), we get _-t(_t+1)/_-t(_t)-_(1)/_(0) =∫_0^t (e^(s,_s))_s +∫_0^t∂_s (e^(s,_s)) s =∫_0^t (1- ∂_s(s,_s))_s+ ∫_0^t∂_s (e^(s,_s)) s =-∫_0^t (∂_s(s,_s))_s+ ∫_0^t∂_s (e^(s,_s)) s. On the other hand, for every ∈, ∂_s (e^(s,))=e^(s,)∂_s(s,)=e^(s,)∂_s(s,), so by (<ref>), ∂_s (e^(s,_s))= (∂_s(s,_s)) _s. We conclude that _-t(_t+1)/_-t(_t)-_(1)/_(0)=-∫_0^t (∂_s(s,_s))[_s-_s s]. The process (_t-∫_0^t_s s)_t∈ [0,] is called the compensated process, and is a martingale. Hence, the process _-t(_t+1)/_-t(_t) is a stochastic integral with respect to a martingale, and hence a martingale <cit.>. To compute the common mean of note that since _∼ (cf. Lemma <ref>), _[_]=_[(_+1)/(_)]=∑_j=0^∞(j+1)/(j)(j)=∑_j=0^∞(j+1)_(j)=_(1). The fact that is a martingale allows us give a representation of the mean of in terms of the Poisson semigroup, as well as an upper bound. [](1)= _(1)(2)≤∫_0^_-t(1)/_-t(0) t(3)=-log(0)(4)=|log(0)|(5)≤(1)/(0). To prove identity (1), take h(j)=j and t= in (<ref>) to get [_]=[∫_0^_s s]=[_]= _(1), where we used Lemma <ref>. For the inequality (2), note that _f(0)=1, since =_ is a probability measure, and use Corollary <ref> to get _(1)/_f(0)≤_-t(1)/_-tf(0) for all t∈ [0,]. For the identity (3), use (<ref>) to compute ∫_0^_-t(1)/_-tf(0) t=∫_0^ e^(t,0) t=∫_0^ [1-∂_t(t,0)] t=-[ (,0)- (0,0)]. The result follows since (,0)=log (0), and (0,0)=log_(0)=0 (because _f(0)=1 as =_ is a probability measure). The identity (4) follows from =_. 
Finally, the inequality (5) holds since, by Corollary <ref>, _-t(1)/_-tf(0)≤(1)/(0) so, by (3)-(4), |log(0)|=∫_0^_-t(1)/_-t(0) t≤(1)/(0)=(1)/(0). § CONTRACTION OF THE POISSON TRANSPORT MAP The main result of this section is that the Poisson transport map is a contraction when the target measures are ultra-log-concave. This result will follow from the following more general theorem, showing that the Malliavin derivative of is binary and nonnegative. Fix a real number >0 and let =_ be an ultra-log-concave probability measure over . Let be the process defined by (<ref>). Then, -almost-surely, for every s∈ [0,], _(t,z)_s∈{0,1} ∀  (t,z)∈. An immediate corollary of Theorem <ref> is that the Poisson transport map is a contraction, thus proving Theorem <ref>. Fix a real number >0 and let =_ be an ultra-log-concave probability measure over . Let _ be the Poisson transport map from to . Then, -almost-surely, _(t,z)_∈{0,1} ∀  (t,z)∈. Let us turn to the proof of Theorem <ref>. Fix (t,z)∈ and ω∈. Then -a.s., there exists n∈ such that ω=∑_i=1^nδ_(t_i,z_i) for (t_i,z_i)∈, with i∈ [n]:={1,…, n}, 0<t_1<⋯<t_n<, and t≠ t_i for all i∈ [n]. Fix s∈ [0,]. We need to show that _(t,z)_s(ω)=_s(ω+δ_(t,z))-_s(ω)∈{0,1}. Let us first explain the intuition why (<ref>) holds, and then turn to its rigorous verification. There are three cases to consider. The first two are easy, and the third one is the interesting one. * Case 1. s≤ t: Then the contribution of the atom (t,z) is not captured by either _s(ω+δ_(t,z)) or _s(ω), so both processes behave identically, and hence _(t,z)_s(ω)=0. * Case 2. t<s and z lies above the curve (ω+δ_(t,z)): Then the atom (t,z) is not counted by the process (ω+δ_(t,z)), so the processes (ω+δ_(t,z)) and (ω) are equal, and hence _(t,z)_s(ω)=0. * Case 3. The interesting case is t<s and z lies below the curve (ω+δ_(t,z)), so the processes (ω+δ_(t,z)) and (ω) can in fact differ. Our goal is show that when the two processes differ, (ω+δ_(t,z)) is always greater than (ω), but by no more than 1. The key to prove this is to use the log-concavity of . Using the explicit expression of (<ref>), we can reason about the relation between (ω+δ_(t,z)) and (ω), and hence about the relation between (ω+δ_(t,z)) and (ω). Let us now turn to the actual proof of the theorem. Case 1. s≤ t: We will show _(t,z)_s(ω)=_s(ω+δ_(t,z))-_s(ω)=0. From the definition of (ω+δ_(t,z)), we know that the atom (t,z) is not counted by (ω+δ_(t,z)). So to verify (<ref>) it suffices to show that each atom (t_i,z_i) is either counted by both (ω+δ_(t,z)) and (ω), or by neither. If t_i≥ s for all i∈ [n], then (<ref>) holds since both (ω+δ_(t,z)) and (ω) start at 0, and neither adds any atom by time s. If there exists i∈ [n] such that t_i<s, let us denote i_max:=max{i∈ [n]:t_i<s}. Since the processes are left-continuous, starting at 0, _t_1(ω+δ_(t,z))=_t_1(ω)=0. Hence, by (<ref>), _t_1(ω)=_-t_1(_t_1(ω)+1)/_-t_1(_t_1(ω))=_-t_1(_t_1(ω+δ_(t,z))+1)/_-t_1(_t_1(ω+δ_(t,z)))=_t_1(ω+δ_(t,z)). It follows that z_1≤_t_1(ω+δ_(t,z)) ⟺ z_1≤_t_1(ω). Hence, for each r∈(t_1,t_2∧ s], (if n=1 then for each r∈ (t_1,s]), _r(ω+δ_(t,z))=_r(ω). If i_max=1, we are done. Otherwise, if i_max≥ 2, we can repeat the above argument inductively for i∈{2,…,i_max} to conclude that (<ref>) holds. Case 2. t<s and z>_t(ω+δ_(t,z)): We will show _(t,z)_s(ω)=_s(ω+δ_(t,z))-_s(ω)=0. The argument of Case 1 shows that _t(ω+δ_(t,z))=_t(ω). Since z>_t(ω+δ_(t,z)), the atom (t,z) is not counted by (ω+δ_(t,z)). It remains to analyze the atoms (t_i,z_i) for i∈ [n]. 
If there exist no t_i such that t<t_i<s, then it is clear that _s(ω+δ_(t,z))=_s(ω), so (<ref>) holds. Suppose then that there exist t_i such that t<t_i<s, and let i_min:=min{i∈ [n]:t<t_i<s}. Similar to Case 1, for r∈ (t,t_i_min] we have _r(ω+δ_(t,z))=_r(ω). In particular, _t_i_min(ω+δ_(t,z))=_t_i_min(ω) so, as in Case 1, z_i_min≤_t_i_min(ω+δ_(t,z)) ⟺ z_i_min≤_t_i_min(ω). Hence, for each r∈ (t_i_min,t_i_min+1∧ s], (if i_min=n then for r∈ (t_i_min, s]), _r(ω+δ_(t,z))=_r(ω). We may repeat the argument above inductively for all i∈ [n] satisfying t<t_i<s to conclude that _s(ω+δ_(t,z))=_s(ω), so (<ref>) holds. Case 3. t<s and z≤_t(ω+δ_(t,z)): In contrast to Cases 1 and 2 we will show that _(t,z)_s(ω)=_s(ω+δ_(t,z))-_s(ω)∈{0,1}. Again, the argument of Case 1 shows that _t(ω+δ_(t,z))=_t(ω). In contrast to Case 2, since z≤_t(ω+δ_(t,z)), the atom (t,z) is counted by (ω+δ_(t,z)). Let us analyze the possible values of _s(ω+δ_(t,z)) and _s(ω). If there exist no t_i such that t<t_i<s, then _s(ω+δ_(t,z))=_t(ω+δ_(t,z))+1 and _s(ω)=_t(ω), so _(t,z)_s(ω)=1, and hence (<ref>) holds. Suppose then that there exists t_i such that t<t_i<s, and as in Case 2, let i_min:=min{i∈ [n]:t<t_i<s}. For r∈ (t,t_i_min], we have _r(ω+δ_(t,z))=_t(ω)+1 and _r(ω)=_t(ω). In particular, _t_i_min(ω+δ_(t,z))=_t_i_min(ω)+1, so by Proposition <ref>, and (<ref>), _t_i_min(ω+δ_(t,z)) =_-t_i_min(_t_i_min(ω+δ_(t,z))+1)/_-t_i_min(_t_i_min(ω+δ_(t,z)))=_-t_i_min(_t_i_min(ω)+2)/_-t_i_min(_t_i_min(ω)+1) ≤_-t_i_min(_t_i_min(ω)+1)/_-t_i_min(_t_i_min(ω))=_t_i_min(ω). Let us record then the one-direction analogue of (<ref>), z_i_min≤_t_i_min(ω+δ_(t,z)) ⟹ z_i_min≤_t_i_min(ω). We now have a three sub-cases to consider. Case 3.1. z_i_min≤_t_i_min(ω+δ_(t,z)): Applying (<ref>) we can deduce that for each r∈(t_i_min,t_i_min+1∧ s], (if i_min=n then for each r∈(t_i_min, s]), _(t,z)_r(ω)=1, since both (ω) and (ω+δ_(t,z)) count (t_i_min,z_i_min). Case 3.2. z_i_min>_t_i_min(ω+δ_(t,z)) and z_i_min>_t_i_min(ω): For each r∈(t_i_min,t_i_min+1∧ s], (if i_min=n then for each r∈(t_i_min, s]), we have _(t,z)_r(ω)=1, since both (ω) and (ω+δ_(t,z)) did not count (t_i_min,z_i_min). Case 3.3. z_i_min>_t_i_min(ω+δ_(t,z)) and z_i_min≤_t_i_min(ω): For each r∈(t_i_min,t_i_min+1∧ s], (if i_min=n then for each r∈(t_i_min, s]), we have _(t,z)_r(ω)=0, since (ω) counted (t_i_min,z_i_min), but (ω+δ_(t,z)) did not. If there exists no t_i, for i>i_min, such that t<t_i<s, then Cases 3.1-3.3 verify (<ref>). Suppose then that there exist i>i_min such that t<t_i<s. We will proceed inductively. From Cases 3.1-3.3 we have that _(t,z)_t_i_min+1(ω)∈{0,1}. If _(t,z)_t_i_min+1(ω)=1, then arguing as in (<ref>), we have _t_i_min+1(ω+δ_(t,z))≤_t_i_min+1(ω). We now repeat Cases 3.1-3.3, replacing i_min by i_min+1. If _(t,z)_t_i_min+1(ω)=0, then _t_i_min+1(ω+δ_(t,z))= _t_i_min+1(ω), so (<ref>) holds, and again we repeat Cases 3.1-3.3, replacing i_min by i_min+1. Continuing in this manner we deduce (<ref>). The proof of Theorem <ref> yields the following necessary condition for the Malliavin derivative being 1. Fix (t,z)∈ and ω∈. Then -a.s., given s∈ [0,], if _(t,z)_s(ω)=1, we must have t<s and z≤_t(ω+δ_(t,z))= _t(ω). The conditions t<s and z≤_t(ω+δ_(t,z)) hold because the proof of Theorem <ref> showed that _(t,z)_s(ω)=0 in Cases 1-2. The condition _t(ω+δ_(t,z))= _t(ω) holds since in Case 3 we have shown _t(ω+δ_(t,z))=_t(ω), so the result follows from (<ref>). §.§ The Brownian transport map vs. 
the Poisson transport map Let us elaborate on the similarities and dissimilarities between the Brownian transport map <cit.> and the Poisson transport map. For simplicity, let us take =1. We begin with a sketch of the Brownian transport map. Let be a probability measure on ^d of the form =γ_d. Denote by (_t)_t∈ [0,1] the heat semigroup on ^d, and consider the stochastic differential equation, _t=∇log_1-t(_t) t+ B_t, _0=0, where (B_t)_t∈ [0,1] is a standard Brownian motion in ^d. The process :=(_t)_t∈ [0,1] is known as the Föllmer process, and can be seen as Brownian motion conditioned to be distributed like at time 1. Alternatively, the process is a solution to an entropy minimization problem over the Wiener space. In <cit.>, _1 is called the Brownian transport map, as it transports the Wiener measure (the law of (B_t)_t∈ [0,1]) onto . Now suppose that =γ_d is such that :^d→_≥ 0 is log-concave. It was shown in <cit.> that, in such setting, the Brownian transport map _1 is 1-Lipschitz, in the sense that the Malliavin derivative of _1 is bounded in absolute value by 1. The proof of this result proceeds by differentiating (<ref>) (with ) to get <cit.>, ∂_s_s=∇^2log_1-s(_s)_s. Hence, to show that _1 is 1-Lipschitz, i.e., |_1|≤ 1, it suffices to control ∇^2log_1-s(_s), and then use Grönwall's inequality. In particular, when is log-concave, _1-s is also log-concave (consequence of the Prékopa-Leindler inequality), i.e., ∇^2log_1-s(_s)≤ 0, so (<ref>) and Grönwall's inequality yield |_1|≤ 1. The analogue in the discrete setting of the Föllmer process (<ref>) is the process defined in (<ref>). Indeed, it was shown in <cit.> that the process is the solution to the corresponding entropy minimization problem on the Poisson space. Unlike the continuous setting, here we do not have an analogue of (<ref>), but we do have an analogue of (<ref>). The process _s= _1-s(_s+1)/_1-s(_s) plays the role of ∇log_1-s(_s), and the next result is the analogue of (<ref>). For every s∈ [0,1], -almost-surely, _(t,z)_s≤ 0 ∀  (t,z)∈. Fix ω∈. By definition, _(t,z)_s(ω)=_s(ω+δ_(t,z))-_s(ω)=_1-s(_s(ω+δ_(t,z))+1)/_1-s(_s(ω+δ_(t,z)))-_1-s(_s(ω)+1)/_1-s(_s(ω)). By Theorem <ref>, _s(ω+δ_(t,z))∈{_s(ω),_s(ω)+1}. If _s(ω+δ_(t,z))=_s(ω), then _(t,z)_s(ω)=0. If _s(ω+δ_(t,z))=_s(ω)+1, then _(t,z)_s(ω)=_1-s(_s(ω)+2)/_1-s(_s(ω)+1)-_1-s(_s(ω)+1)/_1-s(_s(ω))≤ 0, where the inequality holds by Proposition <ref> and (<ref>). Let us conclude with a few remarks on some other differences between the Brownian and Poisson transport maps. * In the Brownian transport map setting, the source measure is always the Wiener measure on the Wiener space C([0,1]) of continuous functions [0,1]→, independent of the target measure . In contrast, the transported Poisson measure depends on , because the space =[0,]× [0,] depends on via . This difference is not material since the functional inequalities satisfied by do not depend on . * The fact that the Brownian transport map is 1-Lipschitz, when is more log-concave than the Gaussian, means that the functional inequalities which hold for the Gaussian also hold for with the same constants. In contrast, the constants in the functional inequalities for ultra-log-concave measures =_, obtained from the Poisson transport map, are different from those satisfied by _. This is not a deficiency of the Poisson transport map, but rather a manifestation of the discrete nature of the probability measures under consideration. 
§ FUNCTIONAL INEQUALITIES In this section we show how Corollary <ref> can be used to deduce functional inequalities for ultra-log-concave measures. In particular, the results of this section verify Theorem <ref>, Theorem <ref>, and Theorem <ref>. The proofs of all of the results below proceed by using an appropriate functional inequality for (cf. Section <ref>), and then, using Corollary <ref>, transporting these inequalities to ultra-log-concave measures. §.§ Φ-Sobolev inequalities In this section we prove both Theorem <ref> and Theorem <ref>. Let ℐ⊆ be a closed interval, not necessarily bounded, and let Φ:ℐ→ be a smooth convex function. Let (E,ℰ,Q) be a probability Borel space. The Φ-entropy functional ^Φ_Q is defined on the set of Q-integrable functions G:(E,ℰ)→ (ℐ,ℬ(ℐ)), where ℬ(ℐ) stands for the Borel sigma-algebra of ℐ, by ^Φ_Q(G):=∫_EΦ(G) Q-Φ(∫_E G Q). As shown by Chafaï, the Poisson measure satisfies Φ-Sobolev inequalities: <cit.>. Let ℐ⊆ be a closed interval, not necessarily bounded, and let Φ:ℐ→ be a smooth convex function. Suppose that the function {(u,v)∈^2:(u,u+v)∈ℐ×ℐ}∋ (u,v) ↦ Ψ(u,v):=Φ(u+v)-Φ(u)-Φ'(u)v is nonnegative and convex. Then, for any G∈ L^2(,), such that -a.s. G, G∈ℐ, ^Φ_(G)≤_[∫_Ψ(G,_(t,z)G) t z]. Let us now transport the inequality (<ref>) to ultra-log-concave measures, using the Poisson transport map, thus proving Theorem <ref>. Let be an ultra-log-concave probability measure over . Let ℐ⊆ be a closed interval, not necessarily bounded, and let Φ:ℐ→ be a smooth convex function. Suppose that the function {(u,v)∈^2:(u,u+v)∈ℐ×ℐ}∋ (u,v) ↦ Ψ(u,v):=Φ(u+v)-Φ(u)-Φ'(u)v is nonnegative and convex. Then, for any g∈ L^2(,), such that -a.s. g, g∈ℐ, ^Φ_(g)≤ _[Ψ(g, g)]. Define G∈ L^2(,) by G(ω):=g(_T(ω)), and apply (<ref>) to get ^Φ_(g)=^Φ_(G) ≤_[∫_Ψ(G,_(t,z)G) t z] =_[∫_Ψ(g∘_T,( g∘_)·_(t,z)_) t z], where the last equality holds by Corollary <ref> and Lemma <ref>. Since _(t,z)_∈{0,1} by Corollary <ref>, we have that -a.s., Ψ(g∘_T,( g∘_)·_(t,z)_)=Ψ(g∘_T,( g∘_))1_{_(t,z)_=1}. On the other hand, by Corollary <ref>, we have, -a.s., 1_{_(t,z)_=1}≤ 1_{z≤_t}. Since Ψ is nonnegative, we conclude from (<ref>) that Ψ(g∘_T,( g∘_)·_(t,z)_)≤Ψ(g∘_T,( g∘_)) 1_{z≤_t}. It follows from (<ref>) and (<ref>) that ^Φ_(g) ≤_[∫_Ψ(g∘_T, g∘_)1_{z≤_t} t z] =_[Ψ(g∘_T, g∘_)∫_1_{z≤_t} t z] =_[Ψ(g∘_T, g∘_)∫_0^_t t]. By (<ref>), ∫_0^_t t=∫_0^_-t(_t+1)/_-tf(_t) t. On the other hand, _-t(_t+1)/_-tf(_t)≤_-t(1)/_-tf(0) by Proposition <ref> and (<ref>). The proof is complete by Corollary <ref>(3). Taking Φ(r)=rlog r we deduce a modified logarithmic Sobolev inequality, thus proving Theorem <ref>. Let be an ultra-log-concave probability measure over . Then, for any positive g∈ L^2(,), _(g)≤ _[Ψ(g, g)], where Ψ(u,v):=(u+v)log(u+v)-ulog u-(log u+1)v. §.§ Transport-entropy inequalities In this section we prove Theorem <ref>. We fix an ultra-log-concave probability measure =_ on , and recall the definition of the associated Poisson space from Section <ref>. The starting point is a transport-entropy inequality for the Poisson measure by Ma, Shen, Wang, and Wu (a special case of their more general result), which requires the following definitions. Let d be the total variation distance on given by d(ω,ω'):=|ω()-ω'()| <cit.>. Given two probability measures Q,P on (,), with finite first moments, let the Wasserstein 1-distance between them be given by W_1,d(Q,P):=inf_Π∫_×d(ω,ω')Π(ω,ω'), where the infimum is taken over all couplings Π of (Q,P). 
If Q is absolutely continuous with respect to P, let the relative entropy between them be H(Q|P):=∫_log( Q/ P) Q. Finally, given c>0, let α_c(r):=c[(1+r/c)log(1+r/c)-r/c]. <cit.>. For any probability measure Q on (,) which is absolutely continuous with respect to , and has a finite first moment, we have α_(W_1,d(Q,))≤ H(Q|), where =(1)/(0). Let us now transport the inequality (<ref>), thus proving Theorem <ref>. To do so, we define the Wasserstein 1-distance between two probability measures ν,ρ on , with finite first moments, by W_1,|·|(ν,ρ):=inf_Π∫_×|x-y|Π(x,y), where the infimum is taken over all couplings Π of (ν,ρ). Let =_ be an ultra-log-concave probability measure on with =(1)/(0). Then, for any probability measure ν on which is absolutely continuous with respect to , and has a finite first moment, we have α_(W_1,|·|(ν,))≤ H(ν|). We follow the proof of <cit.>. Fix a probability measure ν on which is absolutely continuous with respect to , and has a finite first moment. By <cit.>, H(ν|)=inf_Q{H(Q|): Q∘_^-1=ν}. Hence, by (<ref>), it suffices to show that α_(W_1,|·|(ν,))≤inf_Q{α_(W_1,d(Q,)):Q∘_^-1=ν}. Since α_ is monotonic, (<ref>) is equivalent to W_1,|·|(ν,)≤inf_Q{W_1,d(Q,):Q∘_^-1=ν}. To establish (<ref>), note that by Corollary <ref>, and <cit.>, we have that _:(,d)→ (,|·|) is 1-Lipschitz. Fix Q such that Q∘_^-1=ν, and let Π be the coupling attaining the minimum in the definition of W_1,d(Q,). Note that Π∘_^-1 is a coupling of (Q∘_^-1,∘_^-1)=(ν,). Hence, W_1,|·|(ν,) ≤∫_× |_(ω)-_(ω')| Π(ω,ω')≤∫_× d(ω,ω') Π(ω,ω')=W_1,d(Q,), which establishes (<ref>) by taking the infimum over Q. It is possible in principle to improve the constant to as follows. Instead of working with =[0,]× [0,], we can work with :={(t,z)∈ [0,]×_≥ 0: z≤_-t(1)/_-t(0)}, since _t≤_-t(1)/_-t(0) -a.s. (cf. (<ref>)). Then the volume of is ∫_0^_-t(1)/_-t(0)=, where the equality holds by Corollary <ref>(3). This approach, however, requires a modification of the formulation we used in this paper, with minor benefits, so we do not pursue this improvement. amsplain0
http://arxiv.org/abs/2407.02676v2
20240702212914
Covariate-dependent hierarchical Dirichlet process
[ "Huizi Zhang", "Sara Wade", "Natalia Bochkina" ]
stat.ME
[ "stat.ME" ]
Covariate-dependent hierarchical Dirichlet process Huizi Zhang H.Zhang-144@sms.ed.ac.uk School of Mathematics University of Edinburgh Edinburgh, EH9 3FD, UK Sara Wade sara.wade@ed.ac.uk School of Mathematics University of Edinburgh Edinburgh, EH9 3FD, UK Natalia Bochkina N.Bochkina@ed.ac.uk School of Mathematics University of Edinburgh Edinburgh, EH9 3FD, UK July 8, 2024 ============================================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT The intricacies inherent in contemporary real datasets demand more advanced statistical models to effectively address complex challenges. In this article we delve into problems related to identifying clusters across related groups, when additional covariate information is available. We formulate a novel Bayesian nonparametric approach based on mixture models, integrating ideas from the hierarchical Dirichlet process and “single-atoms" dependent Dirichlet process. The proposed method exhibits exceptional generality and flexibility, accommodating both continuous and discrete covariates through the utilization of appropriate kernel functions. We construct a robust and efficient Markov chain Monte Carlo (MCMC) algorithm involving data augmentation to tackle the intractable normalized weights. The versatility of the proposed model extends our capability to discern the relationship between covariates and clusters. Through testing on both simulated and real-world datasets, our model demonstrates its capacity to identify meaningful clusters across groups, providing valuable insights for a spectrum of applications. clustering, hierarchical model, dependent mixture model, nonparametric Bayesian statistics, Markov chain Monte Carlo § INTRODUCTION The escalating volume and complexity of real datasets have posed formidable challenges in statistical analysis. Unstructured data is widespread across various domains, including biological applications such as single-cell RNA sequencing (scRNA-seq), and information retrieval scenarios dealing with raw documents from multiple corpora. Typically, a recurring and important objective in handling unstructured data is to uncover its inherent structure through clustering observations into groups, also known as unsupervised learning. Mixture models are widely adopted approaches for clustering, due to their interpretability and flexibility. These generative models usually assume a common distribution family for each data point, and observations from the same cluster share the same parameters in the likelihood. In contrast to parametric mixture models, Bayesian nonparametric (BNP) methods are gaining popularity as BNP allows for an infinite-dimensional parameter space, requiring fewer assumptions, thereby retaining greater flexibility. The Dirichlet process (DP) by <cit.> is a typical example of BNP priors, allowing the number of clusters to grow as the data size increases. Numerous DP-based extensions have been proposed to address challenges in various contexts. The dependent Dirichlet process (DDP) of <cit.> facilitates the incorporation of exogenous covariates into clustering. 
The hierarchical Dirichlet process (HDP) by <cit.> concentrates exclusively on categorical covariates, enabling clustering across related groups. There has been recent research on leveraging predictors in HDP. <cit.> develops supervised HDP for topic modelling that can predict continuous or categorical response associated with each document (group) using generalized linear models. The hierarchical Dirichlet scaling process <cit.> considers documents with observed labels, and topic proportions are modelled dependent on the distance between the latent locations of labels and topics. <cit.> extends HDP to dynamic HDP for time-evolving data, assuming that adjacent groups collected closer in time are more likely to share components, and new components can be added over time. <cit.> incorporates spatial/temporal information using a kernel logistic regression, based on the stick-breaking representation <cit.>. Similarly, <cit.> develops covariate-augmented nonparametric latent Dirichlet allocation, where covariates are included through a logistic stick-breaking process. <cit.> proposes hierarchical dependent Dirichlet process (HDDP), combining HDP and “single-weights” DDP. In this article, we propose a covariate-dependent hierarchical Dirichlet process (C-HDP), combining both hierarchical Dirichlet process and the “single-atoms” dependent Dirichlet process. External covariates can be flexibly incorporated to facilitate clustering across groups through the use of various kernel functions. The proposed method holds utility in various settings. For instance, biological researchers may be keen to understand how cellular latent time, an indicator of cell position in the developmental path, influences the identification of cell sub-populations from different experiment conditions (Figure <ref>). For efficient inference, we construct a novel Markov Chain Monte Carlo (MCMC) algorithm that employs latent variables to cope with the intractable normalized weights. We demonstrate that our model can capture the relationship between clusters and covariates, and identify reasonable clusters across groups in both simulated and real data. The paper is organized as follows. We commence by providing a review of DP and its extensions DDP and HDP in Section <ref>. Section <ref> outlines the definition of the covariate-dependent HDP, examples of common likelihood and kernel functions. The details of inference are presented in Section <ref>. In Section <ref>, we showcase the application of C-HDP to a real dataset on scRNA-seq. Section <ref> concludes the paper and discusses potential future directions. The complete MCMC algorithm and simulation study are provided in the Appendix. § REVIEW To formally define our proposed model, we first provide introduction to existing standard approaches in the Bayesian nonparametric literature. Dirichlet process A random probability measure P on a space Θ is said to follow a DP prior with baseline probability measure P_0 and concentration parameter α, denoted by P ∼(α, P_0), if for any finite partition { A_1,…,A_k} of Θ, (P(A_1),P(A_2),…,P(A_k)) ∼(α P_0(A_1), α P_0(A_2),…, α P_0(A_k)), where (α') denotes the Dirichlet distribution with concentration parameters α'. An important property of DP is the discrete nature of P. A random probability measure P ∼(α, P_0) can be written as a combination of weights and point mass, P(·)=∑_j=1^∞w_j δ_θ_j^*(·), where w_j are probability weights and δ_θ_j^* is the Dirac measure at θ_j^*. <cit.> derives the stick-breaking construction as follows. 
P(·) =∑_j=1^∞w_j δ_θ_j^*(·), w_1=v_1, w_j=v_j∏_l<j(1- v_l) for j>1, v_j i.i.d∼Β(1,α), where θ_j^* i.i.d∼P_0, and v_j is independent of θ_j^*. It can be shown that ∑_j=1^∞w_j=1 almost surely <cit.>. The discreteness of P implies there are ties among θ_1, …, θ_n, making DP a suitable prior in Bayesian mixture modelling. Moreover, it does not require the specification of the number of clusters k. Instead, k is data-driven and increases as the sample size gets large. Dependent Dirichlet process The random probability measure P_x is constructed following a similar stick-breaking representation to Eq (<ref>), with v_j(x), θ_j^*(x) and α(x). DP can be considered as a special case where weights and atoms are independent of the covariate. A common simplified DDP model is the “single-weights” DDP where the weights do not depend on x and are the same as in DP, whereas the atoms depend on the covariates, P_x(·)=∑_j=1^∞w_j δ_θ_j^*(x)(·). Similarly there is a “single-atoms” DDP for covariate-dependent weights only, P_x(·)=∑_j=1^∞w_j(x) δ_θ_j^*(·). Hierarchical Dirichlet process Hierarchical Dirichlet process <cit.> focuses exclusively on categorical covariates, allowing for clustering across related groups. For observations y_i,d for the i-th subject in the d-th dataset, and a density f parameterised by subject-specific parameter θ_i,d, HDP assumes the following hierarchical structure y_i,d|θ_i,dind∼ f(y_i,d|θ_i,d) , θ_i,d|P_d i.i.d∼ P_d, P_d|α,P i.i.d∼(α, P) , P | α_0, P_0 ∼(α_0, P_0), where another layer of DP prior is given to the global random probability measure P. Shared clusters are ensured by noticing that P is discrete with atoms given by the base measure P_0, and these atoms are also shared in P_d for each d. <cit.> provide the stick-breaking construction P_d=∑_j=1^∞p_j,dδ_θ_j^* , P =∑_j=1^∞p_jδ_θ_j^*, p_j,d=v_j,d∏_l<j(1-v_l,d) , v_j,d∼Β( α p_j, α( 1-∑_l=1^jp_l) ), p_j=v_j∏_l<j(1-v_l) , v_j Β(1, α_0), where θ_j^* i.i.d∼P_0 are cluster-specific parameters. This construction also shows that different measures P_d share the same atoms θ_j^* but with different cluster proportions p_j,d. <cit.> show that HDP can be approximated by the finite-dimensional HDP as P_d^J=∑_j=1^Jp_j,d^J δ_θ_j^*, P^J =∑_j=1^Jp_j^Jδ_θ_j^*, p^J_1,d,…, p^J_J,d |α, p^J_1,…, p^J_J ∼( α p^J_1,…, α p^J_J), p^J_1,…, p^J_J|α_0 ∼( α_0/J,…, α_0/J), and J is the truncation level. By introducing the latent allocation variables z_i,d, the finite HDP mixture model is y_i,d|z_i,d, {θ_j^*} _j=1^J ind∼ f(y_i,d|θ_z_i,d^*), z_i,d| p^J_1,d,…, p^J_J,d ind∼(p^J_1,d,…, p^J_J,d), where θ_j^* i.i.d∼ P_0, and (p^J_1,d,…, p^J_J,d) denotes the categorical distribution. § METHODOLOGY Real-world datasets often encompass various types of covariates for statistical modelling. In order to achieve clustering with information from external predictors, we construct a covariate-dependent HDP that borrows ideas from the “single-atoms” DDP and HDP. In HDP, it is shown that θ_i,d=θ_j^* with probability p_j,d, i.e., the probability of belonging to the j-th cluster is the same for all observations in dataset d. We propose to introduce dependence by defining the probability as a function of the covariate x_i,d, leading to P_d(x_i,d)=∑_j=1^∞p_j,d(x_i,d) δ_θ_j^*, in place of P_d in Eq (<ref>). 
Specifically, the covariate-dependent probabilities are defined as p_j,d(x_i,d)=p_j,dK(x_i,d|_j,d^*)/∑_k=1^∞ p_k,dK(x_i,d|_k,d^*), where p_j,d is the same as the stick-breaking construction (Eq (<ref>)), and K(x_i,d|_j,d^*) is a kernel function relying on kernel parameters _j,d^* and satisfies 0<K(x_i,d|_j,d^*)<1. This formulation of covariate-dependent probabilities is motivated by <cit.> where the dependent weights are provided for the normalized gamma process representation of the DP. A similar construction is given in <cit.> in the context of nonparametric regression problem, where p_j,d can be considered as the probability that an observation belongs to cluster j regardless of the covariate value, and the kernel represents how likely an observation from cluster j will take the value x_i,d. We remark that our C-HDP prior differs from the hierarchical dependent Dirichlet process prior in <cit.> which combines the “single-weights” DDP and HDP instead. In particular, the covariate x is introduced in the global measure P instead of dataset-specific DPs P_d, and therefore the influence of the covariate is the same across datasets, whilst the effect is allowed to be different in our C-HDP model. Similar to HDP, the finite-dimensional truncation for C-HDP is P_d^J(x_i,d) =∑_j=1^Jp_j,d^J(x_i,d) δ_θ_j^*, P^J =∑_j=1^Jp_j^Jδ_θ_j^*, where J is the truncation level and p_j,d^J(x_i,d) =q_j,dK(x_i,d|_j,d^*)/∑_k=1^J q_k,dK(x_i,d|_k,d^*), q_j,d ∼(α p_j^J, 1), p^J_1,…, p^J_J ∼( α_0/J,…, α_0/J). Note that q_j,d is not identifiable in this formulation. In fact, define w_j,d=q_j,d/∑_k=1^Jq_k,d, an alternative formulation of p_j,d^J(x_i,d) is p_j,d^J(x_i,d) =w_j,dK(x_i,d|_j,d^*)/∑_k=1^J w_k,dK(x_i,d|_k,d^*), w_1,d, …, w_J,d ∼(α p_1^J,…,α p_J^J). We provide this definition and relate it to the finite-dimensional HDP discussed in Eq (<ref>). It will be shown later that parametrization in terms of q_j,d is preferred due to the standard full conditional distribution in the case of Gibbs sampling. From <cit.> and <cit.>, it follows that p_j,d^J(x_i,d) → p_j,d(x_i,d) and P^J → P. §.§ Likelihood examples in mixture models Depending on the type of the data, different distributions can be selected for the likelihood. Below are some examples for commonly employed likelihood: Gaussian likelihood For a continuous response y_i,d=(y_i,1,d,…,y_i,G,d)^T ∈ℝ^G, a normal likelihood with cluster-specific mean _j^* and covariance _j^*: y_i,d|z_i,d=j, _j^*, _j^* ∼ (_j^*, _j^*). Vector autoregressive (VAR) model An extension of the normal likelihood to time-series data, based on a VAR model with lag 1 in the mean: y_i,d|y_i-1,d,z_i,d=j, _j^*, _j^*, _j^* ∼ (_j^*+_j^*y_i-1, _j^*) , where 𝐚_j ∈ℝ^G denotes the intercept, and 𝐁_j is a G × G matrix of the coefficients in VAR, both being cluster-specific. Negative-binomial likelihood For count data, a negative binomial likelihood with cluster-specific mean μ_j,g^* and dispersion ϕ_j,g^* in each dimension g: y_i,g,d|z_i,d=j, μ_j,g^*, ϕ_j,g^* ∼(μ_j,g^*,ϕ_j,g^*), In all cases, we have z_i,d(p_1,d^J(x_i,d),…, p_J,d^J(x_i,d)). §.§ Examples of kernels for dependent weights Below we provide a few examples of the kernel functions that will be used in the paper. Gaussian kernel K(x_i,d|_j,d^*)=exp( -( x_i,d-) ^2/2 ) where _j,d^*=(, ). The parameters and can be interpreted as the centre and dispersion of the covariate in each cluster j. Periodic kernel K(x_i,d|_j,d^*)=exp( -2/sin^2( x_i,d-/) ) where _j,d^*=(, , ). Figure <ref> shows the periodic kernel under different parameter values. 
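To make the weight construction concrete, the following is a minimal sketch (ours; NumPy assumed, all variable names illustrative) of the Gaussian kernel and of the normalized covariate-dependent weights p_{j,d}^J(x); a periodic or categorical kernel could be substituted through the same interface.

```python
import numpy as np

def gaussian_kernel(x, t_star, sigma2_star):
    # K(x | t*, sigma*^2) = exp(-(x - t*)^2 / (2 sigma*^2))
    return np.exp(-(x - t_star) ** 2 / (2.0 * sigma2_star))

def covariate_weights(x, q, t_star, sigma2_star, kernel=gaussian_kernel):
    """p_{j,d}^J(x) = q_{j,d} K(x | theta*_{j,d}) / sum_k q_{k,d} K(x | theta*_{k,d})."""
    unnorm = q * kernel(x, t_star, sigma2_star)   # length-J vector
    return unnorm / unnorm.sum()

# toy example with J = 3 components in one dataset
q = np.array([0.9, 1.4, 0.7])                     # e.g. Gamma(alpha * p_j^J, 1) draws
t_star = np.array([0.2, 0.5, 0.8])                # kernel centres
sigma2_star = np.array([0.05, 0.02, 0.10])        # kernel variances
print(covariate_weights(0.45, q, t_star, sigma2_star))   # nonnegative, sums to 1
```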
The parameter represents the value that maximizes the kernel and changing will shift the kernel. The period is determined by and is related to the minimum value of the kernel. In addition, the kernel becomes more spiky as decreases. For Gaussian and periodic kernels, as →∞, the C-HDP reduces to HDP. Categorical kernel K(x_i,d|_j,d^*)=∏_l=1^L(ρ_j,d,l^*)^(x_i,d=l) for x_i,d∈{1,…,L } where _j,d^*=(ρ_j,d,1^*,…,ρ_j,d,L^*) and the probabilities ρ_j,d,l^* sum to 1. The choice of these kernels makes the denominator in Eq (<ref>) finite, ensuring P_d(x_i,d) is a valid probability measure. §.§ Suggested priors for kernel parameters in C-HDP For efficient MCMC sampling as demonstrated later, a hierarchical prior for and is suggested for a Gaussian kernel: (r_j, s^2), r_j (μ_r, σ_r^2), s^2∼(η_1, η_2), (h_j, m^2), h_j (μ_h, σ_h^2), m^2 ∼(κ_1, κ_2), where is the inverse-gamma distribution, and denotes the log-normal distribution. The prior means (r_j, h_j) for and are chosen to be cluster-specific, and (r_j, h_j) are given normal hyper-priors with global means (μ_r,μ_h) to allow for borrowing information, which is similar to the hierarchical prior for q_j,d in Eq (<ref>). Regarding a periodic kernel, the following priors are recommended: (-π/2,π/2), (r_j, s^2), (a_j,b_j), a_j=2+h_j^2/m^2, b_j=h_j^2+h_j^3/m^2, h_j (μ_h, σ_h^2), where the shape a_j and scale b_j are modelled as functions of the mean h_j and variance m^2 of the inverse-gamma prior. The hyper-priors for r_j, s^2, m^2 are of the same form as Eq (<ref>). Note that is restricted within one period (π) for identifiability. § INFERENCE In this section we describe a Gibbs sampling scheme for C-HDP mixture model. Gibbs sampling can be applied to draw posterior samples for parameters with full conditionals of a standard form. For non-standard full conditional densities, adaptive Metropolis-Hastings (AMH) can be used <cit.>. Below we highlight the key steps in constructing the Gibbs sampler. §.§ A data augmentation trick For mixture models, the complete data likelihood is widely employed to allow for efficient inference, which is usually of the form f(y_i,d,z_i,d=j|_1:J,d, _j^*, _1:J,d^*, x_i,d)=q_j,dK(x_i,d|_j,d^*)/∑_k=1^J q_k,dK(x_i,d|_k,d^*)× f(y_i,d| _j^*). However, the intractable sum in the denominator makes it difficult to obtain standard full conditional densities for q_j,d and kernel parameters. We propose to use a data augmentation trick, introducing a latent variable ξ_i,d∈ (0, +∞), and the augmented data likelihood is f(y_i,d,ξ_i,d,z_i,d=j|_1:J,d, _j^*, _1:J,d^*, x_i,d)= exp( -ξ_i,d∑_j=1^J q_j,dK(x_i,d|_j,d^*)) × q_j,dK(x_i,d|_j,d^*) × f(y_i,d| _j^*). Using the fact that ∫_0^∞exp(-ξλ) dξ=1/λ, we can restore the above complete data likelihood by integrating out ξ_i,d. It is worth noticing that ξ_i,d does not have a physical interpretation, unlike z_i,d. Define N_j,d as the number of observations in component j in dataset d and C_d the size of data d. The augmented data likelihood yields standard full conditional distributions to enable sampling both q_j,d and ξ_i,d effectively: π(q_j,d | … ) ∝( q_j,d) ^N_j,d×exp( -q_j,d∑_i=1^C_dξ_i,dK(x_i,d|_j,d^*)) ×( q_j,d) ^α p_j^J -1exp(-q_j,d) ∝( q_j,d) ^N_j,d+α p_j^J -1×exp( -q_j,d[ 1+∑_i=1^C_dξ_i,dK(x_i,d|_j,d^*)] ), ⇒ q_j,d | …∼( N_j,d+α p_j^J ,1+∑_i=1^C_dξ_i,dK(x_i,d|_j,d^*)). π(ξ_i,d | …) ∝exp( -ξ_i,d∑_j=1^J q_j,dK(x_i,d|_j,d^*)). ⇒ ξ_i,d | …∼( 1 ,∑_j=1^J q_j,dK(x_i,d|_j,d^*)). 
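In code, these two conjugate updates take a simple form. The sketch below is ours (NumPy assumed), with K denoting the C_d × J matrix of kernel evaluations K(x_i,d | θ*_{j,d}) for one dataset.

```python
import numpy as np

rng = np.random.default_rng(1)

def update_q(z, K, alpha, p, xi):
    """Gamma full conditionals for q_{j,d} in one dataset d.

    z: (C,) allocations in {0,...,J-1};  K: (C, J) kernel values K(x_i | theta*_j);
    alpha: concentration;  p: (J,) top-level weights p_j^J;  xi: (C,) latent variables."""
    J = K.shape[1]
    N = np.bincount(z, minlength=J)                  # N_{j,d}, counts per component
    shape = N + alpha * p
    rate = 1.0 + (xi[:, None] * K).sum(axis=0)       # 1 + sum_i xi_i K(x_i | theta*_j)
    return rng.gamma(shape, 1.0 / rate)              # NumPy's Gamma uses shape/scale

def update_xi(q, K):
    """Gamma(1, .) (i.e. exponential) full conditionals for xi_{i,d}."""
    rate = K @ q                                     # sum_j q_j K(x_i | theta*_j)
    return rng.exponential(1.0 / rate)

# toy usage: C = 5 observations, J = 3 components
C, J = 5, 3
K = np.exp(-rng.uniform(size=(C, J)))
z = rng.integers(0, J, size=C)
xi = update_xi(rng.gamma(1.0, 1.0, size=J), K)
q = update_q(z, K, alpha=1.0, p=np.ones(J) / J, xi=xi)
```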
In addition to the intricate denominator, the presence of the kernel parameters inside the exponential term in Eq (<ref>) also poses challenges. For kernel parameters, we introduce another latent variable u_i,j,d∈ (0,1) to facilitate MCMC sampling. The update of _j,d^* and _j^* depends on the choice of the kernel and likelihood. If conjugate priors are chosen, the full conditional densities of _j,d^* and _j^* will have a closed form. Consider the Gaussian kernel with priors given in Eq (<ref>). The full conditional distribution for is π( | …) ∝ ∏_i: z_i,d=j K(x_i,d|_j,d^*) ×∏_i=1^C_dexp( -ξ_i,dq_j,dK(x_i,d|_j,d^*)) ×( | r_j, s^2). With the introduction of u_i,j,d∈ (0,1), the above can be written as π( | …) ∝ ∏_i: z_i,d=j K(x_i,d|_j,d^*) ×∏_i=1^C_d( u_i,j,d< M_i,j,d) ×(|r_j, s^2), where M_i,j,d=exp( -ξ_i,dq_j,dK(x_i,d|_j,d^*)). It can be shown that the full conditional of the latent variable is u_i,j,d|…∼(0,M_i,j,d), and for the kernel parameter it is a truncated normal distribution: π(|…) ∝(|r̂_j,d,ŝ_j,d^2) ×(∈ A_j,d), where ŝ_j,d^2=( 1/s^2+N_j,d/)^-1, r̂_j,d=r_j/s^2+∑_i: z_i,d=jx_i,d//1/s^2+N_j,d/, and the truncation region is of the form A_j,d=⋂_i: -logu_i,j,d < ξ_i,dq_j,d A_i,j,d, where A_i,j,d=(-∞, x_i,d-√(-2log[ -logu_i,j,d/ξ_i,dq_j,d] )) ⋃( t_i,d+√(-2log[ -logu_i,j,d/ξ_i,dq_j,d] ) , +∞). The full derivation and details of the Gibbs sampling algorithm are presented in Appendix. §.§ Clustering Given posterior samples of the allocations z_i,d, an optimal clustering is obtained that minimizes the posterior expected variation of information (VI) <cit.>. Due to label switching <cit.>, the posterior inference of the other parameters is based on a post-processing step where allocations are fixed to the optimal one. §.§ Covariate-dependent predictive quantities of interest For likelihood associated with a mean parameter with clustering property, such as μ_j^* in Section <ref>, the mean conditional on the covariate, kernel parameters, ^* and is 𝔼(y|x,, ^*, ^*)=∑_j=1^Jp_j,d^J (x) μ_j^*, where , ^*, ^* denote a collection of variables for q_j,d, μ_j^* and _j,d^*, respectively. Therefore, we can obtain covariate-dependent mean of the posterior predictive distribution as 𝔼(y|x, , ) = ∫y·π(y|x, , ) dy = ∫y∫π(y |Θ, x) π(Θ| , ) dΘ dy = ∫∫y·π(y |Θ, x) dy×π(Θ| , ) dΘ = ∫𝔼(y|x,, ^*, ^*) ×π(Θ| , ) dΘ, where x and y denote the new data, and denote the observed data, and Θ represents all the unknown parameters. The mean of the posterior predictive distribution can be approximated by the average of MCMC samples: 𝔼(y|x, , ) ≈1/L∑_l=1^L𝔼(y|x,^(l), ^*^(l), ^*^(l)), where ^(l), ^*^(l), ^*^(l) denote the l-th MCMC sample. § APPLICATION TO SINGLE-CELL RNA SEQUENCING DATA PAX6 We demonstrate the application of the C-HDP prior on single-cell RNA sequencing data, using a Gaussian kernel. The experimental datasets Pax6 <cit.> were obtained to study the influence of the transcription factor, Pax6, on the brain development in the embryonic cells from mice. Pax6 has been shown to affect a forebrain organizer and is involved in many regional gene expression pattern defects <cit.>. There are two datasets of total mRNA counts for the control group (HET) and mutant group (HOM). Both groups have counts for the same set of genes. The control group has Pax6 knocked out in one strand of DNA, while the mutant group has Pax6 removed in both strands. Each data d (d=1,2) contains the mRNA counts y_c,g,d for gene g (g=1,…,G) in cell c (c=1,…,C_d). 
After pre-processing in <cit.>, the HET and HOM datasets contain C_1 = 3096 and C_2 = 5282 cells, both with G = 2529 genes. The covariate of interest is the cell-specific latent time t_c,d∈ [0,1] introduced below, and the goal is to find unique and shared clusters with similar expression patterns across two groups, based on the total mRNA count matrices, while incorporating the information of the latent time. A typical drawback of scRNA-seq data is that the cells are killed during measurements being taken, and therefore the resulting scRNA-seq dataset only provides a static snapshot of cellular states. Recently, to study the dynamic information from cells, <cit.> proposed RNA velocity and latent time, derived from a per-gene model based on the amount of unspliced mRNAs and spliced mRNAs. The obtained latent time can imply the cellular position in the biological process, which may be informative to clustering the cells. For each group, the abundance of unspliced and spliced counts are obtained from velocyto pipeline <cit.> and the latent time t_c,d is computed from a generalized RNA velocity model <cit.>. §.§ The model for Pax6 The model for clustering the Pax6 data extends the work in <cit.>. <cit.> employed the HDP prior to cluster the same Pax6 datasets, where the clustering model is built upon bayNorm <cit.> that addresses the problem of normalization, imputation and batch effect correction in an integrated manner. The observed count y_c,g,d is assumed to follow a binomial distribution given the latent true count y_c,g,d^0, with cell-specific capture efficiency β_c,d y_c,g,d | y^0_c,g,d, β_c,d∼( y^0_c,g,d, β_c,d). The binomial distribution accounts for the case where partial true counts are observed. The latent counts follow a negative-binomial distribution accounting for over-dispersion: y^0_c,g,d | μ_c,g,d, ϕ_c,g,d∼( μ_c,g,d, ϕ_c,g,d ), with mean expression μ_c,g,d and dispersion ϕ_c,g,d that are both specific to each gene and cell. The latent counts can be integrated out to obtain: y_c,g,d | μ_c,g,d, ϕ_c,g,d, β_c,d∼( μ_c,g,dβ_c,d, ϕ_c,g,d ), where it is noticed that μ and β are not identifiable while only their product is. An informative prior for β_c,d is applied to mitigate this problem <cit.>. §.§.§ Kernel A Gaussian kernel is applied with kernel parameters _j,d^*=(, ). The parameters and represent the centre and variability of the latent time in each cluster j. The priors are defined as in Eq (<ref>). §.§.§ Base measure The mean and dispersion are modelled from the C-HDP prior (Eq (<ref>)), (μ_c,d, ϕ_c,d) |P^J_d(t_c,d) ∼ P^J_d(t_c,d), where μ_c,d= (μ_c,1,d,…, μ_c,G,d) and ϕ_c,d= (ϕ_c,1,d,…, ϕ_c,G,d) are the mean expression and dispersion for the c-th cell in dataset d across all genes. For the base measure P_0, <cit.> choose to model the relationship between μ_j,g^* and ϕ_j,g^* (cluster-specific parameters for gene g) as follows μ^*_j,g i.i.d∼(0, α_μ^2), ϕ^*_j,g| μ^*_j,g ( b_0 +b_1 log(μ_j,g^*), α_ϕ^2). The linear relationship between the logarithmic mean expression and dispersion has been observed in <cit.>, <cit.> and <cit.>. The value of α_μ^2 can be set using the empirical estimates for the mean parameters from bayNrom. The mean-dispersion parameters =(b_0,b_1)^T and α_ϕ^2 have hyper-priors as follows |α_ϕ^2 ∼(_b, α_ϕ^2 V_b), α_ϕ^2 ∼(ν_1, ν_2), where by default V_b=, and we use the estimated mean and dispersion parameters from bayNorm to determine _b, ν_1 and ν_2. 
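For intuition, the observation model above can be simulated directly: a latent negative-binomial count (mean μ, dispersion φ, generated through the usual Gamma-Poisson representation) is thinned binomially by the capture efficiency β, which marginally gives a negative binomial with mean μβ and the same dispersion. The sketch below is ours and purely illustrative; the parameter values are not taken from the data, except that the capture efficiency is set near the global estimate of 0.06.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_observed_counts(mu, phi, beta):
    """Draw y ~ Binomial(y0, beta) with y0 ~ NB(mean=mu, dispersion=phi), via Gamma-Poisson.

    Marginally, y is negative binomial with mean mu * beta and dispersion phi."""
    lam = rng.gamma(shape=phi, scale=mu / phi)   # Gamma mixing: E[lam] = mu, Var[lam] = mu^2 / phi
    y0 = rng.poisson(lam)                        # latent "true" count
    return rng.binomial(y0, beta), y0            # binomial thinning by capture efficiency

# toy example: one cell, three genes, capture efficiency 0.06
y, y0 = sample_observed_counts(mu=np.array([5.0, 20.0, 1.0]),
                               phi=np.array([2.0, 1.0, 0.5]),
                               beta=0.06)
```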
§.§.§ Capture efficiencies β_c,d The prior for β_c,d is extended from <cit.> β_c,d Β(a^β_d, b^β_d), The values of a^β_d, b^β_d are based on the empirical estimates from bayNorm. To avoid bimodal and exponentially decaying (increasing) shape of Beta prior, we set a^β_d, b^β_d>1. For identifiability of β_c,d, an informative prior is used, where the mean is specified to be an estimate of global mean capture efficiency across cells (0.06) <cit.>. §.§.§ Concentration parameters α, α_0 The weakly informative priors are α∼(1,1), α_0 ∼(1,1). If prior information on the number of clusters is available, we can use this information to set the hyper-parameters. A simulation study for the model is given in the Appendix. §.§ Results on the real data Pax6 In practice, it is common to run multiple chains with a large number of iterations to account for sensitivity to different initial values and to ensure convergence. Nevertheless, in the case of high dimensional data, chains can easily get stuck into local posterior modes even after sufficiently long time. To overcome such problems and reduce computational costs, <cit.> develop a general method to exploit posterior distribution of data partitions, based on an ensemble of Bayesian clustering results. The method does not require the chain to reach convergence and hence is expected to relieve computational burden. Consensus clustering simply runs large numbers of chains with a small number of iterations. The key step is to choose a suitable number of chains W and the length of the chain D, such that the posterior similarity matrix (PSM) computed from the D-th iteration in W chains is stable enough. <cit.> propose a heuristic method making use of the elbow plot, by plotting the mean absolute difference (MAD) against candidate D or W, and lower MAD is favored (see Appendix). §.§.§ Clustering With a truncation level of J=15, tuning parameters W=100 and D=1000 (evidence in Figure <ref>), 10 clusters are identified from VI, shared in both HET and HOM. Figure <ref> shows the posterior similarity matrix, where there is still some uncertainty to further split the clusters. The size of each cluster is provided in the left panel of Figure <ref>, with cluster 1 having the largest number of cells. We also show the proportions of HET and HOM cells in each cluster (right panel), and find under-represented/over-represented cluster if the proportion of HET (or HOM) is less/greater than the overall proportion. In this case, clusters 2, 4, 6, 8 are found to be over-represented in HOM, clusters 3, 7, 9 are under-represented in HOM, and the rest clusters show relatively stable proportions. For the post-processing step, the total number of MCMC iteration is 28000. We use a burn-in of 20000 iterations and a thinning of 4, leading to 2000 samples in the end. Figure <ref> displays the first principal component computed from the observed gene expression matrix against latent time, with each cell colored by the posterior probability of belonging to a specific cluster. It is worth noting that some cells have posterior probabilities between 0.25-0.75 (dark blue), suggesting uncertainty in cell allocations. Additionally, some clusters appear to be well-separated by the latent time, e.g. cluster 3 and 9 in HET. §.§.§ Marker genes Using posterior samples from the post-processing step, we follow the definition in <cit.> to identify globally differentially expressed genes (DE) based on mean parameters μ_j,g^* and differentially dispersed genes (DD) based on dispersion parameters ϕ_j,g^*. 
DE genes are detected based on the posterior tail probabilities that the absolute value of the log-fold change (LFC) of the mean expression between two clusters j and j' is greater than a threshold τ_0. Formally, this probability is defined as P_g (j,j') = Pr( | log(μ^*_j,g/μ^*_j',g) | > τ_0 | , ), where ={ z_c,d}_c=1,d=1^C_d,D , ={ y_c,g,d} _c=1,g=1,d=1^C_d,G,D. Global DE genes are found by considering the maximum posterior tail probabilities across all pairwise clusters, P_g^* = max_(j,j') P_g (j,j'). Then genes with P_g^* greater than a threshold α_M are identified as global DE genes. This threshold is set to to control the expected false discovery rate (EFDR) to 0.05 <cit.>. Intuitively, global DE genes have mean expressions that are different between at least two clusters. Global DD genes are detected in a similar way, based on the dispersion parameters to compute maximum tail probabilities L_g^*. The thresholds for DE and for DD genes are both set to 2.5. There are 50.81% and 20.52% of the total genes classified as global DE and global DD genes. Figure <ref> displays heatmaps of the estimated mean and dispersion parameters for marker genes in all 10 clusters. Cluster 3 exhibits distinct mean expression patterns compared to the other clusters, with global DE genes showing higher expression levels. On the other hand, global DD genes in cluster 7 have smaller dispersion levels. §.§.§ Latent counts Following from Section <ref>, we compute the mean of the latent count for a cell c with unknown cluster membership, conditional on time, mean expressions, and kernel parameters: 𝔼(y_c,g,d^0|t_c,d=t,, ^*, ^*)= ∑_j=1^Jp_j,d^J (t) μ_j,g^*, which produces an estimate of the mean expression as a function of time. Figure <ref> displays the posterior mean of the mean latent count against the latent time for the top 20 global DE genes (in terms of maximum tail probabilities) for each dataset, based on the post-processing step. It is evident that the means for the top global DE genes change over time, and some genes, such as Rgs20 and Mcm3, share similar trends. The 95% highest posterior density (HPD) interval is relatively smaller in HOM. Additionally, <cit.> and <cit.> provide posterior mean of the latent counts given the allocation variables, capture efficiencies and unique parameters 𝔼[y_c,g,d^0 | y_c,g,d, z_c,d = j,β_c,d, μ^*_j,g, ϕ^*_j,g] = y_c,g,dμ^*_j,g + ϕ^*_j,g/μ^*_j,gβ_c,d+ ϕ^*_j,g +μ^*_j,gϕ^*_j,g(1- β_c,d)/μ^*_j,gβ_c,d+ ϕ^*_j,g, which can be used to approximate the posterior mean of latent counts as 𝔼[y_c,g,d^0 | ] ≈1/L∑_l=1^L [y_c,g,d^0 | y_c,g,d, z_c,d^(l) = j,β_c,d^(l), μ^* (l)_j,g, ϕ^* (l)_j,g]. Figure <ref> shows the t-SNE <cit.> plots for the observed counts and estimated latent counts from Eq (<ref>) using samples in the post-processing step. From the observed counts, some clusters are already quite separated, such as the purple and dark green clusters. The separation is much more apparent in the latent counts. §.§.§ Time-dependent probabilities For time-dependent probabilities, due to label switching, our inference is based on the post-processing step. Figure <ref> shows 100 posterior samples of p_j,d^J(t) for each cluster in each dataset. Compared to HOM, its uncertainty in HET is larger, which may be attributed to the smaller data size of HET. Additionally, for some clusters such as cluster 1 in HET, the probabilities appear to be bimodal, suggesting that cells with different latent time may still have relatively high probabilities of belonging to the same cluster. 
Notably, cluster 3 and cluster 9 in HET have almost non-overlapping support for latent time, where probabilities are positive. This observation aligns with our previous discussion in Figure <ref>. Finally, posterior predictive checks have been conducted <cit.> based on replicate data, and there is no strong disagreement between the data and model (see Appendix). § CONCLUSION In this paper we have developed a covariate-dependent hierarchical Dirichlet process prior to flexibly integrate external covariates into clustering across related groups, combining the strengths from HDP and “single-atoms” DDP. The method has been applied to a real dataset on single-cell RNA sequencing with the use of a Gaussian kernel. The results demonstrate that our C-HDP prior yields meaningful clusters for both datasets. The identified clusters reveal separation in the lower dimensional embeddings of the scRNA-seq data. In particular, the covariate-dependent probabilities enhance our understanding of the influence of external covariates on clustering. Further, we provide estimation of the latent counts as a function of the covariate (time) in the scRNA-seq data. However, while we have constructed an efficient Gibbs sampling algorithm for posterior inference, this algorithm may still face the dilemma of getting trapped in the local modes and a large number of iteration is needed to reach convergence. The problem is even more severe for high-dimensional scRNA-seq data. Alternative methods will be considered in the future, and one option is posterior bootstrap <cit.> suitable for multimodal posteriors. The method accounts for model misspecification and generates independent samples from the nonparametric posterior with exact inference. It admits parallel Monte Carlo sampling scheme for faster computation than conventional MCMC algorithm. Finally, it is worth extending the model to encompass covariate-dependent atoms as well, enhancing its applicability to more complex datasets. 0.2in § POSTERIOR INFERENCE FOR PAX6 Let _j,d^*=(, ) denote the parameters in the Gaussian kernel. The complete model is as follows: y_c,g,d | z_c,d=j, μ_j,g^*, ϕ_j,g^*, β_c,d ( μ_j,g^* β_c,d, ϕ_j,g^* ), z_c,d | p_1,d^J(t_c,d), …, p_J,d^J(t_c,d) (p_1,d^J(t_c,d),…, p_J,d^J(t_c,d)), p_j,d^J(t_c,d) =q_j,dK(t_c,d|_j,d^*)/∑_k=1^J q_k,dK(t_c,d|_k,d^*), q_j,d (α p_j^J, 1), p^J_1,…, p^J_J ∼( α_0/J,…, α_0/J), (r_j, s^2), r_j (μ_r, σ_r^2), s^2 ∼(η_1, η_2), (h_j, m^2), h_j (μ_h, σ_h^2), m^2 ∼(κ_1, κ_2), μ^*_j,g (0, α_μ^2), ϕ^*_j,g| μ^*_j,g ( b_0 +b_1 log(μ_j,g^*), α_ϕ^2), β_c,d Β(a^β_d, b^β_d), |α_ϕ^2 ∼(_b, α_ϕ^2 V_b), α_ϕ^2 ∼(ν_1, ν_2), α ∼(1,1), α_0 ∼(1,1). Define ={ z_c,d}_c=1,d=1^C_d,D , ={ y_c,g,d} _c=1,g=1,d=1^C_d,G,D,={ t_c,d}_c=1,d=1^C_d,D, ={ q_j,d} _j=1,d=1^J,D, ^J=(p_1^J,…,p_J^J), μ_j^*=(μ_j,1^*,…,μ_j,G^*), ϕ_j^*=(ϕ_j,1^*,…,ϕ_j,G^*), ={β_c,d}_c=1,d=1^C_d,D,={ξ_c,d}_c=1,d=1^C_d,D,=(b_0,b_1)^T, ^*={}_j=1,d=1^J,D,^*^2={}_j=1,d=1^J,D , =(r_1,…,r_J), =(h_1,…,h_J). 
The posterior distribution is π(,,^J, ^*, ^*, , ,α, α_0, , α_ϕ^2, ^*,^*^2, , s^2,, m^2|,) ∝ ∏_j=1^J ∏_(c,d): z_c,d=j∏_g=1^G (y_c,d,g | μ_j,g^* β_c,d, ϕ_j,g^* ) ×∏_j=1^J ∏_d=1^D ∏_c: z_c,d=j K(t_c,d|_j,d^*) ×∏_j=1^J ∏_d=1^D ( q_j,d) ^N_j,d ×∏_j=1^J ∏_d=1^D ∏_c=1^C_dexp( -ξ_c,dq_j,dK(t_c,d|_j,d^*)) ×∏_j=1^J ∏_d=1^D (q_j,d | α p_j^J,1) ×(^J | α_0/J,…, α_0/J) ×∏_j=1^J ∏_g=1^G [ ( μ^*_j,g | 0, α_μ^2) ×( ϕ^*_j,g | b_0 +b_1 log(μ_j,g^*) , α_ϕ^2)] ×∏_d=1^D ∏_c=1^C_dΒ(β_c,d | a^β_d, b^β_d) ×(α | 1,1) ×(α_0 | 1,1) ×( | m_b, α_ϕ^2 V_b)×(α_ϕ^2 | ν_1, ν_2) ×∏_j=1^J ∏_d=1^D [ ( | r_j, s^2)×( | h_j,m^2)] ×∏_j=1^J [ (r_j | μ_r,σ_r^2)×(h_j| μ_h, σ_h^2)] ×(s^2 | η_1,η_2) ×(m^2 | κ_1,κ_2), where N_j,d=∑_c=1^C_d(z_c,d=j) is the number of cells in component j in dataset d, (·) is the indicator function that takes the value 1 if the condition inside the bracket holds, and is 0 otherwise. The first three lines come from the augmented data likelihood. The MCMC algorithm (Gibbs sampling) iteratively samples from the full conditional distributions of (blocked) parameters. For standard full conditional densities, we can draw samples directly, while adaptive Metropolis-Hastings (AMH) is used for non-standard forms. Denote C=(C_1,…,C_D). The time complexity for each parameter is provided below. * dataset-specific parameters for cluster likelihood : ((C)J). * latent cell-specific parameters : ((C)J). * latent parameters { u_c,j,d} _c=1,j=1,d=1^C_d,J,D to aid sampling kernel parameters: ((C)J). * kernel parameters (mean) ^*: ((C)J). * kernel parameters (variance) ^*^2: ((C)J). * concentration parameter α: (JD). * concentration parameter α_0: (J). * allocation variables : ((C)JG). * component probabilities ^J: (JD). * mean-dispersion parameters , α_ϕ^2: (JG). * cluster-specific parameters _1:J,1:G^*, _1:J,1:G^*: ((C)JG). * capture efficiencies : ((C)J). * hyper-parameters , s^2, , m^2 for the kernel parameters: (JD). §.§ Dataset-specific parameters for cluster likelihood q_j,d For each j and d, the full conditional distribution is π(q_j,d | { z_c,d} _c=1^C_d ,α,p_j^J, {ξ_c,d} _c=1^C_d , { t_c,d} _c=1^C_d , , ) ∝ ( q_j,d) ^N_j,d×exp( -q_j,d∑_c=1^C_dξ_c,dK(t_c,d|_j,d^*)) ×( q_j,d) ^α p_j^J -1exp(-q_j,d) ∝ ( q_j,d) ^N_j,d+α p_j^J -1×exp( -q_j,d[ 1+∑_c=1^C_dξ_c,dK(t_c,d|_j,d^*)] ), i.e., q_j,d | …∼( N_j,d+α p_j^J ,1+∑_c=1^C_dξ_c,dK(t_c,d|_j,d^*)). §.§ Latent cell-specific parameters ξ_c,d For each c and d, the full conditional distribution is π(ξ_c,d | _1:J,d, t_c,d, _1:J,d^*, _1:J,d^*^2) ∝exp( -ξ_c,d∑_j=1^J q_j,dK(t_c,d|_j,d^*)), i.e., ξ_c,d | …∼( 1 ,∑_j=1^J q_j,dK(t_c,d|_j,d^*)). §.§ Kernel parameters and The joint full conditional distribution for ^* and ^*^2 is π(^*, ^*^2 | , , , , , s^2, , m^2) ∝ ∏_j=1^J ∏_d=1^D ∏_c: z_c,d=j K(t_c,d|_j,d^*) ×∏_j=1^J ∏_d=1^D ∏_c=1^C_dexp( -ξ_c,dq_j,dK(t_c,d|_j,d^*)) ×∏_j=1^J ∏_d=1^D [ ( | r_j, s^2)×( | h_j,m^2)]. Due to the presence of the exponential term, it is impossible to obtain standard distributions. We introduce latent variables 𝐮={ u_c,j,d} _c=1,j=1,d=1^C_d,J,D∈ (0,1). Furthermore, for simplicity, we write M_c,j,d=exp( -ξ_c,dq_j,dK(t_c,d|_j,d^*)) and the joint full conditional distribution becomes π(^*, ^*^2, 𝐮|…)∝ ∏_j=1^J ∏_d=1^D ∏_c: z_c,d=j K(t_c,d|_j,d^*) ×∏_j=1^J ∏_d=1^D∏_c=1^C_d𝕀( u_c,j,d< M_c,j,d) ×∏_j=1^J ∏_d=1^D [ (|r_j, s^2)×(|h_j,m^2)]. By integrating over u_c,j,d on (0,1), we can obtain the previous joint full conditional distribution. Now we also need to sample u_c,j,d in addition to and . 
§.§.§ Latent parameters u_c,j,d For each c,j and d, the full conditional distribution is π(u_c,j,d|ξ_c,d, q_j,d, t_c,d, , )∝𝕀( u_c,j,d< M_c,j,d), i.e., u_c,j,d|…∼(0,exp( -ξ_c,dq_j,dK(t_c,d|_j,d^*)) ). §.§.§ Mean parameters For each j and d, the full conditional distribution is π(|r_j, s^2, { z_c,d} _c=1^C_d, {ξ_c,d} _c=1^C_d, { u_c,j,d} _c=1^C_d, { t_c,d} _c=1^C_d, q_j,d, ) ∝ ∏_c: z_c,d=j K(t_c,d|_j,d^*) ×(|r_j, s^2) ×𝕀(∈ A_j,d). Let I_j,d={ c: z_c,d=j}. The first two terms are proportional to exp[ -1/2∑_I_j,d( -t_c,d) ^2 ] ×exp[ -1/2s^2( -r_j) ^2 ] ∝ exp[ -1/2(N_j,d^2-2∑_I_j,dt_c,d) -1/2s^2( ^2-2 r_j) ] ∝ exp( -1/2 s^2[ ( +N_j,ds^2) ^2-2( r_j +s^2∑_I_j,dt_c,d) ] ) ∝ (|r̂_j,d,ŝ_j,d^2), where ŝ_j,d^2 =( 1/s^2+N_j,d/)^-1, r̂_j,d =r_j/s^2+∑_I_j,dt_c,d//1/s^2+N_j,d/. The indicator function 𝕀(∈ A_j,d) truncates this normal distribution for , which results from the indicator function in Eq (<ref>). The region A_j,d is A_j,d =⋂_c=1^C_dA_c,j,d=⋂_c=1^C_d{: u_c,j,d < exp( -ξ_c,dq_j,dK(t_c,d|_j,d^*))} =⋂_c=1^C_d{: -logu_c,j,d/ξ_c,dq_j,d> exp( -1/2 ( t_c,d-) ^2 ) } =⋂_c=1^C_d{: log[ -logu_c,j,d/ξ_c,dq_j,d] > -1/2 ( t_c,d-) ^2}. Since the right-hand side is always negative, if -logu_c,j,d/ξ_c,dq_j,d≥ 1 ⇒ -logu_c,j,d≥ξ_c,dq_j,d, we have A_c,j,d=ℝ and hence there is no truncation. Otherwise, A_c,j,d=(-∞, t_c,d-√(-2log[ -logu_c,j,d/ξ_c,dq_j,d] )) ⋃( t_c,d+√(-2log[ -logu_c,j,d/ξ_c,dq_j,d] ) , +∞). Hence the region A_j,d is given by A_j,d=⋂_c: -logu_c,j,d < ξ_c,dq_j,d A_c,j,d. Combining all terms together, the full conditional distribution of is a truncated normal distribution | …∼(r̂_j,d,ŝ_j,d^2). with truncation region A_j,d. Note that if there is no cell in dataset d that belongs to component j, we will sample from the prior truncated to A_j,d. Furthermore, if it satisfies that { c: -logu_c,j,d < ξ_c,dq_j,d} = ∅, there is no truncation. Therefore, there are actually four possible cases, based on truncation or not and whether or not the component j is empty in dataset d. §.§.§ Variance parameters For each j and d, the full conditional distribution is π(|h_j, m^2,{ z_c,d} _c=1^C_d, {ξ_c,d} _c=1^C_d, { u_c,j,d} _c=1^C_d, { t_c,d} _c=1^C_d, q_j,d,) ∝ ∏_c: z_c,d=j K(t_c,d|_j,d^*) ×(|h_j,m^2) ×𝕀(∈ B_j,d). The first two terms are proportional to exp[ -1/2∑_I_j,d( -t_c,d) ^2 ] ×1/exp[ -1/2m^2( log()-h_j) ^2 ], which is not a standard form and hence we will apply adaptive Metropolis-Hastings. The region B_j,d is given by B_j,d =⋂_c=1^C_dB_c,j,d=⋂_c=1^C_d{: u_c,j,d < exp( -ξ_c,dq_j,dK(t_c,d|_j,d^*))} =⋂_c=1^C_d{: -logu_c,j,d/ξ_c,dq_j,d> exp( -1/2 ( t_c,d-) ^2 ) } =⋂_c=1^C_d{: log[ -logu_c,j,d/ξ_c,dq_j,d] > -1/2 ( t_c,d-) ^2}. Similar to , if -logu_c,j,d/ξ_c,dq_j,d≥ 1 ⇒ -logu_c,j,d≥ξ_c,dq_j,d, we have B_c,j,d=ℝ^+ and hence there is no truncation. Otherwise, B_c,j,d=(0, -( t_c,d-) ^2/2log[ -logu_c,j,d/ξ_c,dq_j,d] ). Hence the region B_j,d is B_j,d=⋂_c: -logu_c,j,d < ξ_c,dq_j,d B_c,j,d = ( 0, σ_j,d^+), where σ_j,d^+=min_c: -logu_c,j,d < ξ_c,dq_j,d -( t_c,d-) ^2/2log[ -logu_c,j,d/ξ_c,dq_j,d] . Below we will describe the AMH for . AMH for The adaptive Metropolis-Hastings algorithm we adopt is based on Algorithm 4 of Chapter 7 in <cit.>. The AMH is the same for each j and d, and hence for simplicity, we will drop the subscript j,d in this section. * Apply the following transformation to : X=g()=-log( 1/-1/σ^+) ∈ℝ. The Jacobian term is J_x=dX/d=σ^+/(σ^+-). The inverse transformation is =1/exp(-x)+1/σ^+∈( 0,σ^+). * Let denote the dimension of X (=1 for the case of ). 
Suppose the current iteration is n and the sampled from iteration n-1 is σ_old^*^2. Conditional on σ^+ at the current iteration, we define X_old=g(σ_old^*^2). We use random walk to sample X_new. For n ≤ 100, we sample X_new from (X_old, 0.01 ×_). For n> 100, denote s_=2.4^2/, then we propose X_new∼(X_old, s_× (Σ_n-1+ϵ_d)), and σ_new^*^2 is obtained using the inverse transformation. For the rest of this supplement, we will use Q_n to denote the proposal distribution at step n. For all the AMH we perform, we use ϵ=0.01, and Σ_n-1 is the sample covariance (or variance) based on the past n-1 sampled X, which needs to be computed at every iteration. To avoid large-matrix computation, <cit.> use a recursive formulae to update Σ_n after the n-th iteration. The details are provided later. * Since this is a Metropolis-Hastings step, we need to compute the acceptance probability to decide to accept σ_new^*^2 or not. Let π(σ^*^2) denote the posterior distribution. The acceptance probability is α(σ_new^*^2,σ_old^*^2) =min( 1, π(σ_new^*^2)Q_n(σ_old^*^2|σ_new^*^2)/π(σ_old^*^2)Q_n(σ_new^*^2|σ_old^*^2)) =min( 1,π(σ_new^*^2)| J_x_old|/π(σ_old^*^2)| J_x_new|), where π(σ_new^*^2)/π(σ_old^*^2) is given by Eq (<ref>) evaluated at the new and old σ^*^2, and the determinant of the Jacobian | J_x | is provided in step 1, conditional on σ^+ in the current iteration: | J_x_old|/| J_x_new|=σ_new^*^2( σ^+-σ_new^*^2) /σ_old^*^2( σ^+-σ_old^*^2). In practice, the log acceptance probability is used. Taking the logarithm of Eq (<ref>) and Eq (<ref>) yields lpost = -1/2∑_I_j,d( t^*-t_c,d) ^2 - log( ) -1/2m^2( log()-h_j) ^2, log( | J_x_old|/| J_x_new|) = log( σ_new^*^2) + log(σ^+-σ_new^*^2) -log( σ_old^*^2) - log(σ^+-σ_old^*^2) , where lpost denotes the posterior distribution on the log scale. * After making the decision to accept the proposed value or not, we will compute the sample covariance/variance Σ_n. For =1, the variance Σ_n is computed based on 2 statistics: M_2(n-1) and X_n-1 from the previous n-1 samples, and the new value x_n. The definition of the two statistics are M_2(n) = ∑_i=1^n (x_i - x_n)^2, x_n = 1/n∑_i=1^n x_i. The following relationship is observed between X_n and X_n-1, and between M_2(n) and M_2(n-1): x_n = (1-1/n) x_n-1 + x_n/n, and Σ_n = 1/n-1∑_i=1^n (x_i - x_n)^2 = 1/n-1M_2(n) = 1/n-1 [M_2(n-1) + (x_n - x_n-1)(x_n - x_n)]. The proof of Eq (<ref>) is as follows: (n-1)Σ_n - (n-2)Σ_n-1 = ∑_i=1^n (x_i - x_n)^2 - ∑_i=1^n-1 (x_i - x_n-1) = (x_n - x_n)^2 + ∑_i=1^n-1 ((x_i - x_n)^2 - (x_i - x_n-1)^2) = (x_n - x_n)^2 + ∑_i=1^n-1 (x_i - x_n + x_i - x_n-1)(x_n-1 - x_n) = (x_n - x_n)^2 + (x_n - x_n)(x_n-1 - x_n) = (x_n - x_n)(x_n - x_n - x_n-1 + x_n) = (x_n - x_n)(x_n - x_n-1). Hence we first compute x_n from x_n-1 and x_n. Then x_n, x_n-1,x_n and M_2(n-1) are used for the calculation of Σ_n. For >1, we will compute Σ_n based on 2 statistics: S(n-1) and m(n-1) given by the previous n-1 simulations, and the new sample x_n=(x_n,1,…,x_n,). S(n-1) is a symmetric matrix of dimension ×, defined by the following: S(n-1) = [ ∑_i=1^n-1 (x_i,1)^2 ∑_i=1^n-1 x_i,1x_i,2 ⋯ ∑_i=1^n-1 x_i,1x_i,; ⋮ ⋮ ⋱ ⋮; ∑_i=1^n-1 x_i, x_i,1 ∑_i=1^n-1 x_i,x_i,2 ⋯ ∑_i=1^n-1 x_i,x_i, ], where m(n-1) is a -dimension vector, with each component m_'(n-1) denoting the mean of x_' from the first (n-1) samples: m(n-1) = ( m_1(n-1), …, m_(n-1)). 
The element in the covariance matrix of X after n iterations (Σ_n) is given by Σ_n(u,v) = 1/n-1∑_i=1^n (x_i,u-m_u(n))(x_i,v-m_v(n)) = 1/n-1[ ∑_i=1^n x_i,ux_i,v - m_v(n)∑_i=1^n x_i,u - m_u(n)∑_i=1^n x_i,v + n × m_v(n)m_u(n) ] = 1/n-1[ ∑_i=1^n x_i,ux_i,v -n × m_v(n)m_u(n) ] = 1/n-1∑_i=1^n x_i,ux_i,v -n/n-1m_v(n)m_u(n). Hence the covariance matrix could be written in the following form: Σ_n = 1/n-1S(n) - n/n-1( m(n)^T m(n)) , and we note the following relationship: S(n) = S(n-1) + x_n^T x_n, m(n) = (1-1/n)m(n-1) + 1/nx_n. Therefore, we first compute S(n) and m(n) based on S(n-1), m(n-1) and the new value x_n. Then the covariance matrix can be calculated from S(n) and m(n). Finally, we remark that the full conditional density of also has four possible forms, depending on truncation or not, and whether component j in dataset d is empty or not. The above AMH will be applied when the component j is occupied. Besides, in this case, if there is no truncation (the upper bound σ_j,d^+=∞), the transformation defined in step 1 will reduce to a simple log-transformation, and the Jacobian is simply 1/. When component j is empty in some MCMC iterations, we will draw samples from the log-normal prior (may or may not be truncated). For such iterations, all the samples are accepted with probability 1, and are transformed to X to update the covariance/variance. §.§ Concentration parameters α and α_0 §.§.§ Concentration parameter α The full conditional distribution of α is π(α|,^J) ∝∏_j=1^J ∏_d=1^D (q_j,d|α p_j^J,1) ×(α|1,1) ∝∏_j=1^J ∏_d=1^D [ 1/Γ(α p_j^J ) ( q_j,d) ^α p_j^J]×exp(-α). The distribution is not of a standard form and we apply the AMH as described in section <ref>. Specifically, we use the log-transformation X=log(α) ∈ℝ. The Jacobian is J_x=dX/dα=1/α. and the inverse transformation is α=exp(x). We use the random walk to sample a new X as stated before. The logarithm of the full conditional density is lpost= -α + ∑_j=1^J∑_d=1^D [ α p_j^J log(q_j,d) - log( Γ( α p_j^J) ) ]. Hence the acceptance probability of the new sample is α(α_new,α_old) =min( 1, π(α_new)Q_n(α_old|α_new)/π(α_old)Q_n(α_new|α_old)) =min( 1, π(α_new)α_new/π(α_old)α_old) =min(1, exp[lpost_new-lpost_old+log(α_new)-log(α_old)] ). After the decision of rejection or acceptance, we update the sample variance following step 4 of section <ref> (=1). §.§.§ Concentration parameter α_0 The full conditional distribution of α is π(α_0|^J) ∝(α_0 | 1,1) ×(p^J | α_0/J, …, α_0/J) ∝exp(-α_0) ×Γ(α_0)/[Γ(α_0/J)]^J∏_j=1^J ( p_j^J) ^α_0/J. Same log-transformation is applied X=log(α_0) ∈ℝ, with Jacobian J_x=dX/dα_0=1/α_0, and the inverse transformation is α_0=exp(x). The logarithm of the full conditional distribution is lpost= -α_0 + log(Γ(α_0)) - Jlog( Γ( α_0/J) ) + α_0/J∑_j=1^J log(p_j^J). The acceptance probability is α(α_0,new,α_0,old) =min( 1, π(α_0,new)Q_n(α_0,old|α_0,new)/π(α_0,old)Q_n(α_0,new|α_0,old)) =min( 1, π(α_0,new)α_0,new/π(α_0,old)α_0,old) =min(1, exp[lpost_new-lpost_old+log(α_0,new)-log(α_0,old)] ). The variance is updated similarly to α. §.§ Allocation variables z_c,d We notice that z_c,d only plays a role though the augmented data likelihood for every c and d. Hence z_c,d is independent across c,d, yielding π(z_c,d =j | _1:J,1:G^*, _1:J,1:G^*, , , ,) ∝∏_g=1^G (y_c,g,d | μ_j,g^* β_c,d, ϕ_j,g^*) × q_j,dK(t_c,d|_j,d^*). Denote p̃(z_c,d =j | _1:J,1:G^*, _1:J,1:G^*, , , ,)= ∏_g=1^G (y_c,g,d | μ_j,g^* β_c,d, ϕ_j,g^*) × q_j,dK(t_c,d|_j,d^*). 
The full conditional distribution is then π(z_c,d =j | _1:J,1:G^*, _1:J,1:G^*, , , ,) =K̃p̃(z_c,d =j | _1:J,1:G^*, _1:J,1:G^*, , , ,)/K̃∑_l=1^Jp̃(z_c,d =l | _1:J,1:G^*, _1:J,1:G^*, , , ,). It is possible that the sum in the denominator can be very small. To avoid computational problem, we remove the most extreme probability log( K̃) = -max_jlog( p̃(z_c,d =j | _1:J,1:G^*, _1:J,1:G^*, , , ,)) . In all, we sample z_c,d from { 1,…,J} according to π(z_c,d =j | _1:J,1:G^*, _1:J,1:G^*, , , ,). This is repeated for every c and d. §.§ Component probabilities p_j^J The full conditional distribution is π(p_1^J,…,p_J^J|,α,α_0) ∝∏_j=1^J ∏_d=1^D (q_j,d|α p_j^J,1)×(p_1^J,…,p_J^J| α_0/J, …, α_0/J) ∝∏_j=1^J ∏_d=1^D[ 1/Γ(α p_j^J ) ( q_j,d) ^α p_j^J]×∏_j=1^J ( p_j^J) ^α_0/J-1, where p_j^J cannot be separated with each other and the distribution is also non-standard. Therefore, we apply AMH as in <cit.>. Because p_j^J (j=1,…,J) sum to one, the following transformation is made to obtain ∈ℝ^J-1: X_j = log(P_j/P_J), j = 1, …, J-1. The inverse transformation is given by p_j^J = exp(x_j)/1+∑_j=1^J-1exp(x_j), j = 1, …, J-1, p_J^J = 1-∑_j=1^J-1p_j^J=1/1+∑_j=1^J-1exp(x_j). As for the Jacobian matrix, it is given by: J_ = [ dx_1/dp_1 dx_2/dp_1 ⋯ dx_J-1/dp_1; ⋮ ⋮ ⋱ ⋮; dx_1/dp_J-1 dx_2/dp_J-1 ⋯ dx_J-1/dp_J-1 ] = [ 1/p_1 + 1/p_J ⋯ 1/p_J; ⋮ ⋱ ⋮; 1/p_J ⋯ 1/p_J-1 + 1/p_J ] = [ 1/p_J … 1/p_J; ⋮ ⋱ ⋮; 1/p_J … 1/p_J ] + [ 1/p_1 0 … 0; 0 1/p_2 ⋮ 0; ⋮ ⋮ ⋱ 0; 0 0 … 1/p_J-1 ] = B + A. Because (A + B) = (A) + (B) + Tr(A^-1 B)(A), (B) = 0 and (A) = ∏_j=1^J-11/p_j, it follows that (A+B) = ∏_j=1^J-11/p_j + (1-p_J) ∏_j=1^J1/p_j = ∏_j=1^J 1/p_j. Therefore, log |J_| = log[∏_j=1^J 1/p_j] = -∑_j=1^J log (p_j). The log full conditional distribution is lpost= ∑_j=1^J∑_d=1^D [α p_j^J log (q_j,d) - logΓ(α p_j^J) ]+ ∑_j=1^J [ ( α_0/J - 1) log (p_j^J) ]. Combining all terms together, the acceptance probability is α(^J_new, ^J_old) = min( 1,π(^J_new)Q_n(^J_old | ^J_new)/π(^J_old)Q_n(^J_new | ^J_old)) = min( 1, π(^J_new)|J__old)|/π(^J_old)|J__new)|) = min( 1, exp[lpost_new - lpost_old + ∑_j=1^J(log( p_j,new^J) - log( p_j,old^J) ) ]). We mention that the sampling of a new transformed variable _new is slightly different from step 2 in section <ref>. Instead of a fixed scale parameter s_=2.4^2/ (=J-1 for the case of ^J), s_ is also updated at each iteration. The idea is to adapt s_ to achieve an average acceptance probability of 0.234 (Algorithm 6 of Chapter 7 in <cit.>), which is designed for multivariate target distribution. Let the initial value s_^(1)=0.001. Suppose the current iteration is n, and _new is the new sample after the decision of rejection or not. Define λ^(n)=exp(log( s_^(n)) + n^-0.7×( α(^J_new, ^J_old)-0.234) ), then s_^(n+1)=λ^-, if λ^(n)<λ^-, λ^(n), if λ^(n)∈[ λ^-,λ^+], λ^+, if λ^(n)>λ^+, where λ^-=exp(-50) and λ^+=exp(50). The update of the covariance matrix follows from step 4 (multivariate case) in section <ref>. §.§ Mean-dispersion parameters and α_ϕ^2 The joint distribution of =(b_0,b_1)^T and α_ϕ^2 is π(b, α_ϕ^2 | _1:J,1:G^*, _1:J,1:G^*) ∝ (b | _b, α_ϕ^2 V_b) ×(α_ϕ^2 | v_1, v_2) ×∏_j=1^J ∏_g=1^G (ϕ_j,g^* | b_0 + b_1 log(μ_j,g^*), α_ϕ^2) ∝ ( α_ϕ^2 )^-( v_1+2+JG/2) ×exp( -1/α_ϕ^2[1/2∑_j=1^J ∑_g=1^G ( log(ϕ_j,g^*)-b_0-b_1log(μ_j,g^*)) ^2 + 1/2( b_0^2 + b_1^2) -^T_b+1/2_b^T_b + v_2] ). 
For , we have π( b | ^*, ^*, α_ϕ^2) ∝exp( -1/2α_ϕ^2[ ∑_j=1^J ∑_g=1^G (log(ϕ_j,g^*)-b_0-b_1log(μ_j,g^*))^2 + ( b_0^2 + b_1^2)-2^T_b ] ), which can be written in terms of matrix notation as ∑_j=1^J ∑_g=1^G (log(ϕ_j,g^*)-b_0-b_1log(μ_j,g^*))^2 = ∑_j=1^J (log(ϕ_j^*) - μ_j )^T (log(ϕ_j^*) - μ_j ), where log(ϕ_j^*) has dimension G × 1, μ_j has dimension G × 2 and b has dimension 2 × 1: log(ϕ_j^*) = [ log (ϕ_j,1^*); ⋮; log (ϕ_j,G^*) ], μ_j = [ 1 log (μ_j,1^*); ⋮ ⋮; 1 log (μ_j,G^*) ], b = [ b_0; b_1 ]. The above equation is equivalent to ∑_j=1^J [log(ϕ_j^*)^T log(ϕ_j^*) - 2b^T μ_j^T log(ϕ_j^*) + b^T μ_j^T μ_j b] = ∑_j=1^J [ log(ϕ_j^*)^T log(ϕ_j^*) ] - 2b^T ∑_j=1^J μ_j^T log(ϕ_j^*) + b^T( ∑_j=1^J μ_j^T μ_j ) b. Therefore, π( b | ^*, ^*, α_ϕ^2) ∝ exp(-1/2 α_ϕ^2[b^T (∑_j=1^J μ_j^T μ_j + ) b - 2b^T ( ∑_j=1^J μ_j^T log(ϕ_j^*) +_b) + ∑_j=1^J [ log(ϕ_j^*)^T log(ϕ_j^*) ] ] ), i.e., b | …∼ (_b, α_ϕ^2 V_b), where _b = ( ∑_j=1^J μ_j^T μ_j + )^-1( ∑_j=1^J μ_j^T log(ϕ_j^*)+_b ), V_b = ( ∑_j=1^J μ_j^T μ_j + )^-1. As for α_ϕ^2 | ^*, ^*, a closed form can be obtained by integrating out in the joint distribution as shown below. π(α_ϕ^2 | ^*, ^* ) = ∫π( b, α_ϕ^2 | ^*, ^* ) db ∝∫(1/α_ϕ^2)^v_1+1exp( -v_2/α_ϕ^2) ( 1/α_ϕ^2)^JG/2( 1/α_ϕ^2) ×exp(-1/2α_ϕ^2[ (-_b)^T V_b^-1 (-_b) - _b^T V_b^-1_b + ∑_j=1^J log(ϕ_j^*)^T log(ϕ_j^*)+ _b^T_b ]) db. Now since ∫1/α_ϕ^2exp( -1/2α_ϕ^2 (-_b)^T V_b^-1 (-_b) ) db = Constant, it follows that π(α_ϕ^2 | ^*, ^*) ∝(1/α_ϕ^2)^v_1+1exp( -v_2/α_ϕ^2) ( 1/α_ϕ^2)^JG/2 ×exp( -1/2α_ϕ^2[ - _b^T V_b^-1_b + ∑_j=1^J log(ϕ_j^*)^T log(ϕ_j^*) +_b^T_b] ) ∝( 1/α_ϕ^2) ^v_1+1+JG/2 ×exp( -1/α_ϕ^2[ v_2 + 1/2( ∑_j=1^J log(ϕ_j^*)^T log(ϕ_j^*) - _b^T V_b^-1_b +_b^T_b) ] ). Therefore, α_ϕ^2 | ^*, ^* ∼IG (v_1, v_2), where v_1 = v_1 + JG/2 , v_2 = v_2 + 1/2( ∑_j=1^J log(ϕ_j^*)^T log(ϕ_j^*) - _b^T V_b^-1_b +_b^T_b). §.§ Cluster-specific parameters μ_j,g^* and ϕ_j,g^* From the overall posterior distribution (Eq (<ref>)), we notice that, for each j and g, π(μ_j,g^*, ϕ_j,g^* |,,α_ϕ^2,,) ∝ ( μ^*_j,g | 0, α_μ^2) ×( ϕ^*_j,g | b_0 +b_1 log(μ_j,g^*) , α_ϕ^2) ×∏_(c,d): z_c,d=j(y_c,d,g | μ_j,g^* β_c,d, ϕ_j,g^* ) ∝ ( 1/μ_j,g^* ϕ_j,g^*) exp( -1/2 α_μ^2 (logμ_j,g^*)^2 -1/2 α_ϕ^2 (logϕ_j,g^* - (b_0 + b_1logμ_j,g^*))^2) × ∏_(c,d):z_c,d =jy_c,g,d + ϕ_j,g^* - 1ϕ_j,g^*-1( ϕ_j,g^*/μ_j,g^* β_c,d + ϕ_j,g^*)^ϕ_j,g^*( μ_j,g^*/μ_j,g^* β_c,d + ϕ_j,g^*)^y_c,g,d. This is not a standard distribution and hence we will apply AMH to sample for (μ_j,g^*,ϕ_j,g^*). For clarity, we will drop the subscript j and g here. The transformation is =(X_1,X_2)=( log(μ^*), log(ϕ^*)) ∈ℝ^2, with inverse transformation μ^*=exp(x_1), ϕ^*=exp(x_2). The Jacobian is J_= [ dx_1/dμ^* dx_1/dϕ^*; dx_2/dμ^* dx_2/dϕ^* ] = [ 1/μ^* 0; 0 1/ϕ^* ], so |J_|=1/μ^*ϕ^*. The logarithm of the full conditional distribution is lpost= -log (μ_j,g^*ϕ_j,g^*) -1/2 α_μ^2 (logμ_j,g^*)^2 -1/2 α_ϕ^2 (logϕ_j,g^* - (b_0 + b_1logμ_j,g^*))^2 + ∑_(c,d):z_c,d=jlogy_c,g,d + ϕ_j,g^* - 1ϕ_j,g^*-1 + ϕ_j,g^* log( ϕ_j,g^*/μ_j,g^* β_c,d + ϕ_j,g^*) + y_c,g,dlog( μ_j,g^*/μ_j,g^* β_c,d + ϕ_j,g^*). Combining all terms together, the acceptance probability is α( (μ^*,ϕ^*)_new, (μ^*,ϕ^*)_old) = min( 1, exp[ lpost_new - lpost_old - log(μ_old^* ϕ_old^*) + log(μ_new^* ϕ_new^*) ] ). Then the covariance matrix is updated following the multivariate case (=2) in step 4 of section <ref>. Note that due to label switching, the covariance matrix may have very large values. Therefore, to mitigate the multiplicative effect of the scale parameter s_ on the covariance, we fix s_=1 instead of s_=2.4^2/2. 
The above step is repeated for every j and g. §.§ Capture efficiencies β_c,d From the posterior distribution (Eq (<ref>)), we can separate each β_c,d and obtain its full conditional density as π( β_c,d |{ y_c,g,d} _g=1^G, z_c,d=j, _1:J,1:G^*, _1:J,1:G^*) ∝ Β(β_c,d | a_d^β, b_d^β) ×∏_g=1^G (y_c,g,d | μ_j,g^* β_c,d, ϕ_j,g^*) ∝ (β_c,d)^a_d^β -1 (1-β_c,d)^b_d^β - 1 ×[ ∏_g=1^G ( 1/ϕ_j,g^* + μ_j,g^* β_c,d)^ϕ_j,g^* + y_c,g,d (β_c,d)^y_c,g,d]. This does not have a closed form and we will apply AMH with the following variable transformation X=log( β_c,d/1-β_c,d) ∈ℝ, with Jacobian equal to J_x=dX/dβ_c,d=d/dβ_c,d (log(β_c,d)-log(1-β_c,d)) = 1/β_c,d(1-β_c,d). The inverse transformation is given by β_c,d=1/1+exp(-x). Next, the logarithm of conditional distribution is lpost= (a_d^β - 1)log (β_c,d) + (b_d^β - 1) log(1-β_c,d) - ∑_g=1^G [ (ϕ_j,g^* + y_c,g,d) log ( ϕ_j,g^* + μ_j,g^* β_c,d) - y_c,g,dlog(β_c,d)] . Therefore, the acceptance probability is given by α(β_new, β_old) = min(1, exp[lpost_new - lpost_old + log(β_new) + log(1-β_new) - log(β_old) - log(1-β_old) ] ). After the decision, we update the variance of the transformed variable X. This step is repeated for every c and d. §.§ Hyper-parameters r_j, s^2, h_j and m^2 §.§.§ Prior means r_j For each j, we have π(r_j|{} _d=1^D,μ_r,σ_r^2,s^2) ∝∏_d=1^D (|r_j, s^2)×(r_j|μ_r,σ_r^2) ∝exp[ -1/2s^2∑_d=1^D( r_j-) ^2 ] ×exp[ -1/2σ_r^2( r_j-μ_r) ^2 ]. Recall our calculation for in section <ref>. It can be noticed that the full conditional distribution for r_j is a normal distribution r_j|…∼(μ̂_r,σ̂_r^2), where σ̂_r^2 =( 1/σ_r^2+D/s^2)^-1, μ̂_r =μ_r/σ_r^2+∑_d=1^D/s^2/1/σ_r^2+D/s^2. §.§.§ Prior variance s^2 π(s^2|{} _j=1,d=1^J,D,η_1,η_2,)∝ ∏_j=1^J∏_d=1^D (|r_j, s^2)×(s^2|η_1,η_2) ∝ (s^2)^-JD/2exp[- 1/s^2×1/2∑_j=1^J∑_d=1^D( -r_j) ^2 ] × (s^2)^-η_1-1exp[ -η_2/s^2] , i.e., s^2|…∼( JD/2+η_1,η_2+1/2∑_j=1^J∑_d=1^D( -r_j) ^2) . §.§.§ Prior means h_j For each j, π(h_j|{} _d=1^D,μ_h,σ_h^2,m^2) ∝∏_d=1^D (|h_j, m^2)×(h_j|μ_h,σ_h^2) ∝exp[ -1/2m^2∑_d=1^D( h_j-log( ) ) ^2 ] ×exp[ -1/2σ_h^2( h_j-μ_h) ^2 ]. Similar to r_j, its distribution is a normal distribution h_j|…∼(μ̂_h,σ̂_h^2), where σ̂_h^2 =( 1/σ_h^2+D/m^2)^-1, μ̂_r =μ_h/σ_h^2+∑_d=1^Dlog()/m^2/1/σ_h^2+D/m^2. §.§.§ Prior variance m^2 π(m^2|{} _j=1,d=1^J,D,κ_1,κ_2,)∝ ∏_j=1^J∏_d=1^D (|h_j, m^2)×(m^2|κ_1,κ_2) ∝ (m^2)^-JD/2exp[- 1/m^2×1/2∑_j=1^J∑_d=1^D( log()-h_j) ^2 ] × (m^2)^-κ_1-1exp[ -κ_2/m^2], i.e., m^2|…∼( JD/2+κ_1,κ_2+1/2∑_j=1^J∑_d=1^D( log()-h_j) ^2) . §.§ Variation of information Let and denote the true clustering and an estimate of the clustering, each consisting of k and k' clusters. Define C_i (i=1,…,k) to be the set of observation indices for cluster i under , and Ĉ_j (j=1,…, k') is for cluster j under . The number of data points shared between C_i and Ĉ_j is n_ij=|C_i ∩Ĉ_j|, and the size of each cluster is n_i+=∑_j=1^k'n_ij under , and n_+j=∑_i=1^kn_ij under . The entropy H() of a clustering represents its uncertainty in assigning observations to clusters, and the mutual information I(,) between two clusterings measures the reduction in the uncertainty of the allocation of a data point in if we know clustering . They are defined as H() =-∑_i=1^kn_i+/Nlogn_i+/N, I(,) =∑_i=1^k∑_j=1^k'n_ij/Nlogn_ijN/n_i+n_+j, where N is the total number of data points. Variation of information is defined as VI(,)=H()+H()-2I(,). The optimal clustering ^* is ^*=min_𝔼[ VI(,)|𝒟], where 𝒟 is the data. 
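For completeness, a direct transcription of these quantities is sketched below (ours; NumPy assumed). The search for the optimal partition is restricted to the sampled partitions for simplicity; dedicated search strategies are used in practice.

```python
import numpy as np

def variation_of_information(c, c_hat):
    """VI(c, c_hat) = H(c) + H(c_hat) - 2 I(c, c_hat) for two label vectors of length N."""
    N = len(c)
    _, c = np.unique(c, return_inverse=True)
    _, c_hat = np.unique(c_hat, return_inverse=True)
    n = np.zeros((c.max() + 1, c_hat.max() + 1))
    np.add.at(n, (c, c_hat), 1.0)                          # contingency table n_ij
    pi, pj, pij = n.sum(1) / N, n.sum(0) / N, n / N
    H = lambda p: -np.sum(p[p > 0] * np.log(p[p > 0]))     # entropy
    mask = pij > 0
    I = np.sum(pij[mask] * np.log(pij[mask] / np.outer(pi, pj)[mask]))  # mutual information
    return H(pi) + H(pj) - 2.0 * I

def optimal_partition(z_draws):
    """Pick the sampled partition minimizing the Monte Carlo estimate of E[VI | data]."""
    L = z_draws.shape[0]
    scores = [np.mean([variation_of_information(z_draws[l], z_draws[m]) for m in range(L)])
              for l in range(L)]
    return z_draws[int(np.argmin(scores))]
```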
§.§ Adjusted rand index Following the notation in the last section, ARI between two clustering and is ARI = ∑_ijn_ij2 - [∑_i n_i+2∑_j n_+j2]/N2/1/2[∑_i n_i+2 + ∑_j n_+j2] - [∑_i n_i+2∑_j n_+j2]/N/2. Values closer to 1 indicate better agreement between and . §.§ Consensus clustering The technique is fairly applicable to existing Bayesian clustering methods that does not require any subtle redevelopment of the original method. It has two important parameters, ensemble depth W (number of chains) and ensemble depth D (number iterations in each chain). Usually D is a relatively small number. For a given W and D, a Bayesian clustering method is applied to run W chains of D iterations in parallel. This can reduce a large amount of time compared with running few chains of long iterations. Then the D-th sample in each chain is combined to produce a consensus matrix M, just like a posterior similarity matrix, where the element in position (i,j) is the proportion of W samples where two observations i and j are grouped together. Using the consensus matrix M, a point estimate can be obtained such as from VI. <cit.> propose a heuristic method to choose W and D. The rational is that, increasing W and D may improve the results to a large extent in the beginning, but the improvement will become smaller and smaller as their values increase, and finally converges. This is similar to PCA where more variance will always be captured for more principal components, but the gain in variance will be smaller and smaller, and eventually we will have few returns. Given a set of candidate parameters D'=(d_1,…,d_I) and W'=(w_1,…,w_J), for each w_j, we compute the consensus matrix based on the samples at the d_i-th iteration from w_j chains, and also compute M for the d_(i-1)-th iteration from w_j chains. The mean absolute difference (MAD) between the two matrices is a measurement of how stable the clustering partition is. Plotting these values as a function of D, we are likely to see an elbow-shaped curve, and we can choose a suitable D at which the curve plateaus. To choose W, we can fix D and compute MAD between w_(j-1) and w_j. § SIMULATION STUDY In this section, we conduct simulation studies to assess the covariate-dependent HDP model, and inference is performed as stated in Section <ref>. The main interest is the posterior inference of clusters, the time-dependent probabilities of belonging to each cluster, and cluster-specific parameters. §.§ Simulation setup Each of the two datasets has 120 cells (C_1=C_2=120) with G=10 genes, and consists of J=2 clusters. For each observation, z_c,d is first simulated from its categorical distribution, given p_j,d^J(t_c,d), and then latent counts and observed counts are simulated from the model. Specifically, t_c,d are simulated from (0,1), y_c,g,d|y_c,g,d^0, β_c,d ( y^0_c,g,d, β_c,d), y_c,g,d^0|z_c,d = j, μ_j,g^*, ϕ_j,g^* (μ_j,g^*, ϕ_j,g^*), z_c,d| p_1,d^J(t_c,d), …, p_J,d^J(t_c,d) (p_1,d^J(t_c,d),…, p_J,d^J(t_c,d)), where μ_j,g^*|j=1 (1,α_μ^2), μ_j,g^*|j=2 (3,α_μ^2), and ϕ_j,g^* is simulated from its prior. For each dataset, we set d=1: (t_1,1^*,t_2,1^*) =(0.4,0.9), (σ_1,1^*,σ_2,1^*)=(0.08,0.15), (q_1,1,q_2,1)=(0.5,0.5), d=2: (t_1,2^*,t_2,2^*) =(0.8,0.3), (σ_1,2^*,σ_2,2^*)=(0.1,0.1), (q_1,2,q_2,2)=(0.3,0.7). In addition, we assume β_c,d=0.6 for all cells, b_0=0.25, b_1=0.5, and α_μ=α_ϕ=0.1. We visualize each dataset in two ways, either by clusters, or by latent time. Figure <ref> shows the heatmaps of the simulated datasets on the log scale (after adding a value of 1). 
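For reference, this generative scheme can be sketched as follows; the kernel-based weight function p_{j,d}(t), defined earlier in the paper, is passed in here as a black-box argument p_fn, and the argument names are illustrative.

```python
import numpy as np

def simulate_dataset(C, G, mu_star, phi_star, beta, p_fn, rng):
    """Simulate one dataset following the scheme above.

    mu_star, phi_star: (J, G) cluster-specific means and dispersions.
    beta: capture efficiency shared by all cells (0.6 in the simulation).
    p_fn(t): returns the length-J vector of time-dependent allocation probabilities.
    """
    J = mu_star.shape[0]
    t = rng.uniform(0.0, 1.0, size=C)                 # latent times
    z = np.zeros(C, dtype=int)
    y = np.zeros((C, G), dtype=int)
    for c in range(C):
        p = p_fn(t[c])
        z[c] = rng.choice(J, p=p)                     # cluster allocation
        mean, disp = mu_star[z[c]], phi_star[z[c]]
        # Latent counts: NB with mean mu* and dispersion phi* (numpy uses (n, p) with p = phi/(mu+phi))
        y0 = rng.negative_binomial(disp, disp / (mean + disp))
        y[c] = rng.binomial(y0, beta)                 # observed counts after binomial dropout
    return t, z, y
```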
The clusters seem to be almost distinguishable from the latent time. Additionally, the time-dependent probabilities p_j,d^J against t for two datasets are given in Figure <ref>, indicating that the allocation is fairly decisive except for t around 0.6. Note that due to the non-identifiability of β_c,d, we give an informative prior with mean equal to that of the simulated values (0.6). Otherwise, using 0.06 suggested by <cit.> will lead to samples mainly around 0.06. §.§ Simulation results With J=4, we run the Gibbs sampling algorithm for 20000 iterations, and apply a burn-in of 10000 iterations, followed by a thinning of 5. Given posterior samples of allocations, the optimal clustering is computed by minimizing the posterior expectation of variation of information <cit.>. The optimal clustering from variation of information (VI) is compared to true clustering results by adjusted rand index (ARI). The details of VI and ARI are given in Section <ref> and <ref>. The results are summarized below. * Optimal clustering: the optimal clustering has an ARI of 0.9833, suggesting the result is very close to the truth. The posterior similarity matrix (PSM) is shown in Figure <ref>, where the uncertainty in allocations is quite low. Each entry in PSM represents the posterior probability that two cells are allocated to the same cluster. * Mean-dispersion relationship: Figure <ref> shows the result is reasonable with true relationship covered by the posterior samples. * Capture efficiency: Figure <ref> shows that, for most cells, the 95% credible intervals (CIs) contain the true values, and we notice that the widths of CIs may be large for some observations, probably as a result of the identifiability issue. As for the cluster-specific parameters and time-dependent probabilities, due to the label switching problem <cit.>, we cannot use the posterior samples for inference directly. Instead, we re-run our algorithm with the allocations z_c,d fixed at the optimal clustering from VI. The same length, burn-in and thinning are used in this post-processing step. * Cluster-specific mean and dispersion parameters: Figure <ref> compares the parameters for each gene. The sampled μ_j,g^* are very distinct between two clusters. Although for g=1,5,6, the true values for one cluster are not contained in the 95% CIs, probably because μ_j,g^* is not identifiable from the likelihood, they all lie within the 99% CIs. As for ϕ_j,g^* (Figure <ref>), the differences between clusters are less evident, with gene 5 having the largest overlaps. Unlike μ_j,g^*, 95% CIs contain the true values for all ϕ_j,g^*. * Time-dependent probabilities: Figure <ref> shows the relationship between p_j,d^J and t_c,d is well recovered, except that the uncertainty may be too small around the boundary of two clusters (t≈0.6). This is likely due to that the allocation variables are fixed and therefore the uncertainty can be smaller than expected. It is worth noting that, the samples for kernel parameters and exhibit clear trends in the traceplots, and that the chain may not even visit the true values at all (Figure <ref>). Despite of this, p_j,d^J(t_c,d) is well estimated, suggesting the kernel parameters may not be uniquely identifiable. In addition to time-dependent probabilities, we also compute the mean of the latent count for a cell c, conditional on time, mean expressions, and kernel parameters: 𝔼(y_c,g,d^0|t_c,d=t,, ^*, ^*)= ∑_j=1^Jp_j,d^J (t) μ_j,g^*. 
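Given posterior draws, this conditional mean over a time grid is simply a weighted sum of the sampled cluster means; a minimal sketch with illustrative argument shapes:

```python
import numpy as np

def latent_count_mean_curve(p_draws, mu_star_draws, gene):
    """Monte Carlo estimate of E(y0 | t) = sum_j p_{j,d}(t) mu*_{j,g} on a grid of times.

    p_draws:       array (T, n_grid, J) of posterior draws of p_{j,d}(t) evaluated on the grid.
    mu_star_draws: array (T, J, G) of posterior draws of the cluster-specific means.
    Returns the (T, n_grid) matrix of sampled curves and its posterior mean.
    """
    curves = np.einsum('tnj,tj->tn', p_draws, mu_star_draws[:, :, gene])
    return curves, curves.mean(axis=0)
```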
Figure <ref> shows the relationship between the mean of the latent count and time, under each gene in dataset 1. The true relationship is well recovered by the MCMC samples, and we can see a clear increase in the latent count as time increases. The result for dataset 2 is also adequate (Figure <ref>). § ADDITIONAL RESULTS FOR PAX6 §.§ Local marker genes With regards to local DE genes, these genes are specific to each cluster only and are identified based on the minimum posterior tail probability. For cluster j, compute P_g,j^* = min_j' ≠ j P_g (j,j'), and genes with P_g,j^* greater than a threshold (calibrated according to EFDR) are identified as local DE genes. Intuitively, local DE genes can distinguish cluster j from any other cluster j'. Local DD genes are detected in a similar way, based on the dispersion parameters to compute minimum tail probabilities L_g,j^*. In terms of local marker genes, the thresholds for local DE and DD genes are set to 1.2 and 1.6, respectively. Figure <ref> shows the minimum posterior tail probabilities against the mean absolute LFCs for each cluster, along with the number of local genes in each cluster. Cluster 3 has the largest number of local DE genes, whereas the numbers of local DD genes are more evenly spread across 10 clusters. Figure <ref> displays the estimated mean expressions and dispersions for local marker genes in each cluster. Similar to global DE genes, the local DE genes in cluster 3 exhibit higher mean expression levels. Moreover, for local DE genes in the other clusters, their mean expression levels are also higher in cluster 3. This trend is also observed for local DD genes in cluster 3. §.§ Posterior predictive checks We follow <cit.> and conduct posterior predictive checks based on a single replicate and then multiple replicates, given fixed optimal clustering. §.§.§ A single replicate dataset For posterior predictive checks, we employ mixed predictive distribution <cit.> based on the post-processing step. In particular, we simulate one replicate dataset using a single posterior sample of mean expression parameters μ_j,g^*, capture efficiencies β_c,d, and generate dispersion parameters ϕ_j,g^* from its hyper-priors given samples of , α_ϕ^2, μ_j,g^*. For the observed and replicate data, we compute the following statistics for each gene in each dataset: the mean of log-shifted counts across cells, i.e., log(y+1), the standard deviation of log-shifted counts, the logarithm of mean counts, and the dropout probabilities, i.e., proportion of cells with zero counts. The average in these statistics is taken across cells. We then compare the relationships for two pairs of statistics between the true and replicate datasets: 1. the mean and standard deviation of log-shifted counts; 2. the logarithm of mean counts and dropout probabilities. Furthermore, we investigate the point-wise differences between the true and replicate datasets in terms of the mean of log-shifted counts, standard deviation of log-shifted counts and dropout probabilities. Figure <ref> and Figure <ref> demonstrate that the simulated replicate data exhibits similar relationships between pairwise statistics as observed in the true data. Additionally, the point-wise differences in statistics are nearly negligible, implying that the replicate data is reasonable and consistent with the observed one. §.§.§ Multiple replicates For multiple replicates, we generate 200 datasets and compare the kernel density estimation of the statistics between the replicates and true data. 
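For either the observed or a replicate count matrix, these per-gene statistics can be computed as below (a small sketch; counts is a cells-by-genes array).

```python
import numpy as np

def ppc_statistics(counts):
    """Per-gene summary statistics used in the posterior predictive checks.

    Returns the mean and standard deviation of log-shifted counts, the logarithm of
    mean counts, and the dropout probabilities (proportion of cells with zero counts).
    """
    log_shifted = np.log(counts + 1.0)
    return {
        "mean_log": log_shifted.mean(axis=0),
        "sd_log": log_shifted.std(axis=0),
        "log_mean": np.log(counts.mean(axis=0)),   # -inf if a gene has all-zero counts
        "dropout": (counts == 0).mean(axis=0),
    }
```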
From Figure <ref>, it is observed that estimated kernel is similar between the simulated 200 datasets and the true observed data. Moreover, we compute the posterior predictive p-values (ppp) <cit.> using three different discrepancy measures D_l(·), l=1,2,3 for each gene g in each dataset d: ppp_l(_·,g,d) = Pr{ D_l(_·,g,d^rep, _post ) ≥ D_l(_·,g,d, _post) |} ≈1/T∑_t=1^T { D_l(_·,g,d^rep, (t), _post^(t) ) ≥ D_l(_·,g,d, _post^(t) ) }, where _post denotes the posterior samples for the mean parameters μ_j,g^*, capture efficiencies β_c,d, hyper-parameters and α_ϕ^2. The superscript (t) indicates a particular sample, and _·,g,d^rep, (t) corresponds to the vector of counts for all cells in one replicate dataset d under gene g, generated using _post^(t). The three discrepancy measures are defined below. D_1(_·,d,g, ) = ∑_c=1^C_d(y_c,g,d - [y_c,g,d | ])^2/ [y_c,g,d | ], D_2(_·,d,g, ) = ∑_c=1^C_d( √(y_c,g,d) - √( [y_c,g,d | ]))^2, D_3(_·,d,g, ) = 1/C_d∑_c=1^C_d |(y_c,g,d = 0) - Pr(y_c,g,d=0 | )|. The first discrepancy measure is based on the χ^2 statistic <cit.>, and the second one is based on the Freeman-Tukey statistic <cit.> that is less sensitive to small expected values. Both measures have been used for count data. The third measure is relevant to the dropout probability. The computed ppp values are expected to be uniformly distributed when the model is true, but they have been shown to be conservative with a dome shape and concentrated around 0.5 <cit.>. It is important to note that the data is used twice in the computation of ppp values: first to update the prior and obtain posterior distributions for the parameters, and then to assess the adequacy of the model based on these parameters <cit.>. Nevertheless, p-values close to 0 or 1 still indicate a lack of fit to the data <cit.> and <cit.> emphasizes that deviations from a uniform distribution should not be a concern if the goal of model testing is to uncover discrepancies between the data and the fitted model. From Figure <ref>, it is evident that all three discrepancy measures exhibit extremely small and large p-values, suggesting an inadequate fit for some of the genes, even though the distributions seems symmetric. The p-values obtained from the first and third discrepancy measure D_1, based on the χ^2 statistic and dropout probabilities, appear to be most similar to the uniform distribution, when restricted to the middle range. On the other hand, the p-values for D_2 indicate a strong disagreement between the data and model. Additionally, we also investigate the three discrepancy measures conditioned on each cluster <cit.>. For instance, the first discrepancy measure for a specific cluster j (j=1,…,10) is given by D_1^(j)(_·,d,g, ) = ∑_c: z̃_c,d=j(y_c,g,d - [y_c,g,d | ])^2/ [y_c,g,d | ], where z̃_c,d is the optimal clustering result from VI. Figure <ref>, Figure <ref> and Figure <ref> display the histograms of ppp values for each discrepancy measure, conditioned on the optimal clustering and dataset. It is noteworthy that the extremely small ppp values disappear for all three measures, while values close to 1 still exist. We also observe that the ppp values for cluster 3 are most similar to a uniform distribution, regardless of the discrepancy measure and dataset, suggesting a reasonable fit for this particular cluster. It should be emphasized that cluster 3 has been reported to have the highest number of local DE genes, and a higher mean expression level for global DE genes. 
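A sketch of this Monte Carlo approximation for one gene in one dataset is given below; means_reps and p0_reps denote E[y_{c,g,d} | θ^(t)] and Pr(y_{c,g,d}=0 | θ^(t)) evaluated at each posterior draw (illustrative names, not from our code).

```python
import numpy as np

def ppp_values(y_obs, y_reps, means_reps, p0_reps):
    """Posterior predictive p-values for the three discrepancy measures above.

    y_obs: observed counts, shape (C,).   y_reps: replicate counts, shape (T, C).
    means_reps: E[y_c | theta^(t)], shape (T, C).   p0_reps: Pr(y_c = 0 | theta^(t)), shape (T, C).
    """
    d1 = lambda y: np.sum((y - means_reps) ** 2 / means_reps, axis=1)          # chi-square
    d2 = lambda y: np.sum((np.sqrt(y) - np.sqrt(means_reps)) ** 2, axis=1)     # Freeman-Tukey
    d3 = lambda y: np.mean(np.abs((y == 0).astype(float) - p0_reps), axis=1)   # dropout
    return {
        "chi_square":    np.mean(d1(y_reps) >= d1(y_obs)),
        "freeman_tukey": np.mean(d2(y_reps) >= d2(y_obs)),
        "dropout":       np.mean(d3(y_reps) >= d3(y_obs)),
    }
```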
Apart from that, compared to the ppp values without conditioning on the optimal clustering, the p-values for some clusters, e.g., cluster 6 in HOM, tend to concentrate around 0.5 (ignoring values close to 1), aligning with the arguments presented in <cit.> and <cit.> (see discussion above).
http://arxiv.org/abs/2407.03275v1
20240703170059
Policy-guided Monte Carlo on general state spaces: Application to glass-forming mixtures
[ "Leonardo Galliano", "Riccardo Rende", "Daniele Coslovich" ]
cond-mat.soft
[ "cond-mat.soft", "cond-mat.stat-mech" ]
http://arxiv.org/abs/2407.02670v1
20240702211736
Adversarial Magnification to Deceive Deepfake Detection through Super Resolution
[ "Davide Alessandro Coccomini", "Roberto Caldelli", "Giuseppe Amato", "Fabrizio Falchi", "Claudio Gennaro" ]
cs.CV
[ "cs.CV", "cs.AI" ]
Adversarial Magnification to Deceive Deepfake Detection through SR D. Coccomini et al. ISTI-CNR, Pisa, Italy University of Pisa, Pisa, Italy CNIT, Florence, Italy Mercatorum University, Rome, Italy davidealessandro.coccomini@isti.cnr.it, roberto.caldelli@unifi.it, giuseppe.amato@isti.cnr.it, fabrizio.falchi@isti.cnr.it, claudio.gennaro@isti.cnr.it Adversarial Magnification to Deceive Deepfake Detection through Super Resolution Davide Alessandro Coccomini1,20000-0002-0755-6154 Roberto Caldelli3,40000-0003-3471-1196 Giuseppe Amato10000-0003-0171-4315 Fabrizio Falchi10000-0001-6258-5313 Claudio Gennaro10000-0002-3715-149X July 8, 2024 ======================================================================================================================================================================================================= § ABSTRACT Deepfake technology is rapidly advancing, posing significant challenges to the detection of manipulated media content. Parallel to that, some adversarial attack techniques have been developed to fool the deepfake detectors and make deepfakes even more difficult to be detected. This paper explores the application of super resolution techniques as a possible adversarial attack in deepfake detection. Through our experiments, we demonstrate that minimal changes made by these methods in the visual appearance of images can have a profound impact on the performance of deepfake detection systems. We propose a novel attack using super resolution as a quick, black-box and effective method to camouflage fake images and/or generate false alarms on pristine images. Our results indicate that the usage of super resolution can significantly impair the accuracy of deepfake detectors, thereby highlighting the vulnerability of such systems to adversarial attacks. The code to reproduce our experiments is available at: <https://github.com/davide-coccomini/Adversarial-Magnification-to-Deceive-Deepfake-Detection-through-Super-Resolution> § INTRODUCTION Manipulating content to spread misinformation and damage the reputation of people has never been easier than nowadays. We are witnessing the unstoppable evolution of those known as Deepfakes. These are counterfeit media contents which often show people saying or doing things they never actually said or did, distorting reality. Distinguishing pristine contents from manipulated ones is extremely difficult. For this reason, various deepfake detectors have been developed. These are, however, subject to various issues such as the need to be up-to-date to keep up with the latest deepfake generation methods or the ability to handle real-world situations. It is precisely in real-world contexts that deepfake detection systems could be faced with targeted attacks made to deceive them. Known as adversarial attacks, these are techniques that introduce noise or adversarial patches, specifically crafted to deceive the detector. Although they can also be very effective, these techniques may require deep knowledge of the deepfake detector they are trying to fool. In this paper, we attempt to exploit a Super Resolution (SR) technique, to camouflage deepfake images in a quick and black-box manner (in the sense that the attack is model-agnostic). Our approach allows us to cause a significant increase in the False Negative Rate (fake samples classified as pristine) of up to 18%. 
We also have shown how the usage of SR on pristine images can cause a drastic increase in false alarms of up to 14%, highlighting the inadequacy of some deepfake detectors, which will probably arise as these techniques continue to proliferate. § RELATED WORKS §.§ Deepfake Generation and Detection The generation of deepfakes involves the use of techniques that manipulate human faces to achieve realistic alterations in appearance or identity. Two primary approaches are commonly employed: Variational AutoEncoders (VAEs) and Generative Adversarial Networks (GANs). VAE-based methods utilize encoder-decoder pairs to decompose and recompose distinct faces. On the other hand, GAN-based methods use a discriminator to distinguish real and fake images, paired with a generator that creates fake faces to fool the discriminator. Notable Deepfake generation methods include Face2Face<cit.> and FaceSwap<cit.>. As deepfakes becomes more credible, there is a growing demand for systems capable of detecting them. To address this problem various deepfake detectors have been developed. Some methods are capable of analyzing deepfake videos by considering also the temporal information<cit.> but most approaches focus on frame-based classification, evaluating each video frame individually<cit.> and being available to manage also simply still deepfake images. Also, competitions such as <cit.> and <cit.> have been organized to stimulate the resolution of this task. The problem of deepfakes has also been extended to the detection of synthetic images in general such in <cit.> increasing the variety of fake contents. §.§ Adversarial Attacks Adversarial attacks, such as noise addition and adversarial patches, exploit vulnerabilities in deepfake detectors to deceive them. Adversarial noise introduces subtle perturbations, while adversarial patches overlap patterns to trigger misclassification. The authors of <cit.> propose a framework called FakeRetouch, which aims to reduce artifacts in deepfake images without sacrificing image quality. By adding noise and using deep image filtering, they achieve high fidelity to the original deepfake images reducing the accuracy of deepfake detectors. In <cit.> the authors propose a statistical consistency attack (StatAttack) against deepfake detectors by minimizing the statistical differences between natural and deepfake images through the addition of statistical-sensitive degradations. §.§ Super Resolution Super Resolution (SR) is a technique which aims to reconstruct a high-resolution version of a low-resolution image by utilizing information from multiple input images<cit.> or by using prior knowledge about the relationship between high-resolution and low-resolution image pairs<cit.>. One of the main SR techniques is the one proposed in <cit.> where an Enhanced Deep Super Resolution network (EDSR) is presented; it introduces some improvements to the ResNet architecture for SR previously proposed in <cit.>. They remove batch normalization layers to increase flexibility and reduce memory usage. They also propose the use of residual scaling layers to stabilize the training procedure. The model constructed with these modifications, and pre-trained with a lower upscaling factor was able to achieve good results in terms of convergence speed and final performance. § THE PROPOSED ATTACK The proposed attack consists of exploiting SR techniques to modify a deepfake image and camouflage it in the eyes of a deepfake detector. 
The scope of the attack is then to mislead the deepfake detector and make the false negative rate increase. The SR process, in an attempt to improve the resolution of an image, could smooth the artifacts introduced by some deepfake generation techniques, thus undermining the learning performed by the deepfake detection model. Figure <ref> shows the proposed framework for implementing the SR attack. Specifically, for each of the frames of a video (or for each image if the attack is applied to a still image) to be analyzed, a pretrained face detector (e.g., MTCNN<cit.>) is applied. This step has been added to the pipeline for two main reasons. The first motivation is related to the scope of the attack itself since an attacker wants to manipulate a minimal part of the image in order to avoid adding artifacts when not needed. Applying the SR on the whole frame may add artifacts on the background finishing to have the inverse effect. The second reason behind the usage of a face detector is the common practice of both deepfake detectors and generators to focus only on the face and so it is very likely that the deepfake detector against which the attack is applied will only focus on the face and that the artifacts to be removed are concentrated on the face. The face extracted from the network has a specific resolution which is dependent on factors such as video resolution, distance from the camera, etc. Since the goal of SR is to raise the resolution of the image by a factor K ∈ℕ, the image is firstly down-scaled by a factor 1/K and then given as input to an SR model (e.g. EDSR<cit.>) to be SR up-scaled by a factor K. The face image resulting from this process has the same size as the original detected one and so can be again put inside the source image from which it has been detected. So, to apply this method there is no need to know anything about the deepfake detector that will be used for the final detection, then the proposed method can be effectively considered a black-box attack and can be applied against any deepfake detector and on images manipulated with any deepfake generation method. Furthermore, this attack can also be carried out on deepfake content already generated and does not need to be integrated into the deepfake creation procedure. § EXPERIMENTS §.§ Dataset Since we want to evaluate our attack on a variety of deepfake generation methods, we chose the well-known FaceForensics++ (FF++)<cit.> dataset for our experiments. The dataset consists of both pristine and manipulated videos created using various deepfake generation methods, namely Deepfakes<cit.>, Face2Face<cit.>, FaceShifter<cit.>, FaceSwap<cit.>, and NeuralTextures<cit.>. However, as this dataset consists of videos and the proposed attack exploits single-image SR, ten frames were randomly extracted for each of them on which face detection was then carried out. A training set and a test set were created for each deepfake generation method in FF++. Each training set consists of 14400 images, half of which were manipulated with one of the available methods. Each test set consists of 2800 images, half of which are manipulated again with the proposed attack. A total of five training and test sets are therefore available and all of them are perfectly balanced between the two classes (pristine and fake). To choose which videos should be used for training or test set we used the split made available in <cit.>. 
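The attack pipeline described above (face detection, 1/K down-scaling, ×K SR up-scaling, paste-back) can be sketched as follows; detect_face and sr_model are hypothetical stand-ins for the pretrained MTCNN face detector and the EDSR network, and the bounding-box coordinates are assumed to be integers.

```python
from PIL import Image

def sr_attack(frame, detect_face, sr_model, K=2):
    """Sketch of the SR attack on a single frame (PIL image).

    detect_face(frame) -> (x1, y1, x2, y2) and sr_model(image, scale) -> image are
    placeholders for a pretrained face detector and super-resolution network.
    """
    x1, y1, x2, y2 = detect_face(frame)
    face = frame.crop((x1, y1, x2, y2))
    w, h = face.size
    # Down-scale the face by 1/K, then SR up-scale by K to restore the original resolution.
    low_res = face.resize((max(1, w // K), max(1, h // K)), Image.BICUBIC)
    restored = sr_model(low_res, scale=K).resize((w, h), Image.BICUBIC)
    # Paste the super-resolved face back at the detected coordinates.
    attacked = frame.copy()
    attacked.paste(restored, (x1, y1))
    return attacked
```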
§.§ Experimental Setup To investigate the impact of the application of SR on the performance of deepfake detectors, we selected three architectures, namely Resnet50, Swin-Small and XceptionNet, and trained them on faces extracted from FF++ to perform a binary classification by pristine/fake image. For each training, the model only sees pristine images and fake ones manipulated with one of the available FF++ methods (SR is not applied). All models are pretrained on ImageNet and were fine-tuned with a learning rate of 0.01 for 30 epochs on an Nvidia Tesla T4. The test is carried out considering two different setups, in the first the models are tested by applying the SR-attack on both fake and pristine images. In the second, the pristine images are un-attacked and only the fake ones are passed through the SR process. The face is always extracted from each frame using a pretrained MTCNN<cit.>. The scale factor used in our experiments is K=2 and so the extracted face is resized of a factor 1/K and then up-scaled through EDSR<cit.> restoring the original resolution. After this process, the face can be re-pasted to the frame exploiting the coordinates extracted during face detection. § RESULTS §.§ Impact of Super Resolution on Deepfake Detection Table <ref> shows the results obtained from the deep learning models considered to perform the deepfake detection task on the FF++ test set, with and without the usage of SR on both fake and pristine images. Observing the accuracy results on all the methods considered, the application of SR leads to a relevant drop in performance in all cases, confirming the hypothesis that the SR process can generate confusion in the deepfake detectors, thereby leading them to make errors. More in detail, looking at the False Negative Rate (FNR) and False Positive Rate (FPR) all the models seem to have a peak when the SR attack is applied. When the deepfake generation method used on the image is Deepfakes or NeuralTextures, the impact on the FNR is less evident but the same detector that results in more robust on the fake images, fails on the pristine images attacked with SR and we see a huge increase in the FPR. The situation is exactly the opposite for the methods Face2Face, FaceSwap and FaceShifter on which the models seem to be more sensible on the fake images attacked with SR and so have an important increase on FNR while a slight swing in FPR is registered. Increasing the FNR is the main interest for an attacker as it can be useful to be able to camouflage fake images against an automatic system that may be trying to filter them out. Vice versa, the increase in the FPR in some cases, highlights a serious problem in deepfake detection systems that, if SR became more widespread (e.g. on social media to improve the final visual quality), would end up confusing legitimate images for deepfakes and also open the door for an attacker to deliberately raise false alarms in the system. That the use of SR pushes all Deepfake Detection models into error is also shown in Figure <ref> where it can be seen that in all cases, the AUCs obtained by the models on SR images are drastically lower (dashed lines) than their counterpart tested on images on which the SR process has not been applied (solid lines). To evaluate deepfake detectors in a realistic context, an alternative test set was considered in which pristine images are not subjected to the SR process. 
In fact, an attacker has much more interest in generating false negatives than false positives, so as to go undetected by automated systems. As can be seen from the experiments reported in Table <ref> in this setup the accuracy decreases, though more slightly, in almost all the cases with some deepfake generation methods on which the detectors are more robust to the attack. More in detail, the Face2Face, FaceSwap and FaceShifter images enhanced with the SR attack, are very difficult to detect, probably because the artifacts which the detector has learnt to recognize during the training process, are hidden by the SR process and this is translated in an higher FNR and a lower Recall value. In all the cases, the FPR is not affected by the usage of the SR attack since the pristine images are not attacked in this setup. §.§ Visual Impact Analysis When performing an SR attack on a fake image, it is important that it remains as indistinguishable to human eyes as possible so as to preserve its meaning but also to make it less suspicious to users. To assess the impact of our attack on the image appearance, we compared the similarity of each image pair (non-SR, SR) through two commonly used quality metrics, Structural Similarity Index (SSIM) and Peak Signal-to-Noise Ratio (PSNR). The SSIM is calculated as SSIM(x, y) = (2μ_xμ_y + C_1)(2σ_xy + C_2)/(μ_x^2 + μ_y^2 + C_1)(σ_x^2 + σ_y^2 + C_2), where x and y are the two compared images, μ_x and μ_y are the average values of x and y respectively, σ_x and σ_y are the standard deviations, σ_xy is the covariance between x and y and C_1 and C_2 are two constants used for stability. To calculate the PSNR we used the formula PSNR(x, y) = 10 ·log_10(MAX^2/MSE(x, y)), where x and y are the two compared images, MAX is the maximum possible pixel value of the images and MSE(x, y) is the Mean Squared Error between the images. The values obtained from each image pair were used to calculate the mean to see the similarity between images attacked with and without SR for each category. As can be seen from Table <ref> the similarity between the SR images and the non-SR ones is very high, with SSIM values around 0.97 and PSNR around 40dB meaning a strong similarity and minimal changes brought by the SR process. We also checked if exists a correlation between the SSIM value and the variation in the error of the classifiers. In other words, we explored if a lower SSIM value is related to a higher number of misclassifications during the detection. From our experiments, in all the methods the correlation is lower than ±0.1 meaning that the variation in detectors' performances is more related to the type of changes done to the image and not to the quantity of these. §.§ Qualitative Evaluation To better understand the effect of the SR Attack on images, we visually analyzed some examples of deepfakes (e.g. Face2Face and FaceSwap) correctly detected by a Resnet50-based detector before the application of the attack but misclassified after it. These methods tend to introduce rather specific artifacts that, as visible in Figure <ref>, are then smoothed by the SR. This makes the work of the deepfake detector more difficult, as it has learnt to recognize such anomalies. As can be seen from the figure also the visual difference is minimal, as already stated by the analysis conducted in Section <ref>, but it is enough to make some artifacts around the mouth (FaceSwap) or on the nose (Face2Face) to disappear. 
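For reference, the two similarity metrics can be computed as below; this is the global-statistics form of the SSIM formula above (a single window covering the whole image), and the constants C1, C2 follow the usual convention rather than a value stated in the paper.

```python
import numpy as np

def ssim_global(x, y, max_val=255.0):
    """Global SSIM following the formula above, computed over the whole image."""
    x, y = x.astype(np.float64), y.astype(np.float64)
    C1, C2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2   # common stability constants
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + C1) * (2 * cov_xy + C2)) / (
        (mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2))

def psnr(x, y, max_val=255.0):
    """PSNR in dB between two images with the same shape."""
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)
```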
§ CONCLUSIONS In this work, we examined the impact of applying SR on deepfake images in the context of deepfake detection. According to our experiments, the use of these techniques has a huge impact on the performance of deepfake detectors, causing the FNR to be drastically raised depending on the deepfake generation technique used and the artifacts introduced by it into the image. Also, a tendency was observed for deepfake detectors trained on specific deepfake generation methods to mistake pristine SR images for fake images when the SR attack is applied, causing the FPR to rise dramatically. In conclusion, the SR attack can become an effective black-box attack in deepfake detection. In future work, we will explore the impact of detected face resolution on the attack performance, explore more SR techniques, and also see if using SR as a data augmentation during the training process could be effective to make detectors robust to this attack. § ACKNOWLEDGMENTS This work was partially supported by the project SERICS (PE00000014) under the NRRP MUR program funded by the EU - NGEU and by the H2020 project AI4Media (GA n. 951911).
http://arxiv.org/abs/2407.02437v1
20240702171512
Parameter Matching Attack: Enhancing Practical Applicability of Availability Attacks
[ "Yu Zhe", "Jun Sakuma" ]
cs.LG
[ "cs.LG", "cs.CR", "cs.CV" ]
§ ABSTRACT The widespread use of personal data for training machine learning models raises significant privacy concerns, as individuals have limited control over how their public data is subsequently utilized. Availability attacks have emerged as a means for data owners to safeguard their data by designing imperceptible perturbations that degrade model performance when incorporated into training datasets. However, existing availability attacks exhibit limitations in practical applicability, particularly when only a portion of the data can be perturbed. To address this challenge, we propose a novel availability attack approach termed Parameter Matching Attack (PMA). PMA is the first availability attack that works when only a portion of data can be perturbed. PMA optimizes perturbations so that when the model is trained on a mixture of clean and perturbed data, the resulting model will approach a model designed to perform poorly. Experimental results across four datasets demonstrate that PMA outperforms existing methods, achieving significant model performance degradation when a part of the training data is perturbed. Our code is available in the supplementary.
§ INTRODUCTION Large amounts of data are needed to build high-quality machine learning models, and the use of personal information collected from individuals is often essential for model training. Individuals often make their own personal data available to the public, and such data may be used unintentionally to train machine learning models. Once data is released, individuals who provide personal information have no way to control its subsequent use, which raises significant privacy concerns <cit.>. Consider a situation of posting photos on social media as an example. On one hand, users might want to publish personal images, such as facial photos, on social networking sites but do not want to let them be used to train machine learning models. On the other hand, data exploiters might want to collect publicly available data from social media and use them to train their machine learning models. Thus, there is a conflict of interest between data owners and data exploiters. The availability attack has been developed as a means for data owners to protect their own data by themselves. The availability attack aims to poison a dataset so that the model trained with the dataset would only achieve low performance, without making major changes to the appearance of each data point <cit.>. More specifically, the availability attack synthesizes imperceptible perturbations and adds them to the data. These perturbations are crafted to significantly reduce the performance of any model trained on data with these perturbations. Data owners can modify their data with such perturbations before releasing them publicly. Since these perturbations are invisible, they do not affect the normal use of these data by humans, whereas these data do not help model training, thus preventing undesired use of the data by unknown data exploiters. Various availability attacks have been proposed in the field of image classification <cit.>. These attacks are designed to significantly reduce the performance of the resulting models when the perturbations are added to all training data.
TensorClog (TC) <cit.> aims to optimize poisoning perturbation by minimizing the gradient norm obtained from a pre-trained surrogate model. TC works under the assumption that when the gradient norm approaches zero, gradient vanishing occurs, thereby causing the model to learn negligible information from the data, resulting in poor performance on test data. Error-Minimizing (EM) <cit.> shares a similar idea, aiming to optimize the perturbation so that the classification error obtained from the surrogate model is minimized. EM expects the model to stop learning as the loss approaches zero. In <cit.>, the authors propose to generate linearly separable perturbations (LSP) as poisoning perturbations. This attack is designed under the hypothesis that these linearly separable perturbations can act as shortcuts; if the model learns such shortcuts, it will produce low test accuracy on clean test data. Overall, these methods are designed to introduce perturbations that make model training fail indirectly, such as injecting shortcuts or forcing gradients to vanish, so that the resulting model's performance is degraded. One limitation of these attacks is that these attacks are effective only when all of the training data can be perturbed. Empirical results from prior works <cit.> and this study (see Table <ref>) indicate that these attacks are ineffective when only a part of the training data is perturbed. Specifically, when training datasets contain as little as 20% clean data, the resulting model's performance closely resembles those trained on clean datasets, and these availability attacks lose their effect. This limitation restricts the practical utility of availability attacks. For instance, consider a social media platform where users share photos. A platform that wants to protect user privacy can adopt an availability attack method to all uploaded photos before they are made public. If the data exploiter obtains training data from such platforms, the availability attack takes effect, and the resulting model does not have a good performance. However, if the data exploiter collects data from multiple platforms and some of the platforms do not adopt the availability attack, the application rate of availability attack in the training data does not reach 100%. In this situation, the availability attacks will fail, and thus, the data exploiter will have a model with good performance, resulting in undesired use of personal data. Our challenge highlights the limitations in the practical application of existing availability attacks. In response to the limitations of existing availability attacks, this paper introduces a novel availability attack approach, termed Parameter Matching Attack (PMA). The primary goal of PMA is to enhance the practical applicability of availability attacks. Specifically, PMA aims to force the resulting model to perform poorly even when the data exploiter collects poison data generated by PMA and clean data from different sources. To achieve this goal, our approach hinges on two key steps: * Since previous work assumed that the data exploiter trains the model on perturbed data only, they did not consider the effect that clean data would have on perturbed data. In contrast, our algorithm for optimizing perturbations incorporates the effects caused by clean data. Thus, our perturbation can still be effective when the data exploiter trains the model with both clean and perturbed data. 
* In previous studies, perturbations injected for attacks have pursued indirect goals to degrade model performance, such as injecting shortcuts or forcing gradient vanishing, with the expectation that model performance would be degraded as a result. However, our empirical observation reveals that such a strategy does not work well when the exploiters utilize both clean and perturbed data. In contrast, we propose a more direct approach to achieving an availability attack. We first develop a model, referred to as the destination model, that intentionally performs poorly on clean test data. Subsequently, we optimize perturbations to align the resulting model closely with the destination model. This approach allows us to steer the resulting model directly toward the target of the availability attack, i.e., the poorly performing model. The data owner cannot specify from what data source the data exploiter collects data. Therefore, the perturbation used in the availability attack must be effective even if the data owner has no prior knowledge of what clean data the data exploiter will use for training. With this in mind, the proposed attack assume that (1) the data owner cannot have any knowledge about the clean data the data exploiter employs for model training, whereas (2) the data owner can have access to the underlying distribution of the data owner's clean data and obtain random draws of clean data from the distribution. Our empirical results demonstrate that the model performance in this strategy is equivalent to the case where the data owner has full knowledge of the clean data used by the data exploiter. To sum up, in this work, we present the first availability attack that remains effective even when only a portion of the data is perturbed. Our key contributions can be summarized as follows: * We introduce a novel availability attack approach, termed Parameter Matching Attack (PMA), which frames the attack as an optimization problem aimed at minimizing the distance between the parameters of the destination model and those obtained through training on the perturbed dataset. By setting the destination model appropriately, we demonstrate that the availability attack is effective even if the model is trained on a mixture of data that has been perturbed by the availability attack and clean data unknown to the data owner. To the best of our knowledge, PMA is the first availability attack that works when only a portion of data can be perturbed. * We propose an algorithm to solve the formulated optimization problem. The formulated optimization problem is bi-level: in the outer level, the perturbation is optimized so that the distance between the resulting model and the destination model is minimized; in the inner level, the model is trained by minimizing cross-entropy loss on a mixture of clean and perturbed data. To solve this problem, we alternatively optimize the outer minimization and inner minimization. * We demonstrate that our availability attack takes effect even if the clean data used by the data exploiter is not disclosed to the data owner. Through extensive experiments conducted on four datasets compared with six methods, we empirically show that our method is the only method that can degrade the resulting model performance by more than 30% when the poison ratio is not 100%. In addition, when the poison ratio is 100%, our method can achieve accuracy drops comparable to other methods. 
§ BACKGROUND AND RELATED WORK §.§ Poisoning Attack Poisoning attacks involve injecting malicious data into training datasets to manipulate the behavior of resulting machine learning models. These attacks are typically categorized into three types: availability attack, integrity attack, and backdoor attack <cit.>. In the integrity attack, the attacker aims to cause misclassification of specific test samples by modifying the training set, while maintaining high accuracy on other test samples. Since the target is limited to specified test samples, the goal of the integrity attack is often achieved by perturbing a small portion of the data, typically less than 10% <cit.>. Conversely, the backdoor attack involves perturbing the training data so that when a specific trigger, such as an image patch, is present in test samples, they are classified into a predefined category. At the same time, the model is required to maintain high performance when the trigger is absent in test samples. Since the trigger can be specifically determined at training time, the backdoor attack often achieves its objective with a low poison ratio, e.g., less than 10%, where only a small portion of the training data is modified <cit.>. In this article, we focus on the availability attack, where the attacker seeks to degrade the overall test accuracy of the resulting model by perturbing the training data. Unlike the integrity attack, the availability attacks aim to cause misclassification of unspecified test samples. Also, unlike the backdoor attack, the availability attack does not add a predetermined trigger to the test samples. The goal of the availability attack, which is to degrade its performance for any test sample that follows the training data generation distribution, is more challenging than the backdoor and the integrity attack. Therefore, the existing attacks required to perturb the training dataset with 100% poisoning ratio to achieve this goal <cit.>. For a detailed comparison of these three attack types, refer to Table <ref>. §.§ Availability Attack In recent years, a variety of availability attack methods have emerged. When the assumption that perturbations can be added to all training samples is satisfied, these methods are reported to perform well <cit.>. In addition to the methods already described in Sec. <ref>, there are some studies proposes availability attacks through the use of surrogate models. Deep Confuse (DC) <cit.> proposes learning a generator capable of providing perturbations that maximize classification error over training trajectories of surrogate models. Targeted Adversarial Poisoning (TAP) <cit.> exploits a pre-trained classification model to generate targeted adversarial examples. They empirically find the model achieve low performance on clean test sample when the model is trained with these adversarial examples. Self-Ensemble Protection (SEP) <cit.> demonstrates that using multiple checkpoints during the training of a surrogate model, rather than a single surrogate model, yields better results for attacks requiring a surrogate model. In contrast, some attack methods do not require the use of a surrogate model. In <cit.>, the authors discovered that poisoning perturbations act as shortcuts by analyzing perturbations of existing availability attacks. Building on this insight, they design linearly-separable perturbations (LSP) as poisoning perturbations. Similarly, Autoregressive (AR) <cit.> employs an autoregressive process to generate poisoning perturbations acting as shortcuts. 
Again, we stress all existing availability attack assume that all of training samples can be modified with poisoning perturbation. As demonstrated below in Table <ref>, if we weaken the assumption of these works, which means only part of the training samples can be perturbed, these works cannot realize the goal of the availability attack. Our aim is to resolve this limitation and realize availability attacks when poisoning perturbations are added only to a part of the training data. To the best of our knowledge, the availability attack in this setting has never been considered. § METHOD §.§ Threat Model As in previous works on availability attacks <cit.>, our threat model defines two parties: the poisoner and the data exploiter. Poisoner's capability: The poisoner can add imperceptible perturbations to the data exploiter's training data. It's worth noting that in previous availability attacks <cit.>, the poisoner is able to add perturbations to all training data. We call this attack setting full availability. In this work, we weakened this ability. The poisoner can only add perturbations to part of the training data. We call this attack partial availability. Subsequently, the data exploiter trains a model from scratch on a mixture of perturbed and clean data by empirical risk minimization. The success of the poisoner is measured by the resulting model's accuracy on clean test images, with lower accuracy indicating greater success. Poisoner’s knowledge: The poisoner lacks knowledge of the data exploiter's model architecture, model initialization, and other training details. However, the poisoner is allowed to sample data from the underlying distribution. Using this, the poisoner can train a model locally, which can be used as a surrogate model of the data exploiter’s model. In the partial availability attack, the data exploiter’s training data consists of two parts: (1) poison data perturbed by the poisoner, and (2) clean data collected by the data exploiter on its own. In this work, we introduce two models for the poisoner's knowledge about the data exploiter's clean data. The first model assumes that the poisoner has full knowledge of the clean data used by the data exploiter. We call this setting the full knowledge model. In the full knowledge model setting, the poisoner can optimize perturbations with respect to all the clean data possessed by the data exploiter. This setting makes attack easier. However, it would be unrealistic to assume that the poisoner can access the data exploiter's clean data. In the second model, we suppose the poisoner cannot have any knowledge about the data exploiter’s clean data. Instead, the poisoner can sample data from the underlying distribution and use them as a substitute clean data. We call this setting the sampling oracle model. The attack in the full knowledge model is considered in Sec. <ref>. Then, we extend the proposed attack to the sampling oracle model in Sec. <ref>. §.§ Problem Setup Let 𝒳 be the sample space, and 𝒴 be the label space. Let data (x,y) follow an underlying distribution 𝒫 over 𝒳×𝒴. We consider the classification problem to predict y ∈𝒴 given x∈𝒳 where (x,y) ∼𝒫. The data exploiter's objective is to find a classifier F: 𝒳→𝒴 that minimize the loss function: θmin𝔼_(x,y) ∼𝒫ℒ(F(x; θ), y). Given a training dataset 𝒟_train={(x_i, y_i)}_i=1^N+M sampled from 𝒫, the data exploiter can obtain a classifier by minimizing the following objective function empirically: θ^*(𝒟_train) = θmin∑_(x,y) ∈𝒟_trainℒ(F(x; θ), y). 
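On the data exploiter's side, this objective amounts to standard empirical risk minimization over the collected (and possibly partially poisoned) training set; a PyTorch sketch is given below, using the training settings reported later in the experiments (100 epochs, Adam with learning rate 0.01); the batch size is an arbitrary illustrative choice.

```python
import torch
from torch import nn
from torch.utils.data import ConcatDataset, DataLoader

def train_exploiter_model(model, clean_set, poison_set, epochs=100, lr=0.01, batch_size=128):
    """Empirical risk minimization on the mixture of clean and (possibly poisoned) data,
    i.e., the data exploiter's objective above."""
    loader = DataLoader(ConcatDataset([clean_set, poison_set]),
                        batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            criterion(model(x), y).backward()
            optimizer.step()
    return model
```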
In the partial availability attack, the poisoner is able to add perturbations δ={δ_i}_i=1^N to a part of data in 𝒟_train. Here, 𝒟_train consists of divided two parts 𝒟_train=𝒟_cl∪𝒟_poi where 𝒟_poi is poisoned by the poisoner whereas 𝒟_cl is remained as clean data. We denote the size of 𝒟_cl and 𝒟_poi by N and M, respectively. 𝒟_r denotes the dataset before poisoned by the poisoner, that is, 𝒟_poi= { (x + δ,y) | (x,y) ∈𝒟_r } where δ is a perturbation generated by the poisoner for the corresponding data each x. In the full knowledge model setting, the poisoner is able to access both 𝒟_cl and 𝒟_r. Thus, the poisoner can generate perturbation that minimizes the model's generalization performance by using both 𝒟_cl and 𝒟_r as follows. Here, the poisoner has no knowledge about the data exploiter’s classification model F. As the poisoner is allowed to draw samples from 𝒫, it trains a substitute model locally and uses it for attack. For easiness, we represent both the data exploiter’s classification model and its substitute by F. The poisoner optimizes δ with the following objective in this setting: max _δ𝔼_(x, y) ∼𝒫ℒ(F(x ; θ^*(𝒟_cl∪𝒟_poi )), y) 0.9s.t.θ^*(𝒟_cl∪𝒟_poi ) = θmin[∑_(x,y) ∈𝒟_clℒ(F(x; θ), y)+∑_(x,y) ∈𝒟_rℒ(F(x+δ; θ), y) ] , In the sampling oracle model setting, the poisoner cannot access 𝒟_cl and therefore cannot use it to optimize δ. Instead, the poisoner samples new clean data 𝒟̃_cl from distribution 𝒫 and uses them as a substitute of 𝒟_cl. Hence, the poisoner optimizes δ with the following objective in this setting: max _δ𝔼_(x, y) ∼𝒫ℒ(F(x ; θ^*(𝒟̃_cl∪𝒟_poi )), y) 0.9s.t.θ^*(𝒟̃_cl∪𝒟_poi ) = θmin[ 𝔼_(x,y) ∼𝒫ℒ(F(x; θ), y)+∑_(x,y) ∈𝒟_rℒ(F(x+δ; θ), y) ] , Consequently, the data exploiter trains its model F with dataset 𝒟_cl∪𝒟_poi using eq. <ref> after the poisoner generate 𝒟_poi using eq. <ref> or eq. <ref>. §.§ Our Strategy First, in order to simplify the discussion, the following discussion will focus on the sampling oracle model setting. We remark that the following discussion can immediately holds in the full knowledge model setting with a slight modification. <cit.> proposed attack methods for the full availability attack (i.e., 𝒟_cl=∅ ). A simple modification of the attack methods for the full availability attack would also provide a natural solution for the partial availability attack (i.e., 𝒟_cl≠∅ ). However, our empirical investigation revealed that a simple extension of the attack methods for the full availability attack could not deal with the partial availability attack. In the following, we introduce a high-level hypothesis to explain why the full availability attack methods do not work satisfactorily for the partial availability attack. Further details are presented in the supplementary. Suppose 𝒟_cl≠∅. On one hand, when optimizing eq. <ref>, the training procedure involves minimization of ∑_(x,y) ∼𝒫ℒ(F(x; θ), y). On the other hand, when optimizing eq. <ref>, it involves maximization of generalization loss with clean data, 𝔼_(x, y) ∼𝒫ℒ(F(x ; θ^*(𝒟̃_cl )), y). Since data are randomly taken from 𝒫 in both, the minimization of eq. <ref> and the maximization of eq. <ref> partially conflict. Such a conflict could make solving the problem inherently difficult. To overcome this limitation, our approach consists of two steps: (1) we introduce a novel and generalized formulation for the poison attack, which is also a bi-level optimization problem. We convert the maximization in eq. 
<ref> into a minimization of the distance between the model parameters of a destination model and those of the model trained on a mixture of clean and poison data. (2) By designing the destination model in this formulation, we resolve this conflict. Our formulation consists of two levels of optimization problems. More specifically, we modify the original maximization problem in eq. <ref> into a minimization of the distance between the model trained on a mixture of clean and poisoned samples and a destination model. The design of the destination model for availability attack is discussed in the latter half of this section. With this minimization, we expect the resulting model to perform closely to the destination model. Formally, we convert eq. <ref> and eq. <ref> into the following: min _δ d(θ^*(𝒟̃_cl∪𝒟_poi), θ^*_des) 0.9s.t.θ^*(𝒟̃_cl∪𝒟_poi ) = θmin[∑_(x,y) ∼𝒫ℒ(F(x; θ), y)+∑_(x,y) ∈𝒟_rℒ(F(x+δ; θ), y) ] where d is a distance measurement. θ^*_des is the model parameter of the destination model. Design of θ^*_des: By setting θ^*_des as the model parameters of a model with low performance, the goal of the availability attack is expected to be attained. There exist various strategies to construct a θ^*_des with low performance, such as training a model with samples with incorrect labels or a model with random weights. We consider a design of the destination model that can resolve the conflict when minimizing ∑_(x,y) ∼𝒫ℒ(F(x; θ), y). 0.9θ^*_des= θ^*(𝒟̃_cl∪𝒟_dirty ) = θmin[∑_(x,y) ∼𝒫ℒ(F(x; θ), y)+∑_(x,y) ∈𝒟_rℒ(F(x; θ), g(y) ) ] where g(y) is a label permutation function, for a K classes classification problem, g(y)=Mod(y+1,K). By using g(y), we construct a dirty label dataset 𝒟_dirty={ (x,g(y)) | (x,y) ∈𝒟_r }. Intuitively, a part of the training data for this destination model is intentionally mislabeled. We empirically confirm that such a destination model achieves low test accuracy when the proportion of dirty label data is greater than 40%. Hence the low-performance requirement is satisfied. Next, we explain the reason why the conflict in eq. <ref> and eq. <ref> is resolved with this destination model. We think since this design avoids minimizing and maximizing the generalization loss in the two optimization objectives, conflict can be avoided. §.§ Sub-goal: full knowledge model For the poisoner, it is difficult to have 𝒟_cl={(x_i, y_i)}_i=N+1^N+M. As a sub-goal, we first achieve our goal in the full knowledge setting by assuming that the poisoner can access 𝒟_cl={(x_i, y_i)}_i=N+1^N+M. Then, in the full knowledge setting, the eq. <ref>, eq. <ref> can be transformed into: min _δ d(θ^*(𝒟_cl∪𝒟_poi), θ^*_des) 0.8s.t.θ^*(𝒟_cl∪𝒟_poi ) = θmin[∑_(x,y) ∈𝒟_clℒ(F(x; θ), y)+∑_(x,y) ∈𝒟_rℒ(F(x+δ; θ), y) ] Also, in the full knowledge setting, θ^*_des is constructed by: 0.8θ^*_des=θ^*(𝒟_cl∪𝒟_dirty ) = θmin[∑_(x,y) ∈𝒟_clℒ(F(x; θ), y)+∑_(x,y) ∈𝒟_rℒ(F(x; θ), g(y) ) ] Directly solving this optimization problem is difficult since each step of the perturbation update requires the computation of θ^*(𝒟_cl∪𝒟_poi). Getting this parameter requires a whole training process, which is prohibitively expensive. We, therefore, consider an approximate solution. During the model training, we alternatively optimize perturbation to reduce the distance between the two intermediate model parameters trained on 𝒟_cl∪𝒟_dirty and on 𝒟_cl∪𝒟_poi in each step, as well as training the two model by 𝒟_cl∪𝒟_poi or 𝒟_cl∪𝒟_dirty. Also, as mentioned in the Sec. 
<ref>, the perturbations need to be imperceptible, hence we bound the size of the perturbations by the l_p norm. We add the additional constraint δ_∞≤ϵ to the above formulation. This constraint is satisfied by projected gradient minimization (lines 8-11 in algorithm <ref>). Also, since the poisoner cannot know F, in the actual algorithm we use a surrogate model F' in place of F. The detailed algorithm is shown in Algorithm <ref>. We follow <cit.> and use the normalized squared distance to measure the distance between the model parameters: d (θ_t, θ_t+1, θ'_t+1)=θ_t+1-θ'_t+1^2/θ_t+1-θ_t^2 As training proceeds, the model parameters change less and less, so normalizing by θ_t+1-θ_t^2 encourages this loss to remain effective even after a certain phase of training has been carried out. §.§ Final Goal: sampling oracle model Finally, we consider the more realistic scenario where the poisoner has no knowledge about the clean data being used by the data exploiter. In this situation, 𝒟_cl={(x_i, y_i)}_i=N+1^N+M is unknown during poisoning. Hence, the poisoner cannot obtain θ^*_t= θ^*(𝒟_cl∪𝒟_dirty). Instead, the poisoner collects alternative data 𝒟̃_cl={(x_i, y_i)}_i=N+1^N+M' from the original data distribution in place of 𝒟_cl. The objective corresponding to this setting has been described by eq. <ref>, eq. <ref>, and eq. <ref>. We can then still use Algorithm <ref> to solve this problem by replacing 𝒟_cl with 𝒟̃_cl. § EXPERIMENT §.§ Setup Datasets and models. We utilized four datasets to evaluate our proposal: SVHN <cit.>, CIFAR-10 <cit.>, CIFAR-100 <cit.>, and a 100-class subset of ImageNet [We use 20% of the first 100-class subset as the training set, following <cit.>]<cit.>. Unless mentioned otherwise, we used a three-layer ConvNet as the surrogate model for generating perturbations on the poisoner's side. On the data exploiter's side, we used ConvNet as the target model on SVHN, and ResNet-18 <cit.> as the target model on CIFAR-10, CIFAR-100, and ImageNet. Simulating the full knowledge and sampling oracle settings. As discussed in Sec. <ref>, we consider two different settings: the full knowledge setting and the sampling oracle setting. To simulate the full knowledge model setting, we split the dataset into two parts: a subset to be perturbed, corresponding to 𝒟_r in Sec. <ref>, and a subset of remaining clean data, corresponding to 𝒟_cl in Sec. <ref>. The poison ratio is varied as 40%, 60%, 80% and 100%. We then used our proposal in Sec. <ref> to generate a poison dataset and trained the model on a mixture of the poison dataset and the remaining clean data. To simulate the sampling oracle setting, we split the dataset into three parts: a subset to be perturbed, 𝒟_r; a subset treated as the clean dataset used by the data exploiter, 𝒟_cl; and a subset treated as the clean dataset collected by the poisoner, corresponding to 𝒟̃_cl in Sec. <ref>. Since in this situation the poisoner has no way of knowing the amount of clean data collected by the data exploiter, we set 𝒟_r and 𝒟̃_cl to have the same number of samples. We then used our proposal in Sec. <ref> to generate a perturbed dataset from the data to be perturbed and the poisoner's clean data, and trained the model on a mixture of the perturbed dataset and the data exploiter's clean data. Similar to the full knowledge setting, the poison ratio is varied as 40%, 60%, 80% and 100%. Depending on the poison ratio, we adjusted the number of samples in each subset accordingly.
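Before turning to the remaining experimental details, a minimal numpy sketch may help make the ingredients of the preceding subsections concrete: the label permutation g(y)=Mod(y+1,K) used to build 𝒟_dirty, the normalized squared parameter distance d, and the ℓ_∞ projection enforcing δ_∞≤ϵ. The function names are illustrative and not taken from the authors' implementation; this is a sketch of the stated definitions only.

```python
import numpy as np

def permute_labels(y, num_classes):
    """Label permutation g(y) = (y + 1) mod K used to build the dirty-label set."""
    return (y + 1) % num_classes

def build_dirty_dataset(x_r, y_r, num_classes):
    """D_dirty = {(x, g(y))}: same inputs as D_r, intentionally mislabeled."""
    return x_r, permute_labels(y_r, num_classes)

def normalized_param_distance(theta_t, theta_t1, theta_t1_dirty):
    """d = ||theta_{t+1} - theta'_{t+1}||^2 / ||theta_{t+1} - theta_t||^2."""
    num = np.sum((theta_t1 - theta_t1_dirty) ** 2)
    den = np.sum((theta_t1 - theta_t) ** 2) + 1e-12  # guard against a vanishing update
    return num / den

def project_linf(delta, eps=25.0 / 255.0):
    """Project perturbations back onto the l_inf ball of radius eps after each gradient step."""
    return np.clip(delta, -eps, eps)
```

Here the parameters are treated as flat numpy arrays; in practice each would be the concatenated weights of the surrogate network at a given training step.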
The details of how the data are split are described in the supplementary material. We compare our results with <cit.>. For the competing methods, we used these methods to poison a portion of the training data, and trained the model on a mixture of the poisoned and remaining clean data. We evaluate our proposal and the competing methods by measuring the classification accuracy of the trained models on clean test data. All results are averaged over ten runs. Training and perturbation settings. We trained all models for 100 epochs. We used Adam with a learning rate of 0.01. We experimentally confirmed that both our method and all comparison methods did not attain sufficient performance in the partial availability attack when δ_∞=8/255, which is the perturbation size employed in the full availability attack in previous works. This is because the partial availability attack is more challenging than the full availability attack. For this reason, we employed a larger perturbation size, δ_∞=25/255, for our method and all comparison methods. §.§ Evaluation on four benchmark datasets We first compare our proposal with competing methods on four benchmark datasets. Table <ref> shows the classification accuracy when the model is trained on a mixture of perturbed and clean data in the full knowledge model and the sampling oracle model. Lower classification accuracy means better attack performance. The 10th row shows the classification accuracy of the designed destination model; its performance can be taken as an approximate lower bound on the attack performance of our proposal, because our perturbations are designed so that the resulting model has performance close to this model. We find that when the poison ratio is less than 40%, our destination model cannot achieve very low performance. Hence, our proposal requires at least a 40% poison ratio. We show how the destination model performs with a lower poison ratio in the supplementary to confirm that the poison ratio must be larger than 40%. The 11th row (Ours-Full) in Table <ref> shows the attack performance of our proposal in the full knowledge model setting, which assumes the clean data used by the data exploiter is known to the poisoner. Compared to other methods, our approach results in at least a 20%, 10% and 5% greater decrease in classification accuracy when the poison ratio is 80%, 60% and 40%, respectively. This demonstrates the superiority of our proposal when the model is trained on a mixture of clean data and poison data. The 12th row (Ours-Oracle) in Table <ref> shows our proposal in the sampling oracle setting, which assumes the clean data used by the data exploiter is unknown to the poisoner. Instead, the poisoner samples data from the same data-generating distribution and uses them as a substitute. The performance achieved by our proposed method in this setting is close to that achieved in the full knowledge model setting. This indicates that our proposal works well in the sampling oracle model, which is a more realistic setting. When the poison ratio is 100% (the last column for each dataset in Table <ref>), the model is trained only on the poison data. In this setting, our proposal achieves results competitive with previous methods. §.§ Evaluation across different model architectures Since it is difficult for the poisoner to know the target model, i.e., the model used by the data exploiter, in practice the poisoner will often encounter situations where the structure of the surrogate model does not match that of the target model.
We evaluate how our proposal performs in this situation on CIFAR-10 in Table <ref>. In this experiment, we used three different surrogate models, VGG16 <cit.>, ResNet-18 <cit.>, and a three-layer ConvNet, to generate perturbations. We train the classification model with four different architectures: VGG16, ResNet-18, ResNet-50 <cit.>, and DenseNet-121 <cit.>. We then train the models on a mixture of clean and poison data and evaluate their classification accuracy. In this experiment, we followed the sampling oracle setting. Overall, Table <ref> shows that with all three surrogate models there is a performance degradation of the different target models. This demonstrates that our proposal works well in the realistic situation where the structure of the surrogate model does not match that of the target model. We find that when the surrogate model is ConvNet (the 4th row in Table <ref>), the proposed method achieves the highest performance degradation. Especially when the target model is DenseNet-121, the use of ConvNet exhibits a clear advantage over the other two surrogate models. We speculate that this is because, with fewer model parameters, it is easier to reduce the parameter distance between models through optimization of the perturbations. For this reason, we also recommend the use of a relatively simple model structure as the surrogate model in our proposed method. §.§ Evaluation with different perturbation sizes To check how large a perturbation size our method needs, we evaluate our proposal under different ℓ_∞-norm bounds. We generate perturbations with ℓ_∞-norm bounds of 8/255, 16/255, and 25/255, respectively. In this experiment, we followed the sampling oracle setting and evaluated our proposal under the different ℓ_∞-norm bounds. When perturbations are bounded by an ℓ_∞-norm of 8/255 (3rd row in Table <ref>) or 16/255 (4th row in Table <ref>), our proposal does not work or achieves only limited performance degradation. Also, <cit.> shows that existing works cannot realize the partial availability attack when the ℓ_∞-norm is 8/255 or 16/255. The 5th row in Table <ref> shows that our proposal works well when the perturbation size is 25/255. Hence, we use an ℓ_∞-norm bound of 25/255 for the perturbations in this work. § LIMITATION AND POTENTIAL NEGATIVE IMPACT Limitations. Our proposal has two limitations: (1) Compared with existing works <cit.>, our proposal requires a relatively large perturbation size. More specifically, previous methods work effectively with δ_∞= 8/255, while our method requires δ_∞=25/255. Hence, when comparing with previous works, we set the perturbation size of the previous works to 25/255 as well, so that the superiority of our method over other methods is not due to the size of the perturbation. (2) Our proposal cannot work when the poison ratio is smaller than 40%. This is due to the construction of the destination model. When the proportion of mislabeled data in the total data is less than 40%, the destination model has good performance, so even if the optimized perturbation makes the resulting model close to this model, the goal of the availability attack cannot be achieved. Negative impact. While the proposal of this paper is initially designed to safeguard data privacy, its potential applicability extends to interfering with machine learning-based services by degrading their performance. However, it is important to note that the availability attack is relatively overt, as it targets unspecified test samples rather than fixed data points.
Consequently, its potential negative impact is somewhat limited. § CONCLUSION In this paper, we propose a partial availability attack approach, termed Parameter Matching Attack (PMA). By designing a destination model with low test accuracy, the proposed algorithm aims to generate perturbations such that, when a model is trained on a mixture of clean data and poison data, the resulting model approaches the destination model. In the evaluation, the proposed method shows superior performance on four benchmarks at poison ratios of 80%, 60%, and 40%. Although PMA was originally designed for an availability attack that degrades model performance when training on a mixture of poison data and unknown clean data, its scope of application is broader: by designing the properties of the destination model, it is expected that PMA can be extended to new types of poisoning attacks that increase the risk of model unfairness or force privacy leakage, provided the designed destination model has such a property. Such extensions remain a topic for future research. Our proposal also has two limitations: it requires a relatively larger perturbation size than previous work, and it requires at least a 40% poison ratio. We aim to address these two limitations in future research.
http://arxiv.org/abs/2407.02991v1
20240703104237
Shaping Galaxies from the Beginning: Shaking the Cusp by Non-power-law Primordial Spectra
[ "M. V. Tkachev", "S. V. Pilipenko", "E. V. Mikheeva", "V. N. Lukash" ]
astro-ph.CO
[ "astro-ph.CO" ]
mtkachev@asc.rssi.ru spilipenko@asc.rssi.ru helen@asc.rssi.ru lukash@asc.rssi.ru Astro Space Center of P.N. Lebedev Physical Institute, Moscow, Russia § ABSTRACT We consider three cosmological models with non-power-law spectra of primordial density perturbations and test them against ΛCDM in density profiles. We found that, despite the significant difference in initial conditions, the mean density profiles of all models are still close to the Navarro-Frenk-White one, albeit with some dispersion. We demonstrate that the density profile slopes in the innermost part of halo have a significant evolution with z, which can be used to identify the cosmological model. We also present a toy model resulting in the appearance of core in the central part of gravitationally bound dark matter halo. Shaping Galaxies from the Beginning: Shaking the Cusp by Non-power-law Primordial Spectra V.N. Lukash July 8, 2024 ============================================================================================ § INTRODUCTION Recently non-trivial spectra of density perturbations have attracted considerable interest <cit.>. Such spectra are a significant extension of power-law spectrum of density perturbations, predicted in one-field inflationary models. Unlike to smooth power-law, they are capable to provide an additional enhancement at small scales, which can be related with Population III stars <cit.> or excess of high-z galaxies <cit.>. If an enhancement of a power spectrum is high, primordial black holes can be born (see <cit.>). In this paper we consider two kinds of primordial spectrum of density perturbations. The first one (model gauss_k15) has the Gaussian bump and was studied in <cit.> to clarify its impact on the halo mass function. The simplest way to produce a power-law spectrum with bump is to consider a single-field inflation with a kink in the potential (see <cit.> and more detailed consideration in the recent review <cit.>). The specific shape of the spectrum can vary and depends on the inflationary model. Therefore, instead of analyzing inflationary potentials, one can use a phenomenological approach, assuming a single general feature added to the power-law spectrum. As in our previous work <cit.> we choose as such a feature a Gaussian. Another method to enhance the power spectrum of density perturbations is to assume a double, or multiple as a more common case, field inflation. However, in the latter case it is still assumed to observe a rather short interval of inflationary stage compared to the total duration, so it may be unobservable, and hence such multistage inflation will be perceived as double one. In this kind of inflation the primordial spectrum of density perturbations has a minimum at some scale (see, for example, fig. 1 in <cit.>). Such spectrum has a “red” wing on large scales and “blue” one at small scales, and precise forms of wings depend on the total inflaton potential. Following to notification proposed in <cit.> we refer these models as “blue tilted” ones. Earlier, we have been studying the effect of power spectrum with a bump on the halo mass function, hereafter we concentrate on its influence on the density profile of dark matter halos. As the gauss_k15 model is in good agreement with observational data on high-z galaxies found by JWST, we use it as the first choice and add two blue tilted models with different tilts and scales of a minimum. 
Therefore, in paper we test cosmological models with different deviations at sub-Mpc scale in density profile and study their influence on the structure of the innermost part of dark matter halos. Profiles of dark matter halos for the “non-standard” spectra have been studied several times: <cit.> analyzed scale-free spectrum with varying power law slope, a number of studies consider warm dark matter with the power spectrum cutoff. To our knowledge, there were no studies of the inner structure of halos for the two variants we consider here: the bumpy spectrum and the blue tilted one. Previously we have shown that the complex interaction of the small and large scale perturbations can result in the significant change of halo profiles <cit.>. This motivates us to check if the non-standard shape of the spectrum can also lead to similar effects or some other deviations of the density profiles of halos from the expected in the ΛCDM cosmology. The shape of density profile of dark matter (DM) halos has been widely discussed over 30 years, and it is known as the cusp problem. It can be summarized as a discrepancy between observations and simulations in the inner slope of radial density profile: simulations predict a profile with the behavior close to r^-1 at small radii (cusp) while observations show the existence of constant density cores at least in some DM-dominated galaxies. More details can be found in, e.g., <cit.>. A number of solutions to the cusp problem have been proposed, which can be divided into four classes: errors in the interpretation of observations, errors in simulations, baryonic physics, and the manifestation of the “new” physics. All the proposed solutions have some drawbacks, so there is no final consensus on the solution of the cusp problem yet <cit.>. Numerical N-body simulations in cosmology are widely used and have been tested for convergence many times <cit.>, however there are still some nuances. First, all the simulations cut the initial power spectrum of density perturbations at some scale (corresponding to the Nyquist wavenumber). Since in the standard WIMP ΛCDM model the spectrum does not drop in amplitude up to very small scales (the free-streaming scale for 100 GeV WIMP is about 10^3 AU <cit.>), the dark matter actually should be very clumpy at small scales. The lack of resolution results in numerical errors close to the Nyquist wavenumber <cit.>. Also, the nonlinear evolution of small-scale perturbations missed by numerical simulations should result in some kind of heating of dark matter particles, or decreasing its mean phase space density. This "heating" can be understood in the context of Lynden-Bell's definition of entropy <cit.>, where it corresponds to a decrease in the "fine-grained" phase space density of dark matter particles. Since a cusp is a region of high phase space density (low entropy), the missing small-scale power may facilitate the formation of cuspy profiles. This has been proposed in <cit.>. Several attempts to simulate halo formation from the free-streaming scale have been made <cit.>, but this required peculiar initial conditions: in <cit.> the box size was limited to 400 pc while in <cit.> the simulated halo was selected in a void many times below the mean density. So these simulations do not fully answer the question of how the perturbations at very small scales (free streaming) interact with perturbations at much larger scales (galaxy scale), because larger scale waves were significantly damped by the choice of the initial conditions. 
On the other hand, high resolution simulations show that with the increase of resolution (the number of particles per halo) the profile changes from the Navarro-Frenk-White (NFW) one <cit.> to the Einasto profile <cit.> with somewhat shallower density in the center. This is expected from the theory proposed in <cit.>. However, the Einasto profile still cannot explain the observations of cored galaxies, since the size of the core in them is much larger than the region in Einasto profile with the shallow density slope. Second source of possible errors in simulations is the fact that the cosmological codes give an approximate solution for the N-body problem (see, e.g., <cit.>). It has been noted that this may result in the cusp being an attractor solution of an approximate N-body <cit.>. Also simulations are prone to artificial disruption of satellites which may bring additional DM particles to halo centers in simulations <cit.>. While the propositions of <cit.> and <cit.> are hard to check or improve, the idea of <cit.> predicts that by introducing additional power on small scales one can compensate the deficit of small scale perturbations in simulations. In this Paper we aim at testing this prediction by adding this power. This can be done in several ways, either by changing the power spectrum, or by introducing small scale random velocities. So we have double interest in simulating universes with bumpy or tilted power spectra. First, such spectra may be physical as they can arise in various inflation models. If such models produce significant amount of cored halos, the density profile can be used as a test for these inflationary models. Second, the addition of small scale power allows us to check the ideas proposed by <cit.> and check if the missing small scale power is promoting cusp formation. § MODELS DESCRIPTION In order to investigate the impact of the spectrum modification on the evolution and the inner structure of dark matter halos, we employed power spectra constructed as the product of the standard ΛCDM spectrum and a certain transform function. In case of the Gaussian bump it remains the same as in our previous work <cit.>: T(k) = 1 + A ·exp( -(log(k)-log(k_0))^2/σ_k^2), where k is a wave number, A, k_0, and σ_k are bump parameters and we assume a value of σ_k=0.1. As before, we would like to point out that the shape (<ref>) is not predicted directly by simple modifications of the inflation model (see, e.g., <cit.>). We consider the Gaussian shape as a simple approximation of the peak shape in various models. Additionally, the transform function for tilted spectra is calculated as follows: T(k) = √(1 + 1/p(k/k_0)^2p+2), where p is constant. The transform function essentially defines a smooth transition from the constant T = 1 to a power-law function T(k) = k^p+1, where the parameter k_0 defines the value of wave number where the transition happens (either 10 or 100, in our case). In the Table <ref> we provide the most relevant parameters of our simulations, including the values of constants from the eqs. (<ref>) and (<ref>). Figure <ref> illustrates shapes of modifications. We have run a series of four dark matter only simulations, employing the zoom technique to achieve high resolution in a specific region of interest. Each simulation utilizes a box size of (5 Mpc/h)^3. Within the zoomed region, the resolution corresponds to 2048 particles, while the intermediate levels are represented by 256, 512, and 1024 particles, respectively. 
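The two transform functions of eqs. (<ref>)-(<ref>) are straightforward to tabulate. The sketch below evaluates them and multiplies a ΛCDM spectrum by the transform, as described above; the placeholder `P_lcdm` is a toy power law standing in for the CLASS output used in the actual pipeline, and the amplitude A and tilt p values are illustrative only (the actual values are those of Table <ref>), while k_0=15, 10 and 100 h Mpc^-1 and σ_k=0.1 follow the text.

```python
import numpy as np

def T_bump(k, A, k0, sigma_k=0.1):
    """Gaussian-bump transform function, eq. (1)."""
    return 1.0 + A * np.exp(-(np.log(k) - np.log(k0)) ** 2 / sigma_k ** 2)

def T_tilt(k, k0, p):
    """Blue-tilt transform function, eq. (2): smooth transition from T = 1 to T ~ k^(p+1)."""
    return np.sqrt(1.0 + (1.0 / p) * (k / k0) ** (2 * p + 2))

# Illustrative use: modify a tabulated LCDM spectrum.
k = np.logspace(-2, 3, 500)                 # wave number, h/Mpc
P_lcdm = 2e4 * (k / 0.05) ** (0.96 - 4.0)   # toy stand-in shape, not the real CLASS spectrum
P_gauss_k15 = P_lcdm * T_bump(k, A=10.0, k0=15.0)   # A is a placeholder amplitude
P_btilt_k10 = P_lcdm * T_tilt(k, k0=10.0, p=0.5)    # p is a placeholder tilt parameter
P_btilt_k100 = P_lcdm * T_tilt(k, k0=100.0, p=2.6)  # p is a placeholder tilt parameter
```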
Three of the simulations utilize modified matter power spectra, while one employs the standard ΛCDM spectrum for comparison. The simulations were run using the publicly available N-body code <cit.>, which is widely used for cosmological simulations. This code utilizes a combined Tree + Particle Mesh (TreePM) algorithm to calculate gravitational accelerations for each particle by decomposing the gravitational forces into a long-range and a short-range term. Notably, the code is designed for MPI parallelization, which results in faster execution and scalability, allowing it to handle a large number of particles with reasonable computational resources. To account for the potential early formation of virialized structures, the simulations with tilted spectra start at z = 1500, the simulation with the Gaussian bump spectrum starts at z = 1000, while the ΛCDM simulation starts at z = 300. The final redshift for all simulations is set to z = 8. This choice aims to minimize potential artifacts arising from the spatial periodicity of the initial conditions within the relatively small simulation box. Initial conditions for the simulations are generated using the publicly available code [https://github.com/ginnungagapgroup/ginnungagap]. The matter power spectrum for each simulation is defined individually by applying the appropriate transform function. For the ΛCDM simulation, the power spectrum is generated using the publicly available code CLASS <cit.>. Importantly, the same initial random seed is used for all simulations, ensuring that they differ solely in the amplitude of the power spectrum. Additionally, the amplitude of the longest-wavelength mode in the generated initial conditions falls within 20% of the theoretical value, mitigating the impact of cosmic variance on the high-mass end of the halo mass functions. For each simulation, 100 snapshots are stored at redshift intervals equally spaced in logarithmic scale, spanning from z = 25 to z = 8. Halo analysis is subsequently performed using the publicly available code <cit.>. This analysis assumes that each halo comprises at least 5000 particles, employs a virial overdensity criterion of 200 ρ_crit, and limits the spatial resolution of the grid to 5/2^18 Mpc/h. We also performed a similar analysis for the case where a halo consisted of at least 50 particles (the default setting) and found no significant differences; therefore, 5000 was taken as a more lightweight option. All simulations share the same cosmological parameters, in agreement with the values obtained by <cit.>, i.e. Ω_m=0.31, Ω_Λ=1-Ω_m=0.69, Ω_b=0.048, h=0.67, n_s=0.96. § DENSITY PROFILES The universality of the NFW profile ρ(r) = ρ_0(r_s/r)(1 + r/r_s)^-2 has been discussed for many years <cit.>. Numerical simulations performed under different assumptions result in the same shape, independent of those assumptions. However, the class of non-power-law spectra of density perturbations has not yet been studied in this respect. To investigate the impact of non-power-law primordial spectra on the internal structure of dark matter halos, we analyze the density profiles obtained from our simulations. Figure <ref> showcases the mean density profiles for halos with a mass M ≃ 10^8-10^9 M_⊙, which roughly corresponds to the mass range where the difference between halos from the ΛCDM model and the modified-spectra models should be the most significant due to extra power at redshifts between z=9 and z=10.
While the profiles exhibit similarities, subtle differences emerge between the various models, particularly in the inner regions. This suggests that modifications of the primordial power spectrum can indeed influence the central density distribution within dark matter halos. To quantify these differences, we employ power-law approximations for the density profiles, as exemplified in Figure <ref> for several of the most massive halos at redshift 8. However, fitting the central part of the halo profile (as was done in e.g. <cit.>) might not be sufficient, as was shown by <cit.>, since the slope of the profile depends on the halo concentration and is shallower for less concentrated halos. Therefore, we attempt to eliminate this bias by fitting the power law only to a specific radius range where the concentration of the different models is expected to be similar, i.e., we fit a power-law function only to the part of the halo where 1/5r_s < r < 1/2r_s. Here r_s is the scale radius of the halo and can be calculated as r_s = R_vir/c, where R_vir is the halo radius and c is the halo concentration (as defined in <cit.>). For an NFW halo, r_s marks the transition between the central and peripheral regions of the halo; therefore, the choice of this radius range means that we focus on the central regions. Unfortunately, we cannot set the range significantly lower than 1/5r_s, as our resolution does not allow that. Note that, as can be seen from Figure <ref>, the range of approximation is slightly different for each halo, and for heavier halos it is shifted towards the periphery. Once we have established the methodology for fitting the power law to the halo profiles from our simulations, we also compare each profile with the corresponding NFW profile, using the calculated r_s values for each given halo, while ρ_0 is calculated from the integrated virial mass M_vir (which we take as the mass of the halo) within the virial radius of the halo r_vir: M_vir = 4 πρ_0 r_s^3 [ ln( (r_s + r_vir)/r_s ) - r_vir/(r_s + r_vir) ]. Further, we calculate the slope of the resulting NFW profile in the same radius range 1/5r_s < r < 1/2r_s, which for the NFW profile does not vary for fixed fractions of r_s (e.g. the slope of the NFW profile at r = r_s is equal to -2 by definition). The top panel of Figure <ref> shows the evolution of the median slope α of the halo profiles (solid lines) in the range 1/5r_s < r < 1/2r_s as a function of redshift between z ≃ 8 and z ≃ 18. The dashed lines show the median slope of the NFW profiles in the same radius range, which, as expected, remains constant at α_NFW≃ -1.5. Additionally, the bottom panel of Figure <ref> shows the percentage of halo profiles from our simulations that have a slope α>α_NFW, i.e. less steep than the corresponding NFW profile in the same radius range 1/5r_s < r < 1/2r_s. The panels indicate that at higher redshifts the median slope of the profiles for all models (including ΛCDM) is significantly steeper than that of the corresponding NFW profile. On the other hand, compared to the ΛCDM model, the median profile slopes behave differently for the different modified-spectra models: the Gaussian bump model gauss_k15 has profiles with significantly smaller slopes, while the model b-tilt_k100 demonstrates significantly higher halo slopes.
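For reference, the fitting procedure just described can be sketched in a few lines: the slope α is obtained from a log-log linear fit restricted to r_s/5 < r < r_s/2, and the corresponding NFW slope in the same range follows analytically from d ln ρ / d ln r = -(1 + 2r/(r_s + r)), which equals -2 at r = r_s and averages to roughly -1.5 over the chosen range, consistent with the α_NFW quoted above. Array and function names are illustrative, not the analysis code used by the authors.

```python
import numpy as np

def profile_slope(r, rho, r_s):
    """Power-law slope alpha fitted to the density profile in r_s/5 < r < r_s/2."""
    mask = (r > r_s / 5.0) & (r < r_s / 2.0)
    slope, _intercept = np.polyfit(np.log(r[mask]), np.log(rho[mask]), 1)
    return slope

def nfw_logslope(r, r_s):
    """Analytic log-slope of the NFW profile: -(1 + 2 r / (r_s + r))."""
    return -(1.0 + 2.0 * r / (r_s + r))

# Quick self-check against the NFW profile itself:
r_s = 1.0
r = np.logspace(-2, 0.5, 200)
rho_nfw = (r / r_s) ** -1 * (1.0 + r / r_s) ** -2
print(profile_slope(r, rho_nfw, r_s))      # close to -1.5 over the fitted range
print(nfw_logslope(np.array([r_s]), r_s))  # exactly -2 at r = r_s
```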
The percentage of halos exhibiting shallower profiles than the NFW profile also differs for most models, such as for b-tilt_k10 and gauss_k15 models it remains constant at approximately 5% and 20% respectively, while for ΛCDM and b-tilt_k100 models it increases with time from 5% and 20%. At smaller redshifts these differences between models become smaller and almost disappear at z ≃ 8, but b-tilt_k10 stays alone. This behavior becomes more apparent if we look at the according distributions of slopes for different models at redshifts z = 8.091 and z = 13.321, as displayed by top panel of Figure <ref>. The distributions exhibit quite significant (and varying) left-side tails, while at smaller redshifts the tails decrease and all 4 models start demonstrating almost identical distributions. Additionally, the bottom panel displays the dependence of halo profile slope from mass of the halo. As can be seen, all the simulations on both panels exhibit approximately the same trend, where smaller mass halos generally have steeper profile slopes, but at higher redshifts the dispersion for smaller mass halos is significantly higher. These findings challenge the universality of the NFW profile and highlight the sensitivity of halo structure to the shape of the primordial power spectrum. Modifications to the power spectrum, such as the Gaussian bump and tilted spectra explored in this work, can lead to significant deviations in the inner density profiles of dark matter halos, particularly for less massive halos and at earlier epochs of the Universe. § COMPENSATION OF THE SMALL SCALE POWER CUT In this Section we consider the impact of the small scale power which was cut from the simulation due to the limited resolution of the initial conditions. The dimensionless power spectrum of density perturbations, Δ(k) ≡ k^3 P(k) behaves as ∼log(k) at large k until the free streaming scale. This scale is usually not resolved in cosmological simulations, so the density perturbations below the Nyquist scale k_Ny = π/(L_box N_1D) are missing. Since Δ(k) grows towards large k, these missing perturbations should have became nonlinear earlier than the perturbations resolved by the simulation. As has been proposed in <cit.> the missing perturbations could generate additional entropy which could destroy cusps. We try to emulate the effect of missing perturbations on density profiles in several ways. First of all, one can add power at small scales resolved by the simulation. This actually was done in our simulations with various non-standard power spectra described in Section <ref>. However, as was shown in Section <ref>, this has not lead to a significant flattening of the cusps. However considering the proposal of <cit.>, one also could directly add the entropy produced by missing perturbations in a form of random velocities. The amount of velocities can be estimated using the linear theory. We assume that when a particular scale becomes nonlinear, the particle velocities are randomized but have amplitudes in accordance with the linear theory. This is supported by measurements of the velocity dispersion in the ΛCDM simulation shown in Fig. <ref>, left panel. In that Figure, velocity dispersions are shown at two different redshifts. At z=25 all the shown scales are in linear regime and the simulation data almost coincides with the linear theory calculation. At z=8 all the plotted scales are in the non-linear regime, but the velocity dispersion is still well described by the linear theory. 
According to this argument, the Nyquist scale in our simulation goes nonlinear at z=25 and linear theory velocities at this scale and time have amplitude of σ_v = 0.5 km/s. To test the impact of random velocities on density profiles we run a set of simulations with random velocities added at the simulation snapshot at z=25. One should note that these random velocities evolve like a decaying mode of perturbations, so they decrease with time as (1+z). Fig. <ref>, left panel, shows that velocities do not decline after perturbations getting nonlinear. If we consider the last snapshot of our simulation, at z=8, the amplitude of linear theory velocities at Nyquist scale is 1.5 km/s, and if we take into account the decay of random velocities, we should set σ_v=4.8 km/s at z=25 to get 1.5 km/s at z=8. This is the maximal velocity dispersion which can be achieved in the frame of this `lost entropy' proposal of <cit.>. Since this behavior of decaying mode results in some uncertainty in the initial velocity dispersion which is needed to compensate the lost small-scale perturbations, we set σ_v=0.5, 1.0, 2.0, 4.0, 8.0 and 16.0 km/s at z=25 and check how it affects the density profiles of halos. The density profiles obtained in these random velocity simulations are shown in Fig. <ref>, right panel. One can see that adding random noise with σ_v = 8 km/s indeed results in the flattening of the cusp, however this velocity is higher than the estimate obtained in theory. These high velocities also produce another visible effect: damping of small scale density perturbations, and, as a result, significant decrease of the number of low massive halos. This is a side effect which should not be present in an ideal simulation with `infinite' resolution. We conclude that the addition of random velocities to particles allow to destroy cusps in density profiles, however the amplitude of the velocities needed for this is higher than expected to compensate the missing small scale power, and also such `compensation' results in an artificial destruction of small halos. § CONCLUSIONS AND DISCUSSIONS We studied the impact of the primordial spectrum on the density profile of gravitationally bound DM halos. All considered models have some enhancement on a scale less than 1 h^-1 Mpc, but it was realized in different ways. One spectrum had a bump at k_0=15 h Mpc^-1, and two spectra are blue-tilted with characteristic scales k_0=10 h and 100 h Mpc with small-scale additional (to standard ΛCDM) slopes 1.5 and 3.6, consequently. More detailed description of considered models can be found in eqs. (<ref>)-(<ref>), the Table <ref>, and the Figure <ref>. We analyzed the evolution of individual and averaged density profiles for the redshift interval from z=18 till z=8 (the latter was the final value in our N-body simulation) and found out that the median profile slopes in all models (including standard ΛCDM one) are steeper in comparison to the NFW profile. The Figure <ref> demonstrates that density profiles of all cosmological models vary with redshift z from cuspy values in the interval from α∈(-3;-2) to -1.5 which is just the value for the slope of NFW profile at r_s/5 (dashed lines). Despite these slopes corresponding to a cusp in the inner part of a halo, the percentage of density profiles with a shallower slope α>α_NFW is close to 5% for b-tilt_k10 model and to 15 - 20% for the rest of models, e.g. we detect in simulations some number of cored halos. 
As for the evolution of this percentage with redshift, it can be considered negligible due to the large uncertainties at high z. Examining the z-evolution of the probability density of finding a halo with a given slope value, we find that it becomes sharper at smaller redshift, showing a tendency for the density profiles of halos to become more uniform. We also study how the variety of profile slopes depends on halo mass and redshift, despite the significant difference in initial conditions between the models. Figure <ref> demonstrates that the low-mass tail of halos (M>10^8M_⊙) at z≃ 13 has a more cuspy slope than more massive halos and than halos at smaller redshift. To clarify a possible way to solve the cusp problem by enhancing the matter spectrum, we also considered a toy model with boosted small-scale random velocities. We find that it results in the abundant generation of cored halos accompanied by suppression of sub-halos. It could also be considered as an elegant solution of the too-big-to-fail problem, but this solution has a high price, namely a rather high value of the velocity dispersion. The work was supported by the Russian Science Foundation (grant number 23-22-00259).
http://arxiv.org/abs/2407.02414v1
20240702164027
Exploring the parameter dependence of atomic minima with implicit differentiation
[ "Ivan Maliyov", "Petr Grigorev", "Thomas D Swinburne" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci", "physics.comp-ph" ]
ivan.maliyov@cnrs.fr thomas.swinburne@cnrs.fr Aix-Marseille Université, CNRS, CINaM UMR 7325, Campus de Luminy, Marseille 13288, France § ABSTRACT Interatomic potentials are essential to go beyond ab initio size limitations, but simulation results depend sensitively on potential parameters. Forward propagation of parameter variation is key for uncertainty quantification, whilst backpropagation has found application for emerging inverse problems such as fine-tuning or targeted design. Here, the implicit derivative of functions defined as a fixed point is used to Taylor expand the energy and structure of atomic minima in potential parameters, evaluating terms via automatic differentiation, dense linear algebra or a novel sparse operator approach. The latter allows efficient forward and backpropagation through relaxed structures of arbitrarily large systems. The implicit expansion accurately predicts lattice distortion and defect formation energies and volumes with classical and machine-learning potentials, enabling high-dimensional uncertainty propagation without prohibitive overhead. We then show how the implicit derivative can be used to solve challenging inverse problems, minimizing an implicit loss to fine-tune potentials and stabilize solute-induced structural rearrangements at dislocations in tungsten. Exploring the parameter dependence of atomic minima with implicit differentiation Thomas D Swinburne July 8, 2024 ================================================================================= § INTRODUCTION Atomic simulations employing interatomic potentials are an essential tool of computational materials science <cit.>. Classical models such as Lennard-Jones potentials are central to the study of glasses and polymer systems, whilst modern, data-driven models are becoming quantitative surrogates for ab initio calculations <cit.>. For solid-state materials, the energy landscape of relaxed atomic geometries is central to exploring thermodynamic, diffusive and mechanical properties <cit.>. Regardless of the interatomic potential employed, changing parameters will change any quantity of interest extracted from a simulation. For simple classical models, parameter variation is essential to explore the model's phenomenology <cit.>. For modern data-driven models which target quantitative accuracy, parameter uncertainties can be estimated by Bayesian regression on training data <cit.> and should be forward propagated to simulation results to bound quantities of interest <cit.>. Any forward propagation scheme must account for the strong correlation between individual energy or force evaluations when calculating e.g. formation energies or dynamical averages, as these will strongly affect (typically reduce) uncertainty on the final simulation result.
Backpropagation of structural modifications to changes in parameters is finding application in training interatomic potentials from experimental data <cit.> or tuning simple interatomic potentials to reproduce desired self-assembly kinetics <cit.>. More recently, `universal' machine learning potentials have shown near-quantitative accuracy across large portions of the periodic table <cit.>. This has raised interest in back-propagation for fine-tuning universal models for specific applications <cit.>, and opened the possibility of targeted design through navigation of a smooth latent composition representation <cit.>. Forward and backpropagation of parameter variation through complex simulations is typically achieved using reverse-mode automatic differentiation (AD) routines, which offer high arithmetic efficiency at the price of large memory requirements  <cit.>. Whilst AD offers clear advantages in implementation, their significant memory burden complicates application to large atomic systems, especially when targeting higher derivatives beyond forces. Existing methods to propagate variation in parameters to variation in simulation results thus typically employ resampling: new parameters are drawn from some distribution <cit.> and simulations are repeated. Whilst conceptually straightforward, back-propagation is not possible, assessing convergence is challenging, and the cost can be potentially very large, especially for simulation results requiring geometry minimization for each potential sample. In this paper, we explore an alternative approach, analytically expanding the structure of relaxed minima to first order in parameter variation, giving a second order expansion of the total energy. The expansion is achieved through evaluation of an implicit derivative <cit.>, i.e. the derivative of a function defined as fixed point, in this case, the atomic structure of a local minimum (figure <ref>a). Our main results are that the implicit derivative enables 1) forward propagation of parameter uncertainties to simulation results for orders of magnitude less computational effort than resampling schemes, allowing rapid propagation of parameter uncertainties and 2) back-propagation of structural variations to target composition-induced structural rearrangements in multi-thousand-atom systems, a challenging task for any other approach (see illustration of these two ideas in figures <ref>b and <ref>c). We implement and compare methods to evaluate the implicit derivative using AD <cit.>, dense linear algebra and a novel sparse operator approach, using the  <cit.> and  <cit.> simulation codes. We find AD routines for the implicit derivative reach GPU memory limits for ∼1000-atom systems even on best-in-class A100 hardware. In contrast, our sparse operator reduces to a constrained minimization in , allowing memory-efficient and highly parallelized evaluation. This innovation allows uncertainty quantification and inverse design studies with the large atomic simulations essential to capture realistic defect structures. For the purposes of backpropagation, our expansion can be used for any form of interatomic potential. However, for the forward propagation in uncertainty quantification, the expansion captures parameter variation in the vicinity of some minimum of the loss. 
In this paper we therefore focus on classical <cit.> and linear-in-descriptor interatomic potentials <cit.> whose loss typically has a well-defined global minimum, rather than the multi-modal loss landscape of neural network potentials <cit.>. The paper is structured as follows. We first define the implicit derivative, the Taylor expansion approximations used and their evaluation using automatic differentiation or linear algebra techniques. We then describe how the implicit derivative can be used in forward and backpropagation of parameter variations. Forward propagation of parameter variation is demonstrated using classical and machine learning potentials to explore lattice distortion and vacancy defect formation <cit.>: the expansion is first tested on vacancies in a binary Lennard-Jones solid, varying the AB interaction strength, before moving to high-dimensional linear-in-descriptor machine learning potentials <cit.>, focusing on the bSO(4) descriptors used in <cit.> or <cit.>. The expansion captures large variations in the vacancy formation energy and volume, demonstrating its use both in uncertainty quantification and in exploring correlative trends in defect properties. Finally, we demonstrate back-propagation of parameter variations for inverse problems, finding parameter variations which stabilize subtle solute-induced dislocation core reconstructions in bcc tungsten <cit.>, and discuss future applications in the context of mechanism-aware inverse design. § RESULTS §.§ Implicit derivative of atomic configurations We consider a system of N atoms in a periodic supercell of volume V, with atomic coordinates ∈ℝ^N×3 and a supercell matrix ∈ℝ^3× 3, V= Det(). Changes to the supercell →+δ are defined to induce homogeneous deformations, as in e.g. energy-volume curves. As a result, we work with scaled atomic coordinates , such that ≡ and the tuple (,) fully defines atomic configurations with periodic boundary conditions and fixed atomic species. With a vector of N_D potential parameters , a potential energy model (, ; ) has stationary configurations (^*_,^*_) satisfying ∇_(^*_, ^*_; )≡ 0, ∇_(^*_, C^*_; )≡ 0 where (^*_,^*_) is one of the exponentially many stationary points in the energy landscape <cit.>. In the following, we only consider minima; extension to the treatment of saddle points will be presented in future work. Under a parameter variation + δ the scaled positions and supercell matrix are defined to change as ^*_+δ = ^*_ + δ∇__^* + 𝒪(δ^2), ^*_+δ = ^*_ + δ∇_^*_ + 𝒪(δ^2), where ∇__^*∈ℝ^N_D× N × 3 and ∇_^*_∈ℝ^N_D×3×3 are implicit derivatives that determine how a stationary configuration changes with the variation of potential parameters. In practical applications, variation is typically constrained; for simplicity, we will only consider fixed-volume simulations or isotropic variations controlled by a homogeneous strain ϵ^*_∈ℝ around a reference supercell C_0 ^*_= [1+ ϵ^*_]_0 , ∇_^*_=(∇_ϵ^*_)_0, where ∇_ϵ^*_∈ℝ^N_D.
Taylor expanding (<ref>) to first order in δ, it is simple to show that ∇__^*, ∇_ϵ_^* solve the system of linear equations [ ∇__^* ∇_ϵ_^* ] [ ∇^2_ ∇^2_ϵ ∇^2_ϵ^⊤ ∇^2_ϵϵ] =- [ ∇^2_ ∇^2_ϵ], where ∇^2_ is the Hessian matrix in scaled coordinates, ∇^2_ϵϵ is proportional to the bulk modulus of the system, -∇^2_ϵ is proportional to the change in atomic forces under a homogeneous strain, and ∇^2_, ∇^2_ϵ are mixed curvatures. Whilst solution of (<ref>) in principle requires 𝒪(N^3) effort due to the Hessian, we introduce a novel Hessian-free solution method below, allowing application to large systems. §.§ Taylor expansion of stationary energies and volumes using implicit derivatives In our numerical experiments, we will compare three levels of approximate solution to the linear equations (<ref>): * constant (c): ∇_ϵ_^*=0, ∇__^*= 0 * homogeneous (h): ∇_ϵ_^*≠0 , ∇__^*= 0 * inhomogeneous (ih): ∇_ϵ_^*=0 , ∇__^*≠ 0, with the full expansion then given the shorthand h+ih. Under changes in parameters , changes in the stationary energy and volume (or equivalently strain) admit the implicit Taylor expansions δ^(ζ)^* ≡δ∇_ + δ H_ζδ^⊤ + 𝒪(δ^3) δ^(ζ)ϵ^* ≡δ∇_ϵ^*_ζ+𝒪(δ^2) , where H_ζ∈ℝ^N_D× N_D is a generalized curvature for a given level of approximation ζ=c,h,ih,h+ih. Expressions for ∇_ϵ^*_ζ and H_ζ are given in the SM. Whilst all approaches predict changes in energy, only ζ=h,ih,h+ih predict changes in structure. For constant volume relaxations, the inhomogeneous ζ=ih expansion will be asymptotically exact. For variable volume relaxations, the full ζ=h+ih expansion will be asymptotically exact, but as we show below the cheaper homogeneous ζ=h expansion can also give accurate results if we are primarily interested in changes to the energy or volume, rather than structure. §.§ Evaluation of the implicit derivative through sparse and dense linear algebra methods In the linear equations (<ref>), the ∇^2_ϵϵU and ∇^2_ϵU derivatives require only a few 𝒪(N) force calls for evaluation. As a result, the homogeneous approximation requires minimal computational effort, but all knowledge of structural changes is missing as ∇_^* is not evaluated. Evaluation of ∇_^* for the inhomogeneous approach requires 𝒪(N^2) finite difference evaluation of the Hessian matrix ∇^2_U and 𝒪(N^3) solution of the dense linear equation (<ref>). Whilst of reasonable cost for small systems (N<2000), the study of extended defects where 10^4<N<10^6 requires significant, typically prohibitive, computational resources and careful use of shared-memory parallel linear algebra techniques <cit.>. To overcome this limitation, we note that the Hessian matrix is highly sparse due to the strong locality of atomic forces. In this regime, efficient solutions of the linear equation (<ref>) can be obtained using iterative algorithms such as . In addition, such algorithms do not require access to every element of the Hessian at each iteration, only a linear operator which gives the action of the Hessian on some vector V, i.e. ℒ( V)= V∇^2_U. Avoiding direct Hessian evaluation can give a much faster time-to-solution.
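For small systems where the dense route is affordable, the fixed-volume block of the linear system above reduces to ∇_Θ X^* ∇^2_XX U = -∇^2_Θ X U, which can be solved directly once the Hessian and mixed curvature have been assembled (e.g. by finite differences). The sketch below is a schematic of this dense formulation only, not the production implementation; a least-squares solve is used so that the translational zero modes of the periodic Hessian are handled gracefully.

```python
import numpy as np

def implicit_derivative_dense(hessian, mixed_curvature):
    """Fixed-volume implicit derivative dX*/dTheta from the dense linear system.

    hessian         : (3N, 3N) array, the Hessian at the minimum
    mixed_curvature : (N_D, 3N) array, the mixed parameter-position curvature
    returns         : (N_D, 3N) array; each row gives the change of the relaxed
                      (flattened) coordinates per unit change of one parameter.
    """
    # Solve dXdTheta @ H = -mixed_curvature  <=>  H.T @ dXdTheta.T = -mixed_curvature.T.
    # lstsq returns the minimum-norm solution, which removes the rigid-translation
    # null space of the periodic Hessian.
    sol, *_ = np.linalg.lstsq(hessian.T, -mixed_curvature.T, rcond=None)
    return sol.T
```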
We define the operator ℒ( V) ≡lim_α→0∇_(^*_+α V, ^*_; )/α , which clearly limits to ℒ( V)= V∇^2_ as desired and only requires 𝒪(N) force calls for evaluation, repeated for each vector [∇__^*]_l∈ℝ^N×3, l∈[1,N_D] of the implicit derivative. Details of our 𝒪(NN_D) massively parallel method to compute ∇_^* in  <cit.> through the solution of (<ref>) using (<ref>) at finite values of α is described in methods section <ref>. In section <ref> we apply the method to enable use of the implicit derivative in large atomic systems. §.§ Evaluation of the implicit derivative with automatic differentiation methods AD-enabled simulation schemes such as  <cit.> can clearly evaluate all terms in equation (<ref>) or the sparse operator approach (<ref>). Recently, implicit differentiation schemes have been implemented in  <cit.>, allowing direct evaluation of e.g. ∇_^* by differentiating through the minimization algorithm chosen in . We have implemented and tested all approaches in AD for the binary Lennard-Jones system described below. Despite the simplicity of the potential form, using AD to evaluate terms in (<ref>) or using AD implicit derivative schemes incurs extremely large memory usage, reaching the 80GB limit on A100 GPUs for only a few thousand atoms, as we detail in the supplementary material (SM). We thus conclude that existing AD schemes for direct evaluation of the implicit derivative or Hessian matrices are ill-suited for application to the thousand-atom systems essential for many materials science problems. In contrast, our sparse operator (<ref>) has the same memory usage as any structural minimization. Whilst still incurring a significantly greater memory burden than non-AD methods implemented in , our sparse operator is ideal for implicit derivative evaluation in AD-enabled schemes. Investigation of how (<ref>) can be used with neural network-based interatomic potentials<cit.> is left for a future study. §.§ Backpropagation of parameter variations for inverse design problems Inverse design aims to produce materials with specified (desirable) properties through inverting structure-property relationships. However, this typically requires high-throughput searches which necessarily cannot afford to perform atomistic simulations of e.g. defect structures and mechanisms to predict mechanical properties. The implicit derivative can be used to make a first step towards gradient-led inverse design, allowing us to find interatomic parameters that stabilise some atomic configuration ^* observed in an ab initio-accurate simulation e.g. DFT or hybrid DFT-ML <cit.>. With ^*_ being the local minimum found when minimizing U(,;) starting from ^*, we can write an implicit loss function L() = 1/2 [^*_ - ^*]:[^*_ - ^*] where we use the notation : to indicates summation over the N atomic sites and 3 spatial indices. The implicit derivative is necessary to compute the derivative of the loss via the chain rule: ∇_ L() = ∇_^*_ : [ ^*_ - ^*]∈ℝ^N_D. An immediate application of (<ref>) is the ability to `fine-tune', or retrain<cit.>, interatomic potentials to reproduce important DFT minima. Fine-tuning has gained increasing interest following the rise of `foundation model' machine learning potentials <cit.>, which are beyond the scope of this study. In practice, one typically includes the loss against the original training database to regularize the fine-tuning fit. However, we have found that minimizing (<ref>) in practice produces only very small perturbations to the final potential. 
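Assembling the implicit loss and its gradient from the chain rule above is then a simple contraction, assuming the implicit derivative dX*/dΘ has already been evaluated (for instance via the sparse operator route). The sketch below uses illustrative names and plain numpy arrays rather than the actual implementation.

```python
import numpy as np

def implicit_loss(x_relaxed, x_target):
    """L(Theta) = 0.5 * sum over atoms and Cartesian components of (X*_Theta - X*)^2."""
    diff = x_relaxed - x_target            # shape (N, 3)
    return 0.5 * np.sum(diff * diff)

def implicit_loss_gradient(dx_dtheta, x_relaxed, x_target):
    """Chain rule: grad_Theta L = dX*/dTheta : (X*_Theta - X*), returned with shape (N_D,)."""
    diff = (x_relaxed - x_target).ravel()                    # (3N,)
    return dx_dtheta.reshape(dx_dtheta.shape[0], -1) @ diff  # (N_D, 3N) @ (3N,)
```

A gradient step on Θ using this gradient, followed by re-relaxation, gives the minimization loop summarized later in the dislocation example.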
The ability to fine-tune interactions is of particular relevance for low energy structures such as dislocation lines<cit.>, which typically require careful weighting in potential fitting<cit.>. In section <ref>, we use implicit loss minimization to find solute substitutions which induce `hard' screw dislocation core reconstruction in tungsten, and fine-tune a tungsten-beryllium potential to correctly reproduce ab initio observations <cit.>. §.§ Prediction of the total relaxed energy in classical potentials with automatic differentiation The binary Lennard-Jones potential is a classical model for nanoclusters and glassy systems <cit.>. The model is defined by six parameters =[ϵ^ LJ_ AA,ϵ^ LJ_ AB,ϵ^ LJ_ BB,σ_ AA,σ_ AB,σ_ BB], with a total energy (,;) = ∑_i ∑_j∈ N_iϵ^ LJ_s_is_j( σ^12_s_is_j/r^12_ij - σ^6_s_is_j/r^6_ij), where s_i is species i, s_i∈[A,B], r_ij is the minimum image distance (as determined by ) between atoms i,j and N_i is the set of neighbors of i. As discussed above, automatic differentiation enabled by was used to study this simple system, with all examples shown using the dense linear algebra approach to evaluate the implicit derivative at constant volume. To simplify the problem, we set ϵ^ LJ_ AA=ϵ^ LJ_ AB=ϵ^ LJ_ BB and σ_ AA=σ_ BB=1, leaving Θ=σ_ AB as the only varying parameter in this example. When σ_AB=1, all atoms are identical and the system is a unary fcc lattice; we additionally remove one atom to form a vacancy and promote additional deformation. When σ_AB≠1 the system becomes a random fcc binary alloy, with lattice distortion in the bulk and around the vacancy (see Fig. <ref>a and <ref>b). Figure <ref>c shows the inhomogeneous implicit derivative around σ_AB=1 that gives an excellent prediction of the total energy and lattice distortion for σ_AB∈[0.95,1.05], with mild disagreement as |σ_AB-1| grows. At constant volume both constant and homogeneous approximations are equivalent and predict no structural change, with significantly higher errors. However, in the remainder of this paper, we focus on modern machine learning potentials. §.§ Prediction of defect formation energies and volumes with machine learning potentials Almost all modern interatomic models use high-dimensional regression techniques from the machine learning community<cit.>. A common first step is to represent atomic environments as N_D per-atom descriptor functions D_l(,,i), l∈[1,N_D] which represent the atomic environment around an atom of index i. In practice, the descriptor functions also have species-dependent hyperparameters which must also be tuned, but in the following, we assume that these are fixed. We use the widely implemented SNAP Bispectrum descriptors <cit.> with N_D=55 (see methods), giving a potential energy U(,;) = ∑_i D(,,i) ≡·(,), where ∈ℝ^N_D is the potential parameter vector and (,)∈ℝ^N_D is the total descriptor vector. The advantage of the linear functional form is that ∇_ U= D and ∇^2_U=∇_ D, required for the solution of equation (<ref>), are readily evaluated without numerical or automatic differentiation schemes. As discussed above, our recently introduced UQ technique <cit.> (methods <ref>) is used to produce a posterior parameter distribution π(), from DFT training data, from which we draw samples to form an ensemble {_m} of 100-1000 potentials. In the supplementary material we detail an application to vacancy defects in pure tungsten, determining π() using data from Goryaeva et al. <cit.>. 
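For the linear-in-descriptor form above, the energy and its parameter gradient are simple contractions of the summed descriptor vector, which is what makes the curvatures needed for the implicit derivative cheap to evaluate. The sketch below uses a random stand-in descriptor array, since the actual bispectrum components would come from the SNAP descriptor implementation rather than being generated by hand.

```python
import numpy as np

def total_descriptor(per_atom_descriptors):
    """D(X, C): sum of per-atom descriptor vectors, shape (N_D,)."""
    return per_atom_descriptors.sum(axis=0)

def linear_potential_energy(theta, per_atom_descriptors):
    """U = Theta . D(X, C) for a linear-in-descriptor model."""
    return theta @ total_descriptor(per_atom_descriptors)

# For this form, grad_Theta U is simply the total descriptor vector D(X, C), so no
# numerical or automatic differentiation with respect to Theta is needed.
rng = np.random.default_rng(0)
D_per_atom = rng.normal(size=(128, 55))   # stand-in: N = 128 atoms, N_D = 55 components
theta = rng.normal(size=55)
print(linear_potential_energy(theta, D_per_atom))
```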
Both the homogeneous and inhomogeneous expansions were found to provide an excellent prediction of the vacancy formation energy and formation volume. The inhomogeneous expansion gave significantly better overall performance for the energy predictions, as expected, with a formation energy error roughly ten times smaller than the homogeneous expansion alone, and is essential to study structural rearrangements as we demonstrate in the next section. We use DFT training data for tungsten from Goryaeva et al.<cit.>, generating 100 samples _m from a parameter distribution using the approach described in <cit.>. The potential samples are suitable for multi-scale uncertainty quantification, for which a more detailed study will be presented elsewhere. Here, we are primarily interested in using the variations of parameter samples away from the reference potential to test our implicit expansion method. To this end, we define an additional `perturbation magnitude' λ and generate samples with (λ,m) = + λ(_m - ), such that λ=0 corresponds to the reference potential and λ=1 corresponds to the original sample _m. We then generated very large (and thus often unphysical) perturbations with λ∈[-25,25], with a step Δλ=0.2, truncating only when the bcc lattice became unstable. This yielded a total ensemble of around 20000 stable potentials. For each potential, we calculated the vacancy formation energy and volume, allowing for relaxation of both structure and supercell volume, meaning that only the full ih+h expansion is expected to be asymptotically exact. The diversity of the resultant dataset allows for a robust test of our implicit Taylor expansions (<ref>) and (<ref>). Figures <ref>a and <ref>b illustrate this approach. As expected, this procedure applied to all stable potentials produced a wide range of very strong parameter perturbations. Fig. <ref>c shows the formation volume variation across the samples as a function of perturbation magnitude λ. Both homogeneous and inhomogeneous expansions provide a nearly perfect prediction of the vacancy formation volume. However, for the formation energy, the inhomogeneous approach is notably more accurate, with errors under 2% across a wide (3 eV) range of formation energies (see Fig. <ref>d). This indicates that whilst the efficient homogeneous expansion offers a very useful prediction of energy and volume changes, the inhomogeneous term allows for accurate prediction at small to moderate perturbations. This asymptotic accuracy is particularly important when using the implicit derivative to solve inverse problems, which we discuss in the next section.
§.§ Implicit loss minimization applied to solute-induced dislocation core reconstruction
In this final section we employ the implicit derivative concept to solve a challenging inverse problem: given a starting potential  and some stationary configurations ^*_, we search for the potential parameters that stabilize a structure as close as possible to some target configuration ^*. As discussed in section <ref>, this has applications for potential fine-tuning, as we demonstrate below. More generally, the ability to find parameters which yield certain desired structures represents a first step towards a range of inverse design strategies, in particular, given the ability of emerging foundation models to smoothly interpolate across chemical space <cit.>.
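Before turning to that demonstration, the perturbation-sampling protocol used for the tests above can be summarized in a short Python sketch; the dictionary layout and function name are our own choices, and the stability screening of the bcc lattice is only indicated by a comment.

import numpy as np

def perturbation_ensemble(theta0, theta_samples, lam_min=-25.0, lam_max=25.0, dlam=0.2):
    # Theta(lambda, m) = Theta0 + lambda * (Theta_m - Theta0); lambda = 0 recovers the
    # reference potential and lambda = 1 the original posterior sample Theta_m.
    lambdas = np.arange(lam_min, lam_max + 1e-9, dlam)
    ensemble = []
    for m, theta_m in enumerate(theta_samples):
        for lam in lambdas:
            ensemble.append({"m": m, "lambda": lam,
                             "theta": theta0 + lam * (theta_m - theta0)})
    return ensemble   # a bcc-lattice stability check would then filter this list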
To demonstrate minimization of the implicit loss function <ref>, we selected a computationally challenging system of a ∼2000-atom tungsten disk with a ⟨111⟩/2 screw dislocation along the disk axis. As detailed in <cit.>, the outer layers of atoms in the disk are fixed to the displacements given by elasticity theory and we impose periodic boundary conditions along the dislocation line direction. Using initial potential parameters _0 for W from Goryaeva et al. <cit.>, the dislocation core relaxes into the `easy' core structure in agreement with ab initio calculations, as illustrated in Fig. <ref>a. For the inverse problem, we assign one `alchemical' atom (the red atom in Fig. <ref>a) with its own independent set of parameters , initially set to _0. We use a simple form for the linear multi-species potential, detailed in the methods (<ref>). Modifying the alchemical potential parameters whilst keeping _0 fixed, we aim to stabilize the `hard' and `split' core structures which are unstable for W <cit.>. We generated the target core configurations with the  <cit.> Python package. As the target structures are for pure, single-element W we do not expect the structure induced by the alchemical solute to give an exact match, but we can monitor the effective dislocation core position using the strain-matching approach detailed in <cit.>. The implicit loss minimization is achieved through the procedure presented in Algorithm <ref>. The loss gradient is computed according to equation (<ref>). We employ an adaptive step size h as detailed in the SM. Figure <ref>b shows the maximal deviation of atomic positions at iteration k for the two target structures. The minimization error decreases significantly during the first steps and saturates after ∼10 iterations for both structures. The error does not reach zero, which we attribute to the target configurations being derived from pure tungsten systems, whereas our minimization involves systems that more closely represent substitutional defects. However, the minimization goals are clearly achieved as seen in Figure <ref> panels c-f: the hard core (Fig. <ref>c,d) and split core (Fig. <ref>e,f) are located at the expected positions denoted by solid triangles in Fig. <ref>c,e. As a final application, we show how the implicit loss minimization can be used to fine-tune an initial interatomic potential to match ab initio training data. Here, our target is a solute-induced reconstruction of the ⟨111⟩/2 screw dislocation core caused by an interstitial Be atom, using the same disk geometry as described above and shown in Fig. <ref>a. The target data was generated using the QM/ML simulation method which embeds a DFT region at the core as detailed in methods <ref>. As shown in Fig. <ref>c, we see that Be induces reconstruction to the `hard' core structure, with the Be interstitial sitting at the center of the dislocation. Using W-Be ab initio training data from Wood et al. <cit.>, we created an initial set of Be interaction parameters using the same linear multi-species interatomic potential as above (methods <ref>) with W parameters _0 set to those from Goryaeva et al. <cit.>. The relaxed structure using the initial potential fit is shown in Fig. <ref>b. It can be seen that in contrast to the QM/ML target, the dislocation core remains in the `easy' configuration, with the Be atom lying outside of the central core region. Following the procedure described above, we then performed implicit loss minimization starting from this initial relaxed configuration.
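The minimization loop itself is simple; a schematic Python version with the adaptive step of the SM might read as follows, where relax_fn and implicit_derivative_fn are assumed user-supplied wrappers around the molecular statics and implicit-derivative evaluations rather than functions of any particular package.

import numpy as np

def implicit_loss_minimization(theta0, relax_fn, implicit_derivative_fn,
                               X_target, n_iter=20):
    # Gradient descent on L(Theta) = 0.5 * || X*_Theta - X_target ||^2.
    theta = np.array(theta0, dtype=float)
    for k in range(n_iter):
        X_star = relax_fn(theta)                            # (N, 3) relaxed positions
        dX_dTheta = implicit_derivative_fn(theta)           # (N_D, N, 3)
        resid = X_star - X_target
        grad = np.einsum('lna,na->l', dX_dTheta, resid)     # dL/dTheta via chain rule
        step_dir = -grad                                    # parameter change for h = 1
        dX0 = np.einsum('l,lna->na', step_dir, dX_dTheta)   # Delta^(0): position change at h = 1
        if not np.any(dX0):
            break
        h = -np.sum(dX0 * resid) / np.sum(dX0 * dX0)        # adaptive step size (SM)
        theta = theta + h * step_dir
    return theta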
As shown in the SM, the implicit loss minimization achieved near-perfect reproduction of the core reconstruction in around 20 iterations. Future work will investigate more sophisticated minimization schemes than the simple gradient descent used here.
§ DISCUSSION
In this paper, we have investigated the use of the implicit derivative of the relaxed atomic structure with respect to interatomic potential parameters, giving a first-order implicit Taylor expansion for the relaxed structures and a second-order expansion for relaxed energies. We detailed how the implicit derivative could be calculated using dense linear algebra, requiring Hessian evaluation, automatic differentiation, or a Hessian-free linear operator approach, which reduces to a constrained minimization in , allowing application to arbitrarily large systems. The implicit expansion enables very efficient forward propagation of parameter uncertainties to simulation results, including the effect of geometry relaxation, essential to capture changes in structure such as strain. This was demonstrated on simple classical models and machine learning models for pure W <cit.>. The implicit expansion was able to capture a wide range of changes in energy and structure, far beyond typical variations associated with potential parameter uncertainty. A forthcoming publication will demonstrate the implicit expansion in a wide-ranging uncertainty quantification study. Beyond uncertainty quantification, our results show that the implicit expansion can also be used to rapidly explore the parameter space of high-dimensional interatomic models, permitting parametric studies that would be intractable with standard methods. In future work, we will explore how the implicit expansion can be used in a correlative study of defect structures and implications for both uncertainty quantification and materials design goals. In addition to the forward propagation enabled by the implicit expansion, we also investigated the use of the implicit derivative in backpropagation of structural changes to parameter changes. Our exploratory applications focused on solute-induced dislocation core reconstruction in W <cit.>, a key feature in understanding plasticity and irradiation damage in bcc metals <cit.>. We showed how the implicit derivative could be used to `fine-tune' parameters from an initial fit against training data for W-Be <cit.>, in order to stabilize the structure seen in ab initio calculations <cit.>. In a first effort towards targeted `alchemical' design applications, we used the implicit derivative to find substitutional solute interaction parameters which stabilized `hard' or `split' dislocation cores in pure W. The success of this effort extends the scope of alchemical machine learning to large-scale simulations essential for mechanistic studies of e.g. plasticity. We anticipate that both approaches will gain ever-increasing application with the advent of general-purpose `universal' machine learning potentials, which will be a focus of future efforts.
§ METHODS
§.§ Implementation of the sparse linear operator as a constrained minimization in 
Equation (<ref>) defines a linear operator which can be used in the iterative solution of the linear equations (<ref>) and thus to evaluate the implicit derivative ∇__^*. To fully exploit the efficiencies afforded by the Hessian sparsity, in this section, we detail how ∇__^* can be implemented in the massively parallel simulation package.
This is achieved through N_D constrained minimizations, one for each column [∇__^*]_l , l∈[1,N_D] of the implicit derivative. We have established, through a wide range of numerical tests using the full dense solution, that the off-diagonal terms ∇^2_ϵ in (<ref>) can be neglected when determining the homogeneous term ∇_ϵ^*_, meaning the inhomogeneous term ∇_Θ_l_^* satisfies [∇__^*]_l∇^2_ + [ B]_l = 0, l∈[1,N_D], where B=∇_ϵ^*_⊗∇^2_ϵ+∇^2_∈ℝ^N_D× N×3. For the linear-in-descriptor potentials used here, the ∇^2_ϵ term can be directly accessed as derivatives of descriptors <cit.> using and related commands in . For the inverse design applications in this paper, which modified only the interaction parameters of a single solute atom without changes to the supercell strain, we have ∇_ϵ^*_=0 and thus B=∇^2_. With an initial parameter vector , we then define, for each parameter index l∈[1,N_D], the modified energy function U(,;) + α [ B]_l·[-^*_]. This modified energy is simple to implement in through the function, with similar ease of implementation in any molecular dynamics package. It is straightforward to show that in the limit α→0 the minimizer ^*_l,α of (<ref>) gives the lth vector [∇_^*_]_l of the implicit derivative ∇_^*_ through [∇_^*_]_l = (^*_l,α - ^*_) / α. Further details regarding hyperparameter scanning for suitable values of α and comparison against the full dense linear solution employing the Hessian matrix are provided in the SM. Like the potential energy, each descriptor function returns a scalar, invariant under rigid transformations or permutation of identical atoms. As a result, any linear or non-linear combination of descriptors satisfies the basic symmetry requirements of an energy function. In the simplified form discussed above, the parameter vector for an S-element SNAP model reads _SNAP=[_1,…,_S], _s=[Θ_1s,…,Θ_N_Ds], giving a total energy _SNAP(;_SNAP) = ∑_i∑_lΘ_ls_i D_l(, S,i). This simplified form of the linear-in-descriptor approach for multi-component systems has been studied in detail elsewhere; extension to more complex forms is left for future work. In summary, the implicit derivative is computed through energy minimization within : for each potential parameter _l, an extra term α (∇^2_) is added to the physical force defined by the SNAP potential, together with the corresponding extra energy term α (∇^2_) ( - ^*_)^⊤, and the implicit derivative with respect to the parameter _l is then found as ∇^*__l = ( - ^*_) / α.
§.§ Evaluation of the implicit derivative using automatic differentiation
The  <cit.> package has a large variety of interatomic potentials implemented in an end-to-end differentiable form using the JAX Python library <cit.>. This allows us to implement the dense and sparse linear algebra schemes described above, using to produce the second derivative matrices in <ref>. Additionally, the end-to-end differentiable structure allows the use of AD implicit differentiation schemes implemented in the Python package. In the supplementary material we provide a detailed report on the computational performance of these methods, showing how the memory requirements become prohibitive for curvature-based schemes at only 1000 atoms even when using a simple Lennard-Jones interatomic potential.
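Returning to the constrained-minimization scheme above, a schematic Python version using a generic quasi-Newton minimizer in place of the molecular dynamics package is given below; the callables energy_fn and force_fn, and the mixed-derivative array B_l, are assumed to be provided by the potential implementation and are not part of any specific code.

import numpy as np
from scipy.optimize import minimize

def implicit_derivative_column(energy_fn, force_fn, X_star, B_l, alpha=1e-4):
    # Minimize U_alpha,l(X) = U(X) + alpha * B_l : (X - X*) and recover
    # [dX*/dTheta]_l = (X*_{l,alpha} - X*) / alpha in the limit alpha -> 0.
    # X_star, B_l: arrays of shape (N, 3); energy_fn/force_fn act on (N, 3) arrays.
    # The SM suggests scaling alpha per parameter, e.g. alpha(l) = alpha_0 / max|B_l|.
    shape = X_star.shape

    def f(x_flat):
        X = x_flat.reshape(shape)
        return energy_fn(X) + alpha * np.sum(B_l * (X - X_star))

    def g(x_flat):
        X = x_flat.reshape(shape)
        return (-force_fn(X) + alpha * B_l).ravel()   # gradient of the modified energy

    res = minimize(f, X_star.ravel(), jac=g, method="L-BFGS-B")
    return (res.x.reshape(shape) - X_star) / alpha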
§.§ Multi-specie ML potentials For systems with multiple atomic species s_i∈[1,N_S], as investigated in sections <ref>,<ref>, we simply assign a new specie-dependent parameter vector as in the original SNAP paper <cit.>, giving a total energy U(,;) = ∑_i []_s_i D(,,i) ≡·(,), where ∈ℝ^N_S× N_D is the total parameter vector, s_i∈[1,N_S] and (,)∈ℝ^N_S× N_D is the total descriptor vector. In typical usage practice, the descriptor functions have species-dependent hyperparameters which must also be tuned, but here we assume these are fixed. §.§ Ab initio QM-ML calculations The ab initio reference data for Be segregation to screw dislocations in W was calculated using QM-ML hybrid simulations <cit.>, which couple ab initio and machine learning potentials. Initial structures were obtained with module <cit.>. The Be segregation calculation used the exact same approach reported in <cit.> for He segregation. Ab initio forces were evaluated using  <cit.> with 10 𝐤-points along the periodic line direction, with a cutoff energy of 500 eV and a minimization force threshold of 0.01 eV/Å. The machine learning force field was a modified potential from <cit.>, as detailed in <cit.>. The QM/ML coupling used a buffer radius of 10 Å, resulting in a total of 246 atoms, of which 168 were in the buffer. We refer the reader to <cit.> for further details. § DATA AVAILABILITY The implicit derivative implementations derived will be publicly available on GitHub following peer review. § ACKNOWLEDGEMENTS IM and TDS gratefully acknowledge support from an Emergence@INP grant from the CNRS. TDS thanks the Institute for Pure and Applied Mathematics at the University of California, Los Angeles (supported by NSF grant DMS-1925919) for their hospitality. TDS and PG gratefully acknowledge support from ANR grants ANR-19-CE46-0006-1 and ANR-23-CE46-0006-1, IDRIS allocation A0120913455. § CONTRIBUTIONS TDS designed the research program and derived the initial theoretical results. IM implemented the sparse operator, designed the implicit loss minimizer, and ran all simulations. PG generated the dislocation structures and performed the QM/ML calculations. IM and TDS wrote the paper. § COMPETING INTERESTS The authors declare no competing interests. 37 fxundefined [1] ifx#1 fnum [1] #1firstoftwo secondoftwo fx [1] #1firstoftwo secondoftwo noop [0]secondoftwo ref[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0] rl [1]href #1 @bib@innerbibempty [Van Der Giessen et al.(2020)Van Der Giessen, Schultz, Bertin, Bulatov, Cai, Csányi, Foiles, Geers, González, Hütter et al.]van2020roadmap author author E. Van Der Giessen, author P. A. Schultz, author N. Bertin, author V. V. Bulatov, author W. Cai, author G. Csányi, author S. M. Foiles, author M. G. Geers, author C. González, author M. Hütter, et al., title title Roadmap on multiscale materials modeling, @noop journal journal Modelling and Simulation in Materials Science and Engineering volume 28, pages 043001 (year 2020)NoStop [Deringer et al.(2019)Deringer, Caro, and Csányi]deringer2019machine author author V. L. Deringer, author M. A. Caro, and author G. Csányi, title title Machine learning interatomic potentials as emerging tools for materials science, @noop journal journal Advanced Materials volume 31, pages 1902765 (year 2019)NoStop [Swinburne and Perez(2020)]swinburne2020automated author author T. Swinburne and author D. 
Perez, title title Automated calculation and convergence of defect transport tensors, @noop journal journal npj Computational Materials volume 6, pages 190 (year 2020)NoStop [Grigorev et al.(2023)Grigorev, Goryaeva, Marinica, Kermode, and Swinburne]grigorev2023calculation author author P. Grigorev, author A. M. Goryaeva, author M.-C. Marinica, author J. R. Kermode, and author T. D. Swinburne, title title Calculation of dislocation binding to helium-vacancy defects in tungsten using hybrid ab initio-machine learning methods, @noop journal journal Acta Materialia volume 247, pages 118734 (year 2023)NoStop [Wales(2003)]wales_energy_2003 author author D. J. Wales, @noop title Energy Landscapes, edited by editor C. U. Press (publisher Cambridge, year 2003)NoStop [Proville et al.(2012)Proville, Rodney, and Marinica]proville2012quantum author author L. Proville, author D. Rodney, and author M.-C. Marinica, title title Quantum effect on thermally activated glide of dislocations, @noop journal journal Nature materials volume 11, pages 845 (year 2012)NoStop [Goodrich et al.(2021)Goodrich, King, Schoenholz, Cubuk, and Brenner]goodrich2021designing author author C. P. Goodrich, author E. M. King, author S. S. Schoenholz, author E. D. Cubuk, and author M. P. Brenner, title title Designing self-assembling kinetics with differentiable statistical physics models, @noop journal journal Proceedings of the National Academy of Sciences volume 118, pages e2024083118 (year 2021)NoStop [Goryaeva et al.(2021)Goryaeva, Dérès, Lapointe, Grigorev, Swinburne, Kermode, Ventelon, Baima, and Marinica]goryaeva2021 author author A. M. Goryaeva, author J. Dérès, author C. Lapointe, author P. Grigorev, author T. D. Swinburne, author J. R. Kermode, author L. Ventelon, author J. Baima, and author M.-C. Marinica, title title Efficient and transferable machine learning potentials for the simulation of crystal defects in bcc Fe and W, https://doi.org/10.1103/PhysRevMaterials.5.103803 journal journal Phys. Rev. Materials volume 5, pages 103803 (year 2021)NoStop [Swinburne and Perez(2024)]swinburne2024parameter author author T. D. Swinburne and author D. Perez, @noop title Parameter uncertainties for imperfect surrogate models in the low-noise regime (year 2024), https://arxiv.org/abs/2402.01810 arXiv:2402.01810 [stat.ML] NoStop [Musil et al.(2019)Musil, Willatt, Langovoy, and Ceriotti]musil2019fast author author F. Musil, author M. J. Willatt, author M. A. Langovoy, and author M. Ceriotti, title title Fast and accurate uncertainty estimation in chemical machine learning, @noop journal journal Journal of chemical theory and computation volume 15, pages 906 (year 2019)NoStop [Thaler et al.(2022)Thaler, Stupp, and Zavadlav]thaler2022deep author author S. Thaler, author M. Stupp, and author J. Zavadlav, title title Deep coarse-grained potentials via relative entropy minimization, @noop journal journal The Journal of Chemical Physics volume 157 (year 2022)NoStop [Batatia et al.(2022)Batatia, Kovács, Simm, Ortner, and Csányi]batatia2022mace author author I. Batatia, author D. P. Kovács, author G. N. Simm, author C. Ortner, and author G. 
Csányi, title title Mace: Higher order equivariant message passing neural networks for fast and accurate force fields, @noop journal journal arXiv preprint arXiv:2206.07697 (year 2022)NoStop [Batatia et al.(2024)Batatia, Benner, Chiang, Elena, Kovács, Riebesell, Advincula, Asta, Avaylon, Baldwin, Berger, Bernstein, Bhowmik, Blau, Cărare, Darby, De, Pia, Deringer, Elijošius, El-Machachi, Falcioni, Fako, Ferrari, Genreith-Schriever, George, Goodall, Grey, Grigorev, Han, Handley, Heenen, Hermansson, Holm, Jaafar, Hofmann, Jakob, Jung, Kapil, Kaplan, Karimitari, Kermode, Kroupa, Kullgren, Kuner, Kuryla, Liepuoniute, Margraf, Magdău, Michaelides, Moore, Naik, Niblett, Norwood, O'Neill, Ortner, Persson, Reuter, Rosen, Schaaf, Schran, Shi, Sivonxay, Stenczel, Svahn, Sutton, Swinburne, Tilly, van der Oord, Varga-Umbrich, Vegge, Vondrák, Wang, Witt, Zills, and Csányi]batatia2024foundation author author I. Batatia, author P. Benner, author Y. Chiang, author A. M. Elena, author D. P. Kovács, author J. Riebesell, author X. R. Advincula, author M. Asta, author M. Avaylon, author W. J. Baldwin, author F. Berger, author N. Bernstein, author A. Bhowmik, author S. M. Blau, author V. Cărare, author J. P. Darby, author S. De, author F. D. Pia, author V. L. Deringer, author R. Elijošius, author Z. El-Machachi, author F. Falcioni, author E. Fako, author A. C. Ferrari, author A. Genreith-Schriever, author J. George, author R. E. A. Goodall, author C. P. Grey, author P. Grigorev, author S. Han, author W. Handley, author H. H. Heenen, author K. Hermansson, author C. Holm, author J. Jaafar, author S. Hofmann, author K. S. Jakob, author H. Jung, author V. Kapil, author A. D. Kaplan, author N. Karimitari, author J. R. Kermode, author N. Kroupa, author J. Kullgren, author M. C. Kuner, author D. Kuryla, author G. Liepuoniute, author J. T. Margraf, author I.-B. Magdău, author A. Michaelides, author J. H. Moore, author A. A. Naik, author S. P. Niblett, author S. W. Norwood, author N. O'Neill, author C. Ortner, author K. A. Persson, author K. Reuter, author A. S. Rosen, author L. L. Schaaf, author C. Schran, author B. X. Shi, author E. Sivonxay, author T. K. Stenczel, author V. Svahn, author C. Sutton, author T. D. Swinburne, author J. Tilly, author C. van der Oord, author E. Varga-Umbrich, author T. Vegge, author M. Vondrák, author Y. Wang, author W. C. Witt, author F. Zills, and author G. Csányi, @noop title A foundation model for atomistic materials chemistry (year 2024), https://arxiv.org/abs/2401.00096 arXiv:2401.00096 [physics.chem-ph] NoStop [Deng et al.(2024)Deng, Choi, Zhong, Riebesell, Anand, Li, Jun, Persson, and Ceder]deng2024overcoming author author B. Deng, author Y. Choi, author P. Zhong, author J. Riebesell, author S. Anand, author Z. Li, author K. Jun, author K. A. Persson, and author G. Ceder, @noop title Overcoming systematic softening in universal machine learning interatomic potentials by fine-tuning (year 2024), https://arxiv.org/abs/2405.07105 2405.07105 NoStop [Nam and Gomez-Bombarelli(2024)]nam2024interpolation author author J. Nam and author R. Gomez-Bombarelli, @noop title Interpolation and differentiation of alchemical degrees of freedom in machine learning interatomic potentials (year 2024), https://arxiv.org/abs/2404.10746 arXiv:2404.10746 [cond-mat.mtrl-sci] NoStop [Schoenholz and Cubuk(2020)]schoenholz2020jax author author S. Schoenholz and author E. D. 
Cubuk, title title Jax md: a framework for differentiable physics, @noop journal journal Advances in Neural Information Processing Systems volume 33, pages 11428 (year 2020)NoStop [Ablin et al.(2020)Ablin, Peyré, and Moreau]ablin2020super author author P. Ablin, author G. Peyré, and author T. Moreau, title title Super-efficiency of automatic differentiation for functions defined as a minimum, in @noop booktitle International Conference on Machine Learning (organization PMLR, year 2020) pp. pages 32–41NoStop [Blondel et al.(2022)Blondel, Berthet, Cuturi, Frostig, Hoyer, Llinares-López, Pedregosa, and Vert]blondel2022efficient author author M. Blondel, author Q. Berthet, author M. Cuturi, author R. Frostig, author S. Hoyer, author F. Llinares-López, author F. Pedregosa, and author J.-P. Vert, title title Efficient and modular implicit differentiation, @noop journal journal Advances in neural information processing systems volume 35, pages 5230 (year 2022)NoStop [Krantz and Parks(2002)]krantz2002implicit author author S. G. Krantz and author H. R. Parks, @noop title The implicit function theorem: history, theory, and applications (publisher Springer Science & Business Media, year 2002)NoStop [Plimpton(1995)]LAMMPS author author S. Plimpton, title title Fast Parallel Algorithms for Short-Range Molecular Dynamics, @noop journal journal Journal Computational Physics volume 117, pages 1 (year 1995)NoStop [Xie et al.(2023)Xie, Rupp, and Hennig]xie2023ultra author author S. R. Xie, author M. Rupp, and author R. G. Hennig, title title Ultra-fast interpretable machine-learning potentials, @noop journal journal npj Computational Materials volume 9, pages 162 (year 2023)NoStop [Del Masto et al.(2024)Del Masto, Baccou, Tréglia, Ribeiro, and Varvenne]del2024insights author author A. Del Masto, author J. Baccou, author G. Tréglia, author F. Ribeiro, and author C. Varvenne, title title Insights on the capabilities and improvement ability of classical many-body potentials: Application to α-zirconium, @noop journal journal Computational Materials Science volume 231, pages 112544 (year 2024)NoStop [Thompson et al.(2015)Thompson, Swiler, Trott, Foiles, and Tucker]Thompson_snap_2015 author author A. P. Thompson, author L. P. Swiler, author C. R. Trott, author S. M. Foiles, and author G. J. Tucker, title title Spectral neighbor analysis method for automated generation of quantum-accurate interatomic potentials, https://doi.org/10.1016/j.jcp.2014.12.018 journal journal J. Comp. Phys. volume 285, pages 316 (year 2015)NoStop [Allen et al.(2021)Allen, Dusson, Ortner, and Csányi]allen2021atomic author author A. E. Allen, author G. Dusson, author C. Ortner, and author G. Csányi, title title Atomic permutationally invariant polynomials for fitting molecular force fields, @noop journal journal Machine Learning: Science and Technology volume 2, pages 025017 (year 2021)NoStop [Podryabinkin and Shapeev(2017)]podryabinkin2017active author author E. V. Podryabinkin and author A. V. Shapeev, title title Active learning of linearly parametrized interatomic potentials, @noop journal journal Computational Materials Science volume 140, pages 171 (year 2017)NoStop [Lysogorskiy et al.(2021)Lysogorskiy, van der Oord, Bochkarev, Menon, Rinaldi, Hammerschmidt, Mrovec, Thompson, Csányi, Ortner et al.]lysogorskiy2021performant author author Y. Lysogorskiy, author C. van der Oord, author A. Bochkarev, author S. Menon, author M. Rinaldi, author T. Hammerschmidt, author M. Mrovec, author A. Thompson, author G. Csányi, author C. 
Ortner, et al., title title Performant implementation of the atomic cluster expansion (PACE) and application to copper and silicon, https://doi.org/10.1038/s41524-021-00559-9 journal journal npj Computational Materials volume 7, pages 1 (year 2021)NoStop [Batzner et al.(2022)Batzner, Musaelian, Sun, Geiger, Mailoa, Kornbluth, Molinari, Smidt, and Kozinsky]batzner20223 author author S. Batzner, author A. Musaelian, author L. Sun, author M. Geiger, author J. P. Mailoa, author M. Kornbluth, author N. Molinari, author T. E. Smidt, and author B. Kozinsky, title title E (3)-equivariant graph neural networks for data-efficient and accurate interatomic potentials, @noop journal journal Nature communications volume 13, pages 1 (year 2022)NoStop [Reali et al.(2021)Reali, Boleininger, Gilbert, and Dudarev]reali2021macroscopic author author L. Reali, author M. Boleininger, author M. R. Gilbert, and author S. L. Dudarev, title title Macroscopic elastic stress and strain produced by irradiation, @noop journal journal Nuclear Fusion volume 62, pages 016002 (year 2021)NoStop [Bartók et al.(2010)Bartók, Payne, Kondor, and Csányi]bartok2010 author author A. P. Bartók, author M. C. Payne, author R. Kondor, and author G. Csányi, title title Gaussian approximation potentials: The accuracy of quantum mechanics, without the electrons, https://doi.org/10.1103/PhysRevLett.104.136403 journal journal Phys. Rev. Lett. volume 104, pages 136403 (year 2010)NoStop [Bartók et al.(2017)Bartók, De, Poelking, Bernstein, Kermode, Csányi, and Ceriotti]Bartok_machine_2017 author author A. P. Bartók, author S. De, author C. Poelking, author N. Bernstein, author J. R. Kermode, author G. Csányi, and author M. Ceriotti, title title Machine learning unifies the modeling of materials and molecules, https://doi.org/10.1126/sciadv.1701816 journal journal Sci. Adv. volume 3, pages e1701816 (year 2017)NoStop [Stukowski(2009)]Stukowski_2009 author author A. Stukowski, title title Visualization and analysis of atomistic simulation data with ovito–the open visualization tool, https://doi.org/10.1088/0965-0393/18/1/015012 journal journal Modelling and Simulation in Materials Science and Engineering volume 18, pages 015012 (year 2009)NoStop [Vitek(1974)]vitek1974theory author author V. Vitek, title title Theory of the core structures of dislocations in body-centred-cubic metals., @noop journal journal Cryst. Latt. Def. Amorp. (year 1974)NoStop [Grigorev et al.(2024)Grigorev, Frérot, Birks, Gola, Golebiowski, Grießer, Hörmann, Klemenz, Moras, Nöhring, Oldenstaedt, Patel, Reichenbach, Rocke, Shenoy, Walter, Wengert, Zhang, Kermode, and Pastewka]Grigorev2024 author author P. Grigorev, author L. Frérot, author F. Birks, author A. Gola, author J. Golebiowski, author J. Grießer, author J. L. Hörmann, author A. Klemenz, author G. Moras, author W. G. Nöhring, author J. A. Oldenstaedt, author P. Patel, author T. Reichenbach, author T. Rocke, author L. Shenoy, author M. Walter, author S. Wengert, author L. Zhang, author J. R. Kermode, and author L. Pastewka, title title matscipy: materials science at the atomic scale with python, https://doi.org/10.21105/joss.05668 journal journal Journal of Open Source Software volume 9, pages 5668 (year 2024)NoStop [Wood et al.(2019)Wood, Cusentino, Wirth, and Thompson]wood2019 author author M. A. Wood, author M. A. Cusentino, author B. D. Wirth, and author A. P. Thompson, title title Data-driven material models for atomistic simulation, https://doi.org/10.1103/PhysRevB.99.184305 journal journal Phys. Rev. 
B volume 99, pages 184305 (year 2019)NoStop [Hachet et al.(2020)Hachet, Ventelon, Willaime, and Clouet]hachet2020screw author author G. Hachet, author L. Ventelon, author F. Willaime, and author E. Clouet, title title Screw dislocation-carbon interaction in bcc tungsten: an ab initio study, @noop journal journal Acta Materialia volume 200, pages 481 (year 2020)NoStop [Kresse and Furthmüller(1996)]Kresse1996 author author G. Kresse and author J. Furthmüller, title title Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set, https://doi.org/10.1103/PhysRevB.54.11169 journal journal Phys. Rev. B volume 54, pages 11169 (year 1996)NoStop [Perdew et al.(1996)Perdew, Burke, and Ernzerhof]Perdew1996 author author J. P. Perdew, author K. Burke, and author M. Ernzerhof, title title Generalized gradient approximation made simple, https://doi.org/10.1103/PhysRevLett.77.3865 journal journal Phys. Rev. Lett. volume 77, pages 3865 (year 1996)NoStop Supplemental Material for: Exploring the parameter dependence of atomic minima with implicit differentiation § IMPLICIT DIFFERENTIATION §.§ General expression derivation Here we consider ^*_ as a stationary configuration that includes the scaled atomic coordinates ^*_ and supercell ^*_. A stationary configuration is defined with a zero-force condition: (^*_; ) = 0. Under a parameter perturbation, + δ, a new stationary configuration ^*_+δ will satisfy: (^*_+δ; +δ) = 0. Hence, under a parameter perturbation, the deferential of the force is zero as well: (^*_+δ; +δ) = δ^*_∇_(^*_; ) + δ·∇_(^*_; ) + 𝒪(δ^2)= 0. We express the variation in coordinates using the implicit derivative definition δ^*_=δ∇_^*_ and obtain δ·[ ∇_^*_∇_(^*_; ) + ∇_(^*_; ) ] = 0 + 𝒪(δ^2). As this holds for any parameter variation δ, the term inside the square brackets is zero. Finally, we will express the force through energy (^*_; ) = -∇_(^*_; ) and get ∇_^*_∇^2_(^*_; ) = - ∇_(^*_; ). Expressing atomic coordinates as = and splitting the position and mixed Hessians on the scale-coordinate and supercell blocks, one gets the equation (5) from the main text. §.§ Homogeneous implicit derivative of strain In this work, we split the full implicit derivative (equation (<ref>)) onto the inhomogeneous and homogeneous contributions. As explained in the main text (Methods A), the inhomogeneous implicit derivative ∇__^* can be computed in with constraint minimization. In this section, we detail our finite difference implementation of the homogeneous part of the implicit derivative, ∇_ϵ^*_. A stationary configuration of a system of volume V^*_ corresponds to zero pressure, i.e. P(V^*_) = - ∂(^*_, ^*_; )/ ∂ V = 0. For isotropic supercell variations, ^*_=[1+ ϵ^*_]_0, this equation can be reformulated in terms of strain: ∂(, ϵ; )/∂ϵ|_^*_, ϵ^*_ = 0. In this section, we neglect the inhomogeneous contribution to the strain derivative and we will omit from the arguments of for clarity. For linear-in-descriptor potentials (e.g. equation (12) in the main text), equation (<ref>) writes ·∂(ϵ)/∂ϵ|_ϵ^*_ = 0. In analogy with the previous section (equation (<ref>)), we consider a system upon a parameter variation +δ: ∂ U(ϵ; + δ)/∂ϵ|_ϵ^*_ + δϵ^*_ = 0. Applying the linear-in-descriptor form of potential energy, we get: (+δ) ·∂(ϵ)/∂ϵ|_ϵ^*_ + δϵ^*_ = 0. Neglecting the term proportional to δδϵ^*_, we get δ·∂(ϵ)/∂ϵ|_ϵ^*_ + δϵ^*_·∂^2 (ϵ)/∂ϵ^2|_ϵ^*_ = 0. We then use the definition of the homogeneous implicit derivative δϵ^*_ = ∇_ϵ_^* δ: δ·[ ∂(ϵ)/∂ϵ + ∇_ϵ_^* ( ·∂^2 (ϵ)/∂ϵ^2) ]_ϵ^*_ = 0. 
Since this equation is valid for any parameter variation δ, we can get the final expression for the homogeneous implicit derivative: ∇_ϵ_^* = - ∂(ϵ) / ∂ϵ/·∂^2 (ϵ) / ∂ϵ^2|_ϵ^*_. For numerical purposes, we evaluate the derivatives of the descriptor vector with finite differences: ∂(ϵ^*_)/∂ϵ≈(ϵ^*_ + Δϵ)-(ϵ^*_ - Δϵ)/2Δϵ; ∂^2 (ϵ^*_)/∂ϵ^2≈(ϵ^*_ + Δϵ)+(ϵ^*_ - Δϵ)-2(ϵ^*_)/Δϵ^2, where Δϵ is typically 10^-3Å. §.§ Implicit derivative of strain including homogeneous and inhomogeneous terms Let us take into account the inhomogeneous contribution in equation (<ref>): ∂ U(^*_+δ^*_, ϵ^*_ + δϵ^*_; + δ)/∂ϵ = 0. Following a similar procedure as in the previous section, we get the h+ih level of approximation for the strain implicit derivative: ∇_ϵ_^* = - ∂(ϵ^*_) / ∂ϵ + ∇_^*_·∂ / ∂ϵ/·∂^2 (ϵ^*_) / ∂ϵ^2, where the force derivative is similarly evaluated with the finite difference approach. §.§ Taylor expansion of energy In this section, we derive the expansion of potential energy of a stationary system, (^*_; ), due to a perturbation in parameters . For clarity we will omit (^*_; ) from (^*_; ) in this section. One has to account for contributions arising from the explicit dependence of on parameters and changes in the stationary configurations ^*_. Here, we first consider the general atomic coordinates ^* and later split them on scaled coordinates ^*_ and supercell ^*_. Under a parameter perturbation +δ, the potential energy expansion reads (^*_ + δ^*_; + δ) = + δ^*_∇_ + δ∇_ + 1/2δ^*_∇^2_δ^*_^⊤ + 1/2δ∇^2_δ^⊤ + δ∇^2_δ^*_^⊤ + 𝒪(δ^3) The term proportional to ∇_ vanishes since ^*_ is a stationary atomic configuration. For the second-order terms, we express δ^*_ using the implicit derivative: δ^*_ = δ∇_^*_. Referring to equation (<ref>), we further substitute ∇_^*_ and get: δ^*_ =-δ∇^2_[∇^2_]^+. The final energy expansion up to terms 𝒪(δ^3) reads: (^*_ + δ^*_; + δ) = + δ∇_ + 1/2δ[ ∇^2_ + ∇^2_ (∇_^*_)^⊤] δ^⊤. In the main text, we outline the energy expansion as follows: δ^(ζ)^* ≡δ∇_ + δ H_ζδ^⊤ + 𝒪(δ^3), where ζ represents the level of approximation. Considering = and employing the implicit derivative definitions for scaled coordinates and strain, ∇__^* and ∇_ϵ_^* (equations (2)-(5) from the main text), we derive the expressions of H_ζ corresponding to each level of approximation: * constant (c): H_c = ∇^2_ * homogeneous (h): H_h = ∇^2_ + ∇^2_ϵ (∇_ϵ^*_)^⊤ * inhomogeneous (ih): H_ih = ∇^2_ + ∇^2_ (∇_^*_)^⊤. § MEMORY AND TIME EFFICIENCY OF IMPLICIT DERIVATIVE EVALUATION In this section, we provide the details on memory and time efficiency of the automatic differentiation (AD) and implementations of the implicit derivative. Given its computational complexity, our focus here will be on the inhomogeneous contribution, ∇__^*. §.§ Automatic differentiation implementation Here, we use the LJ random fcc alloy (presented in the main text, Results F) as a test system. We compute the implicit derivative with AD using three approaches: 1) Computing the pseudo-inverse of the Hessian matrix with dense linear algebra solution, called dense. 2) Finding ^*_+δ with minimization (gradient descent method was used in this work) and applying AD to the entire pipeline of functions (potential energy and its derivatives) with Python library, called jaxopt. 3) Sparse linear operator technique (Results C in the main text) within the AD framework, called sparse. For this study, we used the best-in-class, 80GB graphic card tailored for scientific computations. 
To track the usage of the GPU memory for each solver and system size, we used the NVIDIA Management Library, through the Python interface provided by the package. Figure <ref> shows the time (panel a) and memory (panel b) required for inhomogeneous implicit derivative evaluation as a function of number of atoms in the system. The inverse approach shows the best time performance, however, it runs out of 80GB GPU memory at a system of 500 atoms. The second fastest, jaxopt method, allows one to achieve system sizes of up to 2000 atoms before saturating the memory. Lastly, sparse technique, is the most memory efficient and reaches the systems of up to 7000 atoms on a single GPU. We would like to emphasize that while the methods based on AD provide unique advantages such as computational efficiency and ease of implementation, their substantial memory consumption significantly limits their applicability for large-scale simulations. §.§ Implicit derivative implementation using LAMMPS package Here, we discuss the time and memory efficiency of the inhomogeneous implicit derivative implementation using the software. The test system is a vacancy in bcc tungsten with potential (Results G in the main text). We used four CPU nodes with two CPUs and 512 GB of RAM per node. We monitored RAM usage with the tool. We test the efficiency of the dense and sparse approaches presented above. Additionally, we explore the efficiency of the constraint energy minimization approach, called energy, that is described in the main text (Methods A). As seen from Figure <ref>, the energy approach is the most time- and memory-efficient for large systems. Due to the efficient massive parallelization of the software, this method can be effectively scaled and limited only by the amount of available compute resources. § LAMMPS IMPLEMENTATION: ACCURACY AND NUMERICAL DETAILS §.§ Accuracy of position change predictions In this section, we present a comparison of three methods for evaluating the inhomogeneous implicit derivative implemented within the package. The test system is a vacancy in bcc tungsten and potential parameters are set according to the Results G section of the main text, equation (13), (λ,m) = + λ(_m - ), with a strong perturbation of λ=40. The predicted position changes are computed as δ^* pred = ^* + δ∇_^*_, where the inhomogeneous implicit derivative ∇_^*_ is computed with dense, sparse, or energy methods, δ=λ(_m - ), and positions ^* are obtained through the energy minimization of the system with parameters . The true position changes are calculated as follows δ^* true = ^*() - ^*(), where ^*() are the minimized positions of a system with parameters . As demonstrated in Fig. <ref>a, there is a remarkable agreement between the true and predicted positions, with negligible differences among the three computational methods. Given the low computational cost and memory requirements, the energy method stands out as the optimal choice for inhomogeneous implicit derivative evaluation. §.§ Numerical details on LAMMPS constraint minimization Here, we detail our constraint energy minimization approach (energy) to compute the inhomogeneous implicit derivative implemented in . As explained in the main text, the implicit derivative corresponding to a parameter index l is obtained as [∇_^*_]_l = (^*_l,α - ^*_) / α. We have found that the optimal values of α should be computed for each parameter _l separately as follows α(l) = α_0/max(| ∇^2_ |), where α_0 is a constant. 
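The per-parameter scaling of α recommended above can be written as a one-line helper; this is a sketch, and mixed_hessian_row, standing for the second-derivative row entering α(l), is our own naming assumption.

import numpy as np

def alpha_for_parameter(mixed_hessian_row, alpha_0=1e-5):
    # alpha(l) = alpha_0 / max|.|, with alpha_0 in the window [1e-6, 1e-4]
    # found to be optimal in the following discussion.
    return alpha_0 / np.max(np.abs(mixed_hessian_row))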
Figure <ref>b presents the dependence of the error of the implicit derivative prediction as a function of the α_0. The error is computed as δ^* true - δ^* pred/δ^* true. For α_0<10^-7, the error is large due to limitations in numerical precision. For larger α_0 values, the error does not change. However, the minimization time increases significantly for α_0≥10^-3. Therefore, we conclude that values of α_0∈[10^-6;10^-4] are optimal for the energy method. For the constraint energy minimization, we use the algorithm implemented in . § INVERSE DESIGN §.§ Adaptive step for loss minimization This section describes the adaptive step calculation for the inverse design applications. As explained in the main text (Results H), at iteration k+1 of the minimization procedure, the potential parameters are updated as ^(k+1) = ^(k) - h ∇_ L(^(k)) and positions as ^*_^(k+1) = ^*_^(k) + (^(k+1) - ^(k)) ∇_^*_^(k). Accordingly, the loss at iteration k+1 is L(^(k+1)) = 1/2^*_^(k) + hΔ^(0)_^(k) - ^*^2, where Δ^(0)_^(k) is the change in atomic positions with step h=1. Then, the change in loss at a given iteration is Δ L(^(k+1)) ≡ L(^(k+1)) - L(^(k)) = h Δ^(0) ⊤_^(k)(^*_^(k) - ^*) + 1/2 h^2 Δ^(0)_^(k)^2. Finally, the step h(k) that minimizes the loss at iteration k can be found as h(k) = - Δ^(0) ⊤_^(k)(^*_^(k) - ^*) /Δ^(0)_^(k)^2 . §.§ W-Be POTENTIAL FINE-TUNING Figure <ref> presents the error minimization for the potential fine-tuning for the W-Be system presented in the main text (Results H).
http://arxiv.org/abs/2407.02592v1
20240702182846
Optimized Receiver Design for Entanglement-Assisted Communication using BPSK
[ "Rahul Bhadani", "Ivan B. Djordjevic" ]
quant-ph
[ "quant-ph", "math.OC", "physics.optics" ]
Department of Electrical and Computer Engineering, The University of Alabama in Huntsville, USA rahul.bhadani@uah.edu, rahulbhadani@email.arizona.edu Department of Electrical & Computer Engineering, The University of Arizona, Tucson, USA College of Optical Sciences, The University of Arizona, Tucson, USA ivan@email.arizona.edu § ABSTRACT The use of pre-shared entanglement in entanglement-assisted communication offers a superior alternative to classical communication, especially in the photon-starved regime and highly noisy environments. In this paper, we analyze the performance of several low-complexity receivers that use optical parametric amplifiers. The simulations demonstrate that receivers employing an entanglement-assisted scheme with phase-shift-keying modulation can outperform classical capacities. We present a 2x2 optical hybrid receiver for entanglement-assisted communication and show that it has a roughly 10% lower error probability compared to previously proposed optical parametric amplifier-based receivers for more than 10 modes. However, the capacity of the optical parametric amplifier-based receiver exceeds the Holevo capacity and the capacities of the optical phase conjugate receiver and 2x2 optical hybrid receiver in the case of a single mode. The numerical findings indicate that surpassing the Holevo and Homodyne capacities does not require a large number of signal-idler modes. Furthermore, we find that using unequal priors for BPSK provides roughly three times the information rate advantage over equal priors. Optimized Receiver Design for Entanglement-Assisted Communication using BPSK Ivan B. Djordjevic July 8, 2024 ============================================================================ § INTRODUCTION Quantum Information Processing (QIP) has seen tremendous progress in recent decades, with multiple research directions exploring quantum sensing, covert communication, quantum cryptography, and more. A quantum channel is used to transfer quantum information from one party (known as Alice) to another party (known as Bob). In the case of a perfect channel, the quantum information is transferred intact, but if the channel is noisy, the quantum information undergoes some changes. Quantum channels can also be used to transmit classical information. Additionally, if the channel is noisy within certain limitations, the quantum channel can be used to share entanglement between Alice and Bob. The use of pre-shared entanglement can enhance classical capacity and protect against an adversary, commonly referred to as Eve <cit.>. Recent experiments have shown that even in entanglement-breaking scenarios, the rate of entanglement-assisted (EA) communication can be much higher than communication without entanglement <cit.>. The ratio /C, where is the entanglement-assisted capacity and C is the Holevo-Schumacher-Westmoreland (HSW) capacity in the classical regime, diverges logarithmically with the inverse of the signal power over a lossy and noisy bosonic channel <cit.>. Recent efforts have been made to design receivers for EA communication, where authors have utilized the Gaussian approximation of the cumulative distribution function to calculate the Bit Error Rate (BER) <cit.>. The previously proposed receiver design is limited to a demonstration using Binary Phase-Shift Keying (BPSK) with repetition coding over more than 10^6 bosonic modes that occupy the entire C-band and a portion of the L-band. 
In this work, we analyze the receiver design for entanglement-assisted (EA) communication using Optical Parametric Amplifiers (OPAs) introduced in <cit.> and expand upon previous results to determine the optimality of the receiver design. We show that EA communication does not need to occupy the entire C-band. Additionally, we analyze a 2x2 optical hybrid-based receiver for EA communication that is suitable for implementation in integrated optics and quantum nanophotonics. In our scheme, optical phase conjugation is performed on the transmitter side when signal photons are brighter, rather than on the receiver side where the signal photons are buried in noise and highly attenuated. A comparison of phase conjugation on the transmitted side versus the receiver side can be found in <cit.>. We further propose an optimized hypothesis testing scheme and demonstrate numerically that the optimized receiver design provides a superior communication capacity compared to capacity without entanglement assistance. When using the BPSK modulation format to represent digital information, we find that non-equal priors perform at least three times better in terms of information rate compared to an equal prior encoding scheme. The development presented in this work is an extension of <cit.>. The rest of the paper is organized as follows. In Section <ref>, we provide a brief review of entanglement-assistance with mathematical formalism necessary for the rest of the paper. In Section <ref>, we present an overview of the receiver design schemes for entanglement-assisted communication, including the optical parametric amplifier-based receiver design with threshold detection, the optical phase conjugation receiver, and the 2x2 optical hybrid-based joint receiver proposed in previous work <cit.>. These receiver designs are then evaluated in Section <ref>. §.§ Notations Used in the Paper |·⟩ is used for ket-notation in quantum mechanics, equivalent to a vector notation in linear algebra. The Hermitian conjugate of the vector, ⟨·|, is referred to as bra-notation. The scalar product of two vectors |ψ_1⟩ and |ψ_2⟩ is denoted by ⟨ψ_1||ψ_2⟩. Additionally, the ket-notation |α⟩ represents a coherent state of amplitude α. The imaginary unit or a complex number √(-1) is represented by j. Random variables X and Y denote the input and detected states, respectively. The measurement operator is represented by Π. Shannon's entropy is denoted by H(·) and mutual information is represented by I(·,·). Probabilities are written as p, while conditional probabilities are represented as p_Y|X and conditioned on Y given X. The binomial coefficient is represented by MN. The tensor product is represented by ⊗ and the cumulative distribution function of a statistical distribution is represented by ℱ. § ENTANGLEMENT ASSISTED CLASSICAL COMMUNICATION CONCEPT Quantum entanglement is a phenomenon where two particles are strongly correlated, such that the state of one particle immediately provides information about the state of the other particle, no matter how far apart they are. These particles, such as photons or electrons, are individual systems, but they remain connected even when separated by vast distances, forming a composite system <cit.>. 
As an example, given two basis vectors {|0⟩_A,|1⟩_A } in Hilbert space _A and {|0⟩_B, |1⟩_B } in Hilbert space _B, the following is an entangled state:
(1/√(2)) ( |0⟩_A ⊗|1⟩_B - |1⟩_A ⊗|0⟩_B )
When a composite system is in the state (<ref>), it is impossible to attribute a definite pure state to either system A or B. Although the von Neumann entropy of the whole state is zero, the entropy of the subsystem is greater than zero, indicating the systems are entangled. Compared to classical communication, entanglement enhances communication by increasing the number of messages that can be sent perfectly over the channels, resulting in higher one-shot zero-error capacity and increased security <cit.>. However, <cit.> does not explain what kind of measurement device and receiver scheme the experimentalists used. Theoretical proofs and discussions of entanglement-assisted communication can be found in <cit.>. A laboratory experiment demonstrating the superiority of entanglement-assisted communication was recently conducted in <cit.>. In entanglement-assisted classical communication, entangled states can be distributed through either optical fibers or satellites and stored in quantum memories. The classical data is transmitted by Alice using the signal photon of the entangled pair, which is affected by noise and loss in the quantum channel. On the receiver side, Bob uses the idler photon of the entangled pair to determine what was transmitted by employing an optimal quantum receiver. The overall design is illustrated in Figure <ref>. Error correction can be applied to the quantum states to restore the transmitted information and mitigate the effects of decoherence.
§ RECEIVER DESIGN FOR EA COMMUNICATION
In entanglement-assisted communication, two-mode Gaussian states are generated through spontaneous parametric down-conversion (SPDC) of entangled-photon pairs <cit.>. The SPDC source is a broadband source with a number of modes M = T_m W, where W is the phase-matching bandwidth and T_m is the measurement interval, and it generates M independent pairs of signal-idler photons in space and time denoted by their annihilation operators ^(m), ^(m), with m ∈ [1, M]. These pairs are prepared in identical entangled two-mode squeezed vacuum (TMSV) states, which can be represented in a Fock state basis as in Equation (<ref>)
|ψ⟩_si = ∑_n=0^∞√(N_s^n/(N_s +1)^n+1)|n⟩_s|n⟩_i
where N_s is the mean photon number in the signal mode. The mean photon number for the idler mode is N_i = N_s <cit.>. TMSV belongs to a class of Gaussian states, where an M-mode Gaussian state ρ̂ consisting of modes ^(m), m ∈ [1, M] is characterized by the mean and variance of their respective quadrature field operators such that ^(m) = ^(m) + j ^(m). The covariance matrix for a TMSV state is given by
Λ_TMSV =
[ 2N_s + 1      0        C        0;
       0     2N_s + 1    0       -C;
       C        0     2N_s + 1    0;
       0       -C        0     2N_s + 1 ]
= [ (2N_s + 1) I    C Z;
        C Z    (2N_s + 1) I ]
where C = 2√(N_s (N_s +1)), and I and Z are 2×2 Pauli matrices; the other two Pauli matrices are X and Y [<https://qiskit.org/documentation/stubs/qiskit.quantum_info.Pauli.html>] <cit.>. If we consider the Phase-Shift Keying (PSK) modulation scheme for communication, then mathematically, we can use the unitary operator _θ = e^jθ^† to denote the rotation of the base annihilation operator . In transmitting information using entangled photons generated from SPDC, the signal photon of the signal-idler pair is used while the idler is pre-shared before transmission occurs.
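As a small numerical aid, the TMSV covariance matrix above can be assembled as follows; this is a NumPy sketch and the function name is our own.

import numpy as np

def tmsv_covariance(N_s):
    # 4x4 TMSV covariance matrix [(2Ns+1) I, C Z; C Z, (2Ns+1) I]
    # with C = 2*sqrt(Ns*(Ns+1)) and Z = diag(1, -1).
    C = 2.0 * np.sqrt(N_s * (N_s + 1.0))
    I2 = np.eye(2)
    Z = np.diag([1.0, -1.0])
    return np.block([[(2 * N_s + 1) * I2, C * Z],
                     [C * Z, (2 * N_s + 1) * I2]])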
In order to transmit information, Alice modulates the signal _s' using a phase modulator to apply a rotation of θ. The signal then passes through a thermal, lossy Bosonic quantum channel. The received photon mode (after passing through the communication channel) at Bob's end is denoted by _R = _R'e^jθ where _R' is the base photon mode at the receiving end. Bob uses the undisturbed idler part of the pre-shared entangled photon pair and an optimal quantum detector to perform hypothesis testing and determine which symbol was transmitted. For simplicity, we will drop the mode notation from the annihilation operator. Under the phase-encoding scheme, the covariance matrix of the return-idler pair _R, _I is given by (2N_R + 1) C_ηRe[e^jθ (- j)] C_ηRe[e^jθ (- j)] (2N_I + 1) where N_R=η N_s+N_B, C_η = 2√(η N_s (N_s +1)), η is the transmittivity of the Bosonic channel, and N_B is the mean photon number of the thermal mode. In the case of a pre-shared entangled state, the idler is assumed to be undisturbed as it has been shared through fiber optics or satellite and is stored in quantum memory. In such a case, attenuation experienced by the idler is negligible. Hence, at the receiver side, the idler mean photon number N_I = N_i = N_s. As the signal mode passes through a thermal lossy bosonic channel, the signal mode is altered and referred to as the return mode on the receiver side with mean photon number N_R. §.§ OPA-based receiver with threshold detection A joint detection receiver for state discrimination of EA communication consists of an optical parametric amplifier (OPA). On the receiver side, an optical parametric amplifier (OPA) is used to combine the return-idler pair, as shown in Figure <ref>. The return and idler modes are evolved as given by Heisenberg's picture <cit.>: = √(G)_R + √(G-1)_I^† = √(G)_I + √(G-1)_R^† where G is the gain of the OPA, such that G = 1 + ϵ and ϵ << 1. OPA receiver can be used to combine and amplify the return-idler pair using a strong local pump. This gives rise to Equation (<ref>). At the output ports, a photodetector is used for photon counting, and a threshold detection rule is applied to make state discrimination. We further assume an ideal OPA, where the gain G is fixed. The photodetector outputs are designated as and at two ports, referred to as the return and idler outputs, respectively. For each output, the mean photon number is given by the expectation ^† or ^†, depending on whether threshold detection is made at the signal output port or idler output port. The photocurrent operators and their expectations are given by Equation (<ref>) and Equation (<ref>). _1(θ) = ^† = G(ηN_s + N_B) + (G-1)(1 + N_s) + 2cosθ√(G(G-1))√(ηN_s (N_s + 1)) _2(θ) = ^† = GN_s + (G-1)( 1 + ηN_s + N_B) + 2cosθ√(G(G-1))√(ηN_s (N_s + 1)) The derivation is provided in Appendix <ref>. For practical communication, consider that information is encoded using repetition codewords that employ binary phase-shift keying (BPSK) modulation with phases θ∈0, π. Decoding BPSK can be modeled as hypothesis testing: if hypothesis H_0 is true, then the BPSK symbol with θ = 0 was transmitted, and if hypothesis H_1 is true, then the symbol with θ = π was transmitted. In this paper, we do not discuss optimal encoding, which is beyond the scope of this paper. However, BPSK is a suitable choice for weak signals, as it is power-efficient <cit.>. To allow for efficient error correction, repeated PSK codewords consisting of M signal-idler pairs are used in EA communication <cit.>. 
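Before describing the joint-detection statistics, the per-mode mean photon numbers at the two OPA output ports can be transcribed into a short helper; this is a sketch under our own naming conventions, with default parameter values taken from the representative operating point quoted later in the text.

import numpy as np

def opa_mean_photons(theta, G=1.1, eta=0.01, N_s=0.01, N_B=1.0):
    # N1: return-port output, N2: idler-port output, per mode, for BPSK phase theta.
    cross = 2.0 * np.cos(theta) * np.sqrt(G * (G - 1.0)) * np.sqrt(eta * N_s * (N_s + 1.0))
    N1 = G * (eta * N_s + N_B) + (G - 1.0) * (1.0 + N_s) + cross
    N2 = G * N_s + (G - 1.0) * (1.0 + eta * N_s + N_B) + cross
    return N1, N2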
In a joint-detection scheme, the receiver mixes all M received modes and counts the total number of photons at the output ports. The joint detection state in this case becomes an M-fold tensor product ρ^⊗ M, with identical zero-mean thermal states, and the per-mode mean photon number is given by N̄_1(θ) or N̄_2(θ), depending on which output port of the OPA we use. An optimum joint measurement for state discrimination requires photon counting at an output port and thus deciding between two hypotheses using the total photon number N over M modes <cit.>. Under such a scenario, the probability mass function (pmf) is negative binomial with mean MN̄_i(θ) and standard deviation σ(θ) = √(MN̄_i(θ)(N̄_i(θ) + 1)), given by <cit.>:
P_OPA(n|θ; M, i) = \binom{n+M-1}{n} (N̄_i(θ)/(1 + N̄_i(θ)))^n (1/(1 + N̄_i(θ)))^M
where i ∈ {1, 2}, and \binom{n+M-1}{n} is the binomial coefficient. Equation (<ref>) can be approximated as a Gaussian distribution with mean MN̄_i(θ) and standard deviation σ(θ)=√(MN̄_i(θ)(N̄_i(θ)+1)) for sufficiently large M (see Appendix <ref>). At the detector end, we use threshold detection and decide in favor of H_0 if the total number of photons detected is N > N_th(θ), otherwise we choose H_1 for N ≤ N_th(θ), where N_th(θ) is the threshold number of photons, which is a function of the phase θ. A suitable value for the threshold number of photons is chosen according to the scheme described later in Section <ref>.
§.§ Optical Phase Conjugation Receiver with Threshold Detection
OPA can also be used differently, where the return mode _R interacts with the vacuum mode _v to produce √(G)_v + √(G-1)_R^†, which becomes _c=√(2)_v + _R^† for G=2. By mixing the idler with _c using a 50-50 beamsplitter, we get two modes (1/√(2))(_c±_I). The outputs from the two arms are fed to a balanced detector, and their difference is measured as a photocurrent. We call this the Optical Phase Conjugate Receiver (OPC receiver). Consider the schematic of the OPC receiver shown in Figure <ref>. For the case of BPSK, the mean photon operators of the two output arms of the beamsplitter are given by
_A/B^†_A/B = (1/2)[ (G-1)_R _R^†±√(G-1)_R_I ±√(G-1)_I^†_R^†+ _I^†_I ]
with + sign for arm A and - sign for arm B. In Equation (<ref>), the _v term does not appear as it denotes the vacuum mode. We adopt a joint-detection scheme similar to the one used for the OPA receiver discussed in Section <ref>, containing M modes for error correction. The difference in the mean photon number detected at the two photodetectors of the OPC is converted to a photocurrent with a photocurrent operator given by Equation (<ref>), setting G = 2.
= _A^†_A - _B^†_B = √(G-1)_R_I + √(G-1)_I^†_R^†
N_OPC(θ) = 2cosθ√(ηN_s(N_s + 1))
since _R = _R'e^jθ, _R'_I = √(ηN_s (N_s + 1)), and _R_R^† = _R^†_R + I. The variance σ^2_OPC is given by Equation (<ref>), setting G = 2.
σ^2_OPC(θ) = N_s(ηN_s + N_B + 1 ) + (N_s + 1)(ηN_s + N_B + 1) - 2ηN_s (N_s + 1 )cos2θ - 4ηN_s (N_s + 1)cos^2θ
At the detector end, the decision scheme uses threshold detection, similar to the OPA-based receiver design discussed in Section <ref>.
§.§ 2x2 Optical Hybrid-based Joint Receiver with Threshold Detection
In this section, we describe a practical receiver design using a 2x2 optical hybrid for EA communication. An optical hybrid-based joint detection scheme is suitable for EA communication as it can be directly implemented in integrated optics and quantum nanophotonics. For a two-dimensional constellation, a 2x2 optical hybrid receiver can be used, as shown in Figure <ref>.
A detailed discussion of the optical hybrid receiver design can be found in <cit.>, where Gaussian modulation has also been discussed. The scattering matrix of the 2x2 optical hybrid is described by Equation (<ref>) = e^jϕ_1√(1-κ) √(1-κ) √(1-κ) e^jϕ_2√(κ) where κ is the power-splitting ratio of Y-junction in a 2x2 optical hybrid; and ϕ_1, and ϕ_2 are phase shift parameters <cit.>. Return and idler at the receiver are transformed based on the scattering matrix given in Equation (<ref>). Â_R Â_I = â_R â_I . We consider equal power splitting set by κ = 0.5 and write the scattering matrix as Â_R Â_I = 1√(2)e^jϕ_1 1 1 e^jϕ_2 â_R â_I For BPSK, _R = _R'e^jθ with θ∈{0,π}. The photocurrent operator is given by _OH = 12e^-jθ ( e^-jϕ_1 - e^jϕ_2 ) _R'^†_I + 12e^jθ ( e^jϕ_1 - e^-jϕ_2 ) _I^†_R' The expectation of photocurrent is given by N_OH = _OH = 12e^-jθ√(ηN_s (N_s + 1))( e^-jϕ_1 - e^jϕ_2 ) + 12e^jθ√(ηN_s (N_s + 1)) ( e^jϕ_1 - e^-jϕ_2 ) In this paper, we consider a special case of 2x2 optical hybrid receiver where ϕ_1 = 0 and ϕ_2 = π, for which, N_OH = 2√(η N_s (N_s + 1))cosθ. The variance of the photocurrent operator is given by σ^2_OH = _^2 - _^2 = 14|e^jϕ_1 -e^-jϕ_2|^2 (2 N_R N_I + N_R + N_I ) + ηN_s (N_s + 1)4[ ( e^-2jθ - e^-2jθ) ( e^-jϕ_1 - e^jϕ_2 ) ^2] + ηN_s (N_s + 1)4[ ( e^2jθ - e^2jθ) ( e^jϕ_1 - e^-jϕ_2 ) ^2] - ηN_s (N_s + 1)2 e^-jθe^jθ | e^jϕ_1 - e^-jϕ_2|^2 For equal prior BPSK symbols, e^± 2jθ = (e^± 2j· 0 +e^± 2j·π)/2 = 1. For non-equal prior symbols with priors p_0 and p_1, e^± 2jθ is calculated as p_0 e^± 2j · 0 +p_1e^± 2j·π which is still 1. Further, regardless of phase value θ∈{0,π} for BPSK symbols, e^± 2jθ = cos 2θ. Putting these values in Equation (<ref>), and considering special case of ϕ_1 = 0 and ϕ_2 = π, the variance for BPSK is σ^2_OH(θ) = (2 N_R N_I + N_R + N_I ) + 2ηN_s (N_s + 1)(1-cos2θ) -2ηN_s (N_s + 1) where N_R = η N_s + N_B and N_I = N_s. Similar to OPA and OPC receiver design, the decision scheme uses threshold detection for state discrimination. Since the target of this paper is a highly noisy and lossy environment, we choose N_B = 1, N_s = 0.01, η = 0.01, and G = 1.1 as a representative of such a condition. § EVALUATION OF ENTANGLEMENT-ASSISTED COMMUNICATION RECEIVERS §.§ Error Probability Calculation The probability of error of state discrimination for the case of BPSK using OPA is given by P_E = p_0 P_OPA(n < |θ=0; M,i) + p_1 [ 1 - P_OPA(n < |θ=π; M, i) ]. An optimum value of can be found by equating individual error term in Equation (<ref>) which, for case of symbols with equal priors gives us (θ) = M (σ(π)(0) + σ(0)(π))(σ(π) + σ(0)). Derivation of the optimum threshold for OPA is provided in Appendix <ref> that uses Gaussian approximation. However, for unequal priors, the optimum threshold is the one that satisfies the condition p_0 P_OPA (n < |θ=0; M, i) = p_1 [ 1 - P_OPA(n < |θ=π; M, i)]. We solve Equation (<ref>) for using grid search procedure and plug into Equation (<ref>) to calculate the error probability. In such a case, there is no closed-form solution. The joint detection can be made either at the idler output port or the signal output port. The error probability of discrimination is higher at the return port compared to detection made at the idler port, as shown in Figure <ref>. As a result, our further analysis focuses solely on making joint detection at the idler port and we drop the index i from the probability notation moving forward. 
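As a rough illustration of the threshold rule above, the sketch below (ours, not the paper's code) evaluates the negative-binomial photodetection statistics at the idler output port and selects the threshold by a direct grid search over integer photon counts; minimizing the total error in this way has the same effect as balancing the two error terms as described in the text. The helper names, the use of scipy, and the identification of the second expectation with the idler output port are assumptions.

```python
import numpy as np
from scipy.stats import nbinom

def idler_port_mean(theta, Ns=0.01, NB=1.0, eta=0.01, G=1.1):
    """N_2(theta): mean photon number per mode at the OPA idler output port."""
    cross = 2.0 * np.cos(theta) * np.sqrt(G * (G - 1.0) * eta * Ns * (Ns + 1.0))
    return G * Ns + (G - 1.0) * (1.0 + eta * Ns + NB) + cross

def bpsk_error_probability(p0, M, **params):
    """Threshold detection for BPSK with priors (p0, 1 - p0) over M modes:
    decide H0 (theta = 0) when the total photon count exceeds the threshold."""
    N0, Npi = idler_port_mean(0.0, **params), idler_port_mean(np.pi, **params)
    p1 = 1.0 - p0
    # P(n < N_th) under each hypothesis: negative-binomial CDF with M modes
    p_less = lambda n_th, Nbar: nbinom.cdf(n_th - 1, M, 1.0 / (1.0 + Nbar))
    thresholds = np.arange(0, int(np.ceil(M * N0)) + 2)
    errors = p0 * p_less(thresholds, N0) + p1 * (1.0 - p_less(thresholds, Npi))
    best = int(np.argmin(errors))
    return thresholds[best], errors[best]

print(bpsk_error_probability(p0=0.5, M=100))   # equal priors
print(bpsk_error_probability(p0=0.9, M=100))   # non-equal priors
```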
We find that for the case of non-equal priors, the mean threshold photon number for BPSK discrimination is higher for any detection made at the return output port than at the idler output port (see Figure <ref>). Note that even though the error probability P_E in Equation (<ref>) is a convex function of , it is a monotonic function of the prior p_0, as shown in Figure <ref>, Figure <ref>, Figure <ref>, and Figure <ref>. Hence, there does not exist an optimum prior that minimizes the probability of error for state discrimination. For the OPC receiver, we calculate the error probability by taking a Gaussian approximation of photodetection statistics similar to Equation (<ref>). This is because we measure the difference of photocurrent obtained at the two arms of the beamsplitter at the detection side (as shown in Figure <ref>). The Gaussian approximation yields the probability of error formula given in Equation (<ref>). P_E = p_0 _( , M·N_(0), √(M)·σ_(0)) + p_1 [ 1 - _( , M·N_(π), √(M)·σ_(π)) ] In Equation (<ref>), N_(θ) and σ_ are given by Equations (<ref>) and (<ref>) respectively. _ is cumulative distribution function of a Gaussian distribution with mean M· N_(θ) and standard deviation √(M)·σ_. Equation (<ref>) is similar to Equation (<ref>) but written explicitly using the cumulative distribution function (CDF) notation. Like the OPA receiver, we can calculate the optimum by equating the two terms of Equation (<ref>). From Figure <ref>, we see that the OPC receiver's performance in terms of error probability in discriminating BPSK symbols is better than that of the OPA receiver. However, for a low number of modes M, OPA receivers with non-equal priors still perform better than OPC receivers with equal priors and perform similarly to OPC receivers with non-equal priors. Our evaluation suggests that lower-complexity receivers like OPA receivers with fewer optical components can provide superior information retrieval with a suitable choice of prior. The error probability of a 2x2 optical hybrid can be calculated using a formula similar to the one in Equation (<ref>) with means and variances from Equations (<ref>) and (<ref>), respectively. From the error probability plot in Figure <ref>, we see that the 2x2 optical hybrid offers a roughly 10% improvement in BPSK state discrimination compared to the OPC receiver. §.§ Mutual Information Calculation The Holevo capacity<cit.> quantifies the maximum amount of information, in bits per channel use, that can be sent over a quantum channel when the use of entangled states at the input and arbitrary measurements at the output are permitted. In the situation under consideration in this work, the Holevo capacity evaluates to Equation (<ref>) C = g(ηN_s + N_B) - g(N_B) where g(n) = (n+1)log_2(n+1) - nlog_2 (n) is the entropy of the thermal state with mean photon number n. To write the mutual information and, in turn, the capacity for entanglement-assisted classical communication that requires symbol-by-symbol joint detection, we are required to calculate the conditional probability distribution. Assuming that the random variable X denotes the transmitted symbols and Y denotes the detected symbols, we calculate the mutual information as follows: we first calculate the conditional probabilities to complete the transition matrix. Using the conditional probabilities, we can calculate the posteriors, which are then used to calculate the conditional entropies, followed by the calculation of the mutual information. 
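The mutual-information steps just outlined fit in a few lines. The sketch below (ours) takes the prior p_0 and the two conditional error probabilities of whichever receiver is under analysis, builds the binary transition matrix, and returns I(X;Y) via H(Y) − H(Y|X), matching the formulas spelled out in the appendix; the argument names are assumptions.

```python
import numpy as np

def entropy_bits(probs):
    """Shannon entropy -sum p*log2(p), treating 0*log(0) as 0."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def mutual_information(p0, q0, q1):
    """I(X;Y) of the binary channel induced by threshold detection.
    q0 = P(decide H1 | H0 true), q1 = P(decide H0 | H1 true)."""
    p1 = 1.0 - p0
    py_given_x0 = np.array([1.0 - q0, q0])     # transition-matrix row for X = 0
    py_given_x1 = np.array([q1, 1.0 - q1])     # transition-matrix row for X = 1
    py = p0 * py_given_x0 + p1 * py_given_x1   # output distribution P(Y)
    h_y = entropy_bits(py)
    h_y_given_x = p0 * entropy_bits(py_given_x0) + p1 * entropy_bits(py_given_x1)
    return h_y - h_y_given_x

print(mutual_information(p0=0.5, q0=0.2, q1=0.2))   # symmetric toy example
```

Sweeping this function over the prior and the detection threshold gives the capacity maximization described in the next subsection.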
The steps to calculate the mutual information are provided in Equation (<ref>) in Appendix <ref>. The Shannon's capacity for transmitting classical information with our EA receivers can be calculated by taking the maximum of mutual information over prior p and threshold mean photon number , i.e. C_EA = max_p, I(X;Y) We find that symbols with equal priors maximize the mutual information, as expected. We conducted a simulation study with a varying number of modes M to optimize the mutual information as a function of the signal mean photon number transmitted over a noisy Bosonic channel with N_B = 1 and transmittivity η = 0.01. In Figure <ref> and Figure <ref>, we present a comparison of the capacities of various receiver designs proposed for the BPSK constellation using Equation <ref>. At the same time, we also plot the capacity of the Homodyne receiver, where the average number of photons received is 4η N_s and the average number of noisy photons is 2N_B + 1. The capacity of a Homodyne receiver is given by C_H = 0.5log_2[1 + 4η N_s/(2N_B + 1)] <cit.>. As a reference, we also plot the Holevo capacity, given by Equation (<ref>). Note that the Holevo capacity requires coherent states with Gaussian modulation. For EA communication with the BPSK constellation, we find that joint receivers based on OPA and OH proposed in this paper outperform the Holevo capacity, even for a single mode, as shown in Figure <ref> and Figure <ref>. We also conducted simulation studies for a large number of modes. From our analysis and results, shown in Figure <ref>, we conclude that for the proposed EA receiver design employing a joint-detection scheme, a large number of signal-idler modes is not required. For M=1, the OPA receiver's performance is better than the OPC receiver and 2x2 optical hybrid receiver. C-band operates at 35nm, hence dλ = 35nm. The central wavelength, λ = 1550 nm. The typical observation interval of symbols is 1μ s. Phase matching bandwidth of the C-band is calculated as B = cλ^2dλ= 3×10^8(1550×10^-9)^2 ·35×10^-9 = 4.3704 ×10^12 Then the number of bosonic mode, M for C-band is 4.3704 × 10^12· 1 μ s = 4.3704 × 10^6. Hence if we were to use the number of modes in the order of 10^6, we would end up using C-band as was demonstrated in <cit.>. However, as we have shown above, achieving a superior performance requires as little as M = 1 if we choose optimal values of prior and threshold mean photon. We also compared our capacities with ultimate bound, i.e., entanglement-assisted classical capacity C_ultimate as described in <cit.>. To calculate the C_ultimate, we adopted Equation (<ref>) from <cit.> with suitable modifications as per the use case described in Section <ref>. Entanglement-assisted classical capacity is given by C_ultimate = g(N_s) + g(N_R) - ( g( ν_+ - 12 ) + g( ν_- - 12 ) ) with a = 2N_s + 1, b = 2N_R + 1, C_η = 2√(η N_s (N_s +1)), ν_± = [√((a+b)^2 - 4C_η^2)± (b - a)]/2, and g() has been defined in Equation (<ref>). From Figure <ref>, it is evident that further development in the receiver design is required to achieve performance closer to the ultimate bound of the capacity given by Equation (<ref>). For a large number of modes, the 2x2 optical hybrid receiver performs better in terms of capacity as shown in Figure <ref>. Previous work <cit.> in this direction have not considered the use of OH receivers. An OH receiver uses a balanced detector, similar to OPA to distinguish the modulation which is more practical and efficient. 
Further, OH is known to suppress noise as discussed in <cit.>. It could be argued that the capacity should be divided by the number of modes for a scheme using multiple modes. However, in this paper, we are talking about the overall receiver design's capacity, rather than bits per mode. In addition, we find that a number of modes greater than 1 may not be needed to outperform the Holevo capacity, as seen from Figure <ref>. Thus, for a well-designed receiver, repetition coding may not always be useful. This claim is further corroborated by Figure <ref>. Additionally, we also plot the per-mode communication rate R, normalized by the Holevo capacity for classical communication C, in Figure <ref>. The communication rate R is given by Equation (<ref>). R = 1 + P_elog_2(P_e) + ( 1 - P_e)log_2(1 - P_e)M. Equation (<ref>) is based on symmetric hypothesis testing. We find that in terms of the normalized communication rate, the OPA and OPC receivers perform almost three times better in the photon-starved regime when BPSK symbols with non-equal priors are used, compared to when BPSK symbols are equally likely. At the same time, the 2x2 optical hybrid receiver for non-equal priors performs roughly 2.5 times better, compared to BPSK with equal priors in the photon-starved regime. Furthermore, the 2x2 optical hybrid receiver can outperform an OPA-based receiver by as much as 30% in terms of information rate. Finally, it's worth noting that a large number of modes does not necessarily equate to superior performance. §.§ Discussion It is expected that the order of error probability of two receiver designs should be the reverse of the order of capacity when compared. We find that this is not the case for OPA, OPC, and OH as evident from Figure <ref> as well as Figure <ref> and Figure <ref>. The ordering in the error-probability plot and capacity plot is only related when mutual information is optimized with respect to the prior only. However, we optimize mutual information with respect to the prior as well as threshold mean photon number as shown in Equation <ref>. Hence, the usual ordering relationship is no longer applicable. Further, as the number of modes increases, the probability distribution seen in Equation <ref> resembles more and more Gaussian. Thus the ordering of capacity is altered as we move from the number of modes M = 1 to higher modes. Thus, with the higher number of modes, the OH receiver design has the best performance, followed by OPC and then the OPA receiver. However, with M=1, the best performance is obtained by using the OPA receiver, followed by the OH receiver and then the OPC receiver. § CONCLUDING REMARKS AND FUTURE WORKS Entanglement is a unique phenomenon in quantum information science that can be leveraged to design new types of sensors, allowing computing devices to solve problems that are intractable for conventional computers. In communication systems, the use of entanglement assistance offers a unique advantage in terms of providing a better communication rate in low-photon number regimes. Pre-shared entanglement can be used to surpass the performance of classical capacities and the Holevo capacity in highly noisy and low-brightness conditions. However, there are several challenges in terms of the practical realization of entanglement, such as: (i) transmitting entanglement over long distances is challenging, and (ii) the optimum quantum receiver to achieve entanglement-assisted channel capacity has not yet been derived. 
Nevertheless, simulation results indicate that even when entanglement is not perfect, entanglement-assisted (EA) communication based on signal-idler pairs outperforms the Holevo capacity and the capacities of classical channels. In this paper, we analyze several low-complexity receiver designs employing optical hybrids and balanced detectors. We demonstrate that for BPSK modulation, a 2x2 optical hybrid-based joint detection can outperform the OPA and optical phase-conjugation receivers. Numerical results demonstrate that we do not need a large number of signal-idler modes to outperform the Holevo and Homodyne capacities. § DISCLOSURE The authors declare no conflicts of interest. § APPENDIX § GAUSSIAN APPROXIMATION TO NEGATIVE BINOMIAL PHOTON STATISTICS Although the photodetection statistics given by OPA receivers in Equations (<ref>) are of negative binomial nature, they can be computationally expensive to calculate for large values of M. By recognizing that Equation (<ref>) contains cumulative distributions and approximating them as Gaussian distributions, we can rewrite the equation as the error function (erf), which is commonly used to write the cumulative distribution function of a Gaussian distribution: 12(1 + erf(- M ·(0)√(2)√(M)σ(0))) = 1 - 12(1 + erf(- M ·(π)√(2)√(M)σ(π))) ⇒12 +12 erf(- M ·(0)√(2)√(M)σ(0)) = 12 - 12 erf(- M ·(π)√(2)√(M)σ(π)) ⇒erf(- M ·(0)√(2)√(M)σ(0)) = - erf(- M ·(π)√(2)√(M)σ(π)) Considering that erf(-x) = -erf(x) and equating the arguments of erf (θ) = M (σ(π)(0) + σ(0)(π))(σ(π) + σ(0)). However, we should be aware of how we may misinterpret the true performance of receivers due to approximation. In Figure <ref>, we plot the difference between C_Gaussian and C_NB. C_Gaussian represents the capacity of the OPA receiver as discussed in Section <ref>, where the photodetection statistics are approximated as Gaussian. C_NB represents the capacity using the exact negative binomial distribution from Equation (<ref>). Our calculations have led us to the following observations: (i) the error of the approximation increases as the signal mean photon number, N_s, increases; (ii) with the Gaussian approximation, the capacity of the channel is overestimated compared to its true value; (iii) as the number of modes increases, the error of the approximation decreases. These observations are depicted in Figure <ref>. Although the Gaussian approximation overestimates the capacity, the error is of the order of 10^-3, which is small compared to the value of the capacity and enables faster numerical calculations. § DERIVATION OF MEAN PHOTON NUMBER FOR OPTICAL PARAMETRIC AMPLIFIER In this section, we derive in detail, the mean photon number for OPA using Equations from Section <ref>. 
^† =( √(G)_R ^†+ √(G-1)_I) ( √(G)_R + √(G-1)_I^†) =G_R^†_R + √(G(G-1))_R^†_I^†+ √(G(G-1))_I_R + (G-1)_I_I^† =G_R^†_R + (√(G(G-1)))(_R^†_I^†+ _I_R) + (G-1)_I_I^† ^† = G_R^†_R + (√(G(G-1)))(_R^†_I^† + _I_R) + (G-1)_I_I^† = GN_R + √(G(G-1))(e^jθ +e^-jθ )√(ηN_s(N_s + 1)) + (G-1) ( 1 + N_I) As,  _R = _R'e^jθ, _R' _I = √(ηN_s (N_s + 1)),  _I _I^†= _I^†_I + I, (from the commutative property of annihilation and creation operators) Further, N_R = ηN_s +N_B after passing through a channel     with mean thermal photon number N_B N_I = N_s,  As,   idler is per-shared _1(θ) = ^† = G(ηN_s + N_B) + (G-1)(1 + N_s) + 2cosθ√(G(G-1))√(ηN_s (N_s + 1)) Similarly, ^† = ( √(G)_I^†+ √(G-1)_R)(√(G)_I + √(G-1)_R^†) ^† = GN_I + 2cosθ√(G(G-1))√(ηN_s (N_s + 1)) + (G-1) ( 1 + N_R) _2(θ)= ^† = GN_s + (G-1)( 1 + ηN_s + N_B) + 2cosθ√(G(G-1))√(ηN_s (N_s + 1)) § MUTUAL INFORMATION CALCULATION In this section, at a very high level, we provide a calculation of how mutual information can be calculated. The values of P_OPA can be plugged from Equation <ref>. We can first write the conditional probabilities, assuming that the random variable X denotes the transmitted symbols and Y denotes the detected symbols: p_y|x(Y= 0 | X = 0) = 1 - P_OPA(n < |θ= 0; M) p_y|x(Y= 1 | X = 1) = P_OPA(n < |θ= π; M) p_y|x(Y= 0 | X = 1) = 1- P_OPA(n < |θ= π; M) p_y|x(Y= 1 | X = 0) = P_OPA(n < |θ= 0; M) p_y(Y=0) = p_0 p_y|x(Y= 0 | X = 0) + p_1 p_y|x(Y= 0 | X = 1) p_y(Y=1) = p_0 p_y|x(Y= 1 | X = 0) + p_1 p_y|x(Y= 1 | X = 1) Using conditional probabilities, we can obtain mutual information as follows. H(Y|X = 0) = - p_y|x(Y= 0 | X = 0) log_2(p_y|x(Y= 0 | X = 0)) - p_y|x(Y= 1 | X = 0) log_2( p_y|x(Y= 1 | X = 0)) H(Y|X = 1) = - p_y|x(Y= 0 | X = 1) log_2( p_y|x(Y= 0 | X = 1) ) - p_y|x(Y= 1 | X = 1) log_2(p_y|x(Y= 1 | X = 1)) H(Y|X) = p_0 H(Y|X = 0) + p_1 H(Y|X = 1) H(Y) = - p_y(Y=0) log_2(p_y(Y=0) ) - p_y(Y=1)log_2(p_y(Y=1)) I(X;Y) = H(Y) - H(Y|X) The mutual information can be used to calculate the capacity using Equation <ref>.
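To close the loop from mutual information to the capacity benchmarks used in the comparisons, here is a small sketch (ours, under the same representative regime N_s = 0.01, N_B = 1, η = 0.01) of the thermal-state entropy g(n), the Holevo capacity g(ηN_s + N_B) − g(N_B), the homodyne capacity, and the entanglement-assisted ultimate bound, all following the formulas quoted in the text.

```python
import numpy as np

def g(n):
    """Entropy (in bits) of a thermal state with mean photon number n."""
    if n <= 0:
        return 0.0
    return (n + 1.0) * np.log2(n + 1.0) - n * np.log2(n)

def capacities(Ns=0.01, NB=1.0, eta=0.01):
    NR = eta * Ns + NB
    holevo = g(eta * Ns + NB) - g(NB)
    homodyne = 0.5 * np.log2(1.0 + 4.0 * eta * Ns / (2.0 * NB + 1.0))
    # entanglement-assisted ultimate bound
    a, b = 2.0 * Ns + 1.0, 2.0 * NR + 1.0
    C_eta = 2.0 * np.sqrt(eta * Ns * (Ns + 1.0))
    root = np.sqrt((a + b) ** 2 - 4.0 * C_eta ** 2)
    nu_plus, nu_minus = (root + (b - a)) / 2.0, (root - (b - a)) / 2.0
    ultimate = g(Ns) + g(NR) - g((nu_plus - 1.0) / 2.0) - g((nu_minus - 1.0) / 2.0)
    return holevo, homodyne, ultimate

print(capacities())   # (Holevo, homodyne, ultimate) in bits per channel use
```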
http://arxiv.org/abs/2407.02704v1
20240702225643
Monads, Comonads, and Transducers
[ "Rafał Stefański" ]
cs.FL
[ "cs.FL", "F.4.3" ]
Monads, Comonads, and Transducers
Rafał Stefański
=======================================================================================================================================
§ ABSTRACT This paper proposes a definition of recognizable transducers over monads and comonads, which bridges two important ongoing efforts in the current research on regularity. The first effort is the study of regular transductions, which extends the notion of regularity from languages to word-to-word functions. The other important effort is generalizing the notion of regular languages from words to arbitrary monads, introduced in <cit.> and further developed in <cit.>. In the paper, we present a number of examples of transducer classes that fit the proposed framework. In particular, we show that our class generalizes the classes of Mealy machines and rational transductions. We also present examples of recognizable transducers for infinite words and a specific type of trees called terms. The main result of this paper is a theorem which states that the class of recognizable transductions is closed under composition, subject to some coherence axioms between the structure of a monad and the structure of a comonad. Due to its complexity, we formalize the proof of the theorem in the Coq Proof Assistant <cit.>. In the proof, we introduce the concepts of a context and a generalized wreath product for Eilenberg-Moore algebras, which could be valuable tools for studying these algebras. § INTRODUCTION The study of transductions plays an important role in understanding the theory of regularity. Although this idea is not new, and its importance has been known for decades (see the first paragraph of <cit.>), it seems to have been gaining momentum in recent years (e.g. see <cit.>). This paper aims to extend the concept of languages recognized over monads, introduced by <cit.>, to transductions. Interestingly, this requires studying functors that are simultaneously equipped with the structures of both a monad and a comonad. Although this work is clearly inspired by category theory, our primary focus lies within the domain of formal languages and transducers. For this reason, we do not assume any prior knowledge of category theory on the part of the reader. We will provide all necessary definitions and limit our discussion to the basic category Set, which consists of sets and functions between them. For a discussion about extending this work to other categories, see <ref>. The paper is structured as follows: In <ref>, we summarize the results on languages recognizable over monads (based on <cit.>), which serves as context for this paper. In <ref>, we introduce transductions recognizable over monads and comonads, which is the main contribution of this paper. In <ref>, we show that our proposed classes of transducers are closed under compositions, subject to certain coherence axioms. This serves two purposes: First, it validates our class of transductions. Second, it facilitates a deeper understanding of the structure of the monad/comonad functors and their Eilenberg-Moore algebras. Finally, in <ref>, we outline potential directions for further work. Let us also mention that some of the proofs (including the proof of the composition theorem) are verified in the Coq Proof Assistant <cit.> (see <ref>). 
*Related work A recently published work <cit.> also presents a categorical framework for defining transductions over monads. Our paper differs from <cit.> in two important ways: First, <cit.> focuses on generalizing the class of regular functions (i.e. those recognized by two-way transducers), whereas this paper concentrates on generalizations of Mealy machines and rational functions (see also Item 1 in Section <ref>). Second, the two papers present different approaches to the problem. In particular, our paper develops a comonadic framework that does not appear in <cit.>. We believe that both approaches are valuable and warrant further investigation. Future research could explore potential connections and synergies between the two methodologies. § MONADS AND RECOGNIZABLE LANGUAGES In this section we present a brief summary of the existing research on recognizable languages over monads, which is the starting point for this paper. This line of research was initiated in <cit.>, and then continued in <cit.> and in <cit.>. The main idea is to approach regular languages from the algebraic perspective, and then use Eilenberg-Moore algebras to generalize their definition from languages over words to languages over arbitrary monads (such as infinite words, trees, or even graphs). We start the summary with the following, well-established definition of monoid recognizability: A monoid is a set equipped with an associative binary operation and an identity element. We say that a language L ⊆Σ^* is recognizable by a monoid if there exist: a monoid M, an input function h : Σ→ M, and an accepting set F ⊆ M, such that a word w_1 … w_n ∈Σ^* belongs to L if and only if: h(w_1) · h(w_2) ·…· h(w_n) ∈ F It is a well-known fact that the class of languages that can be recognized by finite monoids is exactly the class of regular languages (see <cit.> for details). As mentioned before, the key idea presented by <cit.> is extending <ref> from languages of words to languages over arbitrary monads. Before we show how to do this, we need to define monads. However, since monads are a special kind of functor, we need to start by giving a definition of a functor[We only define the special case of endofunctors in Set, as those are the only type of functors that we are going to use. See <cit.> for a general definition.]: A functor M consists of two parts. The first part is a mapping from sets to sets, i.e. for every set X, the functor M assigns another set denoted as M X. The other part is a mapping from functions to functions, i.e. for every function f : X → Y, the functor M assigns a function M f : M X → M Y. Moreover, the function mapping needs to satisfy the following axioms (where 𝕀_X is the identity function on X, and ∘ is function composition): M 𝕀_X = 𝕀_M X and M (f ∘ g) = (M f) ∘ (M g). For example, let us show how to apply this definition to finite lists: The finite-list functor maps every set X into X^* (i.e. the set of finite lists over X), and it maps every function f : X → Y into the function f^* : X^* → Y^* that applies f element by element. It is not hard to see that this satisfies the axioms from <ref>. We are now ready to give the definition of a monad: A monad is a functor M equipped with two operations: η_X : X → M X and μ_X : M M X → M X We are going to refer to η as the singleton operation, and to μ as the flatten operation. The two operations need to satisfy the axioms of a monad, which can be found in Section <ref> of the appendix. 
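Before showing how this machinery plays out for finite lists, here is a minimal concrete instance of the monoid-recognizability definition above; the toy language (words over {a, b} with an even number of a's) and the encoding are assumptions of ours, and the monad framework developed below generalizes exactly this recipe.

```python
# A finite monoid recognizing L = { w in {a,b}* : the number of a's in w is even }.
# Monoid: Z/2Z with addition modulo 2; identity element 0.
def mult(x, y):          # the associative binary operation
    return (x + y) % 2

identity = 0
h = {'a': 1, 'b': 0}     # input function h : Sigma -> M
F = {0}                  # accepting set F, a subset of M

def in_language(word):
    value = identity
    for letter in word:
        value = mult(value, h[letter])
    return value in F

print(in_language("abba"), in_language("ab"))   # True False
```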
Let us now show how to apply this definition to finite lists: The functor of finite lists can be equipped with the following monad structure. The singleton operation η : X → X^* returns a singleton list with the argument, and the flatten operation μ : M M X → M X flattens the list of lists into a single list. For example: η(3) = [3] and μ([[1, 2, 3], [4, 5, 6], [], [7,8], [9]]) = [1, 2, 3, 4, 5, 6, 7, 8, 9] From the perspective of <cit.>, the most important feature of monads is that they can be used to define Eilenberg-Moore algebras, which can be seen as a generalization of monoids: An Eilenberg-Moore algebra for a monad M is a set S together with a multiplication function ∏ : M S → S, that makes the following diagrams commute[ We hope that the notation of commutative diagrams is intuitively clear. See <cit.> for more explanation. ]: M M S M S S M S M S S S ["μ_S", from=1-1, to=3-1] ["M ∏"', from=1-1, to=1-4] ["∏", from=3-1, to=3-4] ["∏"', from=1-4, to=3-4] ["id_S"description, from=1-6, to=3-8] ["η_S", from=1-6, to=1-8] ["∏", from=1-8, to=3-8] To understand the intuition behind this definition, let us show that there is a bijective correspondence between Eilenberg-Moore algebras for finite lists and monoids: Let (X, ∏) be an Eilenberg-Moore algebra for the finite list monad. We can use ∏ to define a binary operation and an identity element on X, obtaining a monoid structure on X: x · y = ∏([x, y]) and 1_X = ∏([ ]), To see that this is a valid monoid observe that: x · (y · z) def=∏([x, ∏([y, z])]) Ax.η=∏([∏([x]), ∏([y, z])]) Ax.μ=∏(μ [[x], [y, z]]) = ∏([x, y, z]) , where Ax.μ and Ax.η denote the Eilenberg-Moore axioms from <ref>. Using a similar reasoning one can show that (x · y) · z = ∏([x, y, z]), which means that the new binary operation is associative. Similarly, one can show that 1_X is indeed an identity element. To see that this defines a bijection between monoids and Eilenberg-Moore algebras for finite lists, we define an inverse mapping that defines α in terms of the monoid structure: ∏([x_1, x_2, …, x_n]) = x_1 · x_2 ·…· x_n and ∏([ ]) = 1_X We are now ready to present the key definition from <cit.>, i.e. a recognizable language for a monad M: Let M be a monad. We define an M-language over an alphabet Σ to be a subset of M Σ. We say that a language L ⊆ M Σ is M-definable if there exist: (A, ∏)_Finite M-algebra h : Σ→ A_Input function λ : Σ→{, }_Acceptance function, such that the characteristic function of L is equal to the following composition: M ΣM h M A ∏ A λ{, } Thanks to <ref>, it is not hard to see that for M equal to the monad of finite lists, the definition of M-recognizability is equivalent to <ref>, which further means that finite-list recognizability is equivalent to regularity. There are many other examples of monads, which define important classes of M-definable languages (e.g. see <cit.>). In this paper, let us define two more, the countable order monad and the terms monad: [<cit.>] Define a countable chain over a set X, to be a countable linear order where every position is labelled with an element of X. We say that two chains are equal if there is an isomorphism between them that preserves both the order and the labels[ It is worth noting that the set of all countable sets is not, strictly speaking, a set. However, because we equate all chains modulo isomorphism, the set of all chains over any given X forms a set. This is because, we can select an arbitrary infinite countable set and assume that all positions in every chain are elements of that set. ]. 
Let us denote the set of all countable chains over X as C X, and let us show that C is a monad. First, let us define the functor structure on C. If f is a function of type X → Y, then C f : C X → C Y is a function that applies f to every label of a chain (and does not modify the linear order). This leaves with defining the monad structure: The singleton operation η : X → C X returns a one element chain, whose only position is labelled by the input letter. The flatten operation μ_X : C C X → C X is defined in terms of the lexicographic product: Let w : C (C X) be a chain of chains, then μ w is defined as follows: * Its positions are pairs (x, y) where x is a position in w and y is a position in w_x (i.e. the label of x in w); * The label of (x, y) is equal to the label of y; * The pairs are ordered lexicographically, i.e. (x_1, y_1) ≤ (x_2, y_2) if x_1 < x_2 or if x_1 = x_2 and y_1 ≤ y_2. It is not hard to verify that this construction satisfies the monad axioms (see <cit.> for details). It follows that we can apply <ref> and to define the class of C-recognizable languages. Interestingly, it turns out that this class is equal to the class of languages that can be recognized by the mso-logic (see <cit.> or <cit.> for details), which is a strong argument for the intuition that C-recognizability corresponds to the intuitive notion of regularity. In particular, if we only consider those C-recognizable languages that only contain ω-words, we obtain the class of ω-regular languages. [<cit.>] Fix a ranked set 𝒮, i.e. a set where every element has an associated arity from the set {0, 1, 2, …}. For every set X, we define T_𝒮 X to be the set of all finite trees where all leaves are labelled with elements from X, and all inner nodes are labelled with elements from 𝒮, in such a way that the number of children of each inner node is equal to the arity of its label. Here is an example for X = {x, y, z} and 𝒮 = {a_arity 2, b_arity 1, c_arity 0}: term-ex0.2 (Observe that the leaves labelled with elements of arity 0 from 𝒮 are treated as inner nodes.) The set T_𝒮 X can also be seen as the set of terms over the signature 𝒮, with variables from X. For example, for the following 𝒮, elements of T_𝒮X are terms of propositional logic: {∨_arity 2, ∧_arity 2, _arity 1, 𝚝𝚛𝚞𝚎_arity 0, 𝚏𝚊𝚕𝚜𝚎_arity 0} For every fixed 𝒮, we define a monad structure for T_𝒮 called the term monad. The function mapping T_𝒮 f applies f to every leaf, and does not modify the inner nodes. The singleton operation returns a tree that consists of a single leaf, labelled by the input argument. Finally, the flatten operation simply unpacks the trees from the leaves, as presented in the following figure: term-flatten-ex0.5 Since T_𝒮 is a monad, we can use <ref> and define the class of T_𝒮-recognizable languages. It turns out that this class coincides with the usual notion of regularity for finite-tree languages – this is because T_𝒮-algebras turn out to be practically the same as deterministic bottom-up tree automata (but without distinguished initial and accepting states – which are replaced by the input and output functions from <ref>). See <cit.> for details. Examples <ref> and <ref> (together with all the examples from <cit.>) show that <ref> is abstract enough to capture the notion of regularity for many different types of objects. On the other hand, it is also concrete enough to allow for an interesting general theory of M-recognizability. 
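To make the term monad from the last example concrete, here is a minimal sketch (ours); it uses string-labelled symbols, hypothetical class names, and omits arity checking.

```python
from dataclasses import dataclass
from typing import Any, List

@dataclass
class Leaf:
    label: Any                # an element of X

@dataclass
class Node:
    symbol: str               # an element of the ranked set S
    children: List[Any]       # length should equal the arity of `symbol`

def tmap(f, t):
    """Functor action: apply f to every leaf label, keep inner nodes unchanged."""
    if isinstance(t, Leaf):
        return Leaf(f(t.label))
    return Node(t.symbol, [tmap(f, c) for c in t.children])

def eta(x):
    """Singleton: a tree consisting of a single leaf."""
    return Leaf(x)

def mu(t):
    """Flatten: the leaves of t are labelled by trees; unpack them in place."""
    if isinstance(t, Leaf):
        return t.label
    return Node(t.symbol, [mu(c) for c in t.children])

print(tmap(str.upper, Node('not', [Leaf('x')])))          # applies f to the leaf only
t = Node('and', [Leaf(Node('not', [Leaf('x')])), Leaf(Leaf('y'))])
print(mu(t))   # the tree for (not x) and y, with the inner trees unpacked
```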
Examples of theorems include the existence of syntactic algebras (<cit.>) and the equivalence of algebra and language varieties (<cit.>). There is also an ongoing quest for relating mso-definability and M-recognizability (see <cit.>). § COMONADS AND TRANSDUCERS In this section, we extend the theory of M-recognizability to transducers. We start by presenting the definition of a comonad, which is the dual notion to a monad. Then, we show that for every functor M that is both a monad and a comonad, we can define the notion of M-recognizable transductions. Finally, we provide some examples of such functors M, and discuss their M-recognizable transduction classes. Let us start with the definition: A comonad is a functor M equipped with two operations: ε_X : M X → X and δ_X : M X → M M X We are going to refer to ε as the extract operation, and to δ as the expand operation. The axioms of a comonad are dual to the axioms of a monad, and can be found in Section <ref> of the appendix. For example, let us consider the functor X^+ of non-empty lists (where the lifting operation f^+ is defined as applying f to every element of the list). For this functor, we can define the comonad structure in the following way. The extract operation returns the last element of a list, and the expand operation transforms a list into the list of all its prefixes: ε([x_1, …, x_n]) = x_n δ([x_1, …, x_n]) = [[x_1], [x_1, x_2], …, [x_1, …, x_n]] (Observe that the definition of ε crucially depends on the input being a non-empty list.) It is not hard to verify that such δ and ε satisfy the axioms of a comonad. This is not the only comonad structure one can define for X^+. Symmetrically, extract could return the first element, and expand could compute the list of all suffixes: ε([x_1, …, x_n]) = x_1 δ([x_1, …, x_n]) = [[x_1, …, x_n], [x_2, …, x_n], …, [x_n]] To tell those two comonads apart, we denote the prefix comonad as L X, and the suffix comonad as L X. Next, let us observe that X^+ also exhibits the structure of a monad. The structure is the same as the one for X^* – the flatten operation flattens a list of lists into a single list (note that this preserves non-emptiness), and the unit operation returns a singleton list. This means that the functors L X and L X are at the same time both monads and comonads. The key observation made in this paper is that for such functors one can define a natural class of transductions: Let M be a functor that is both a monad and a comonad. We say that a function f : M Σ→ M Γ is an M-recognizable transduction if there exist: a finite M-algebra (A, ∏) (with respect to the monad structure of M), an input function h : Σ→ A, and an output function λ : A →Γ, such that f is equal to the following composition: M Σ→^{M h} M A →^{δ} M M A →^{M ∏} M A →^{M λ} M Γ As it turns out, many interesting classes of transductions can be defined as M-recognizable transducers for a suitable M. In Subsections <ref> and <ref>, we present some examples. §.§ Examples of word-to-word transductions We start by studying L-recognizable transductions. After unfolding the definition, we obtain that each such transduction is defined by a finite semigroup[A semigroup is a monoid that might not have the identity element. A reasoning similar to <ref> demonstrates that semigroups are the Eilenberg-Moore algebras for the monad X^+.] S, an input function h : Σ→ S, and an output function λ : S →Γ. 
The transduction is then defined in the following way: a_1 a_2 … a_n_Σ^+ ↦ λ( h(a_1)), λ(h(a_1) · h(a_2)), …, λ(h(a_1) ·…· h(a_n))_Γ^+ In other words, we obtain the i-th letter of the output, by taking the i-th prefix of the input, computing the S-product of its h-values, and applying the output function λ. By comparing this with <ref>, we see that this means that the i-th letter of the output is computed based on regular properties of the i-th prefix. This means that L-recognizable transductions are equivalent to a well-studied transducer model called Mealy machines. (See <ref> for the exact definition of Mealy machines and the proof of equivalence.) Similarly, one can show that the class of L-recognizable transductions is equivalent to the right-to-left variant of Mealy machines. Next, we consider length-preserving rational functions, which can be defined as the class of transductions recognized by unambiguous Mealy machines (see <ref> for the definition), or more abstractly as the class of transductions where the i-th letter of the output depends on the i-th letter of the input and on regular properties of the (i-1)-st prefix and (i+1)-st suffix. Below we define a monad/comonad functor of pointed list[ This is a well-known functor and both its monad <cit.> and its comonad <cit.> structures have been studied in the past. (Although, rarely together.) ] that recognizes this class: We define L X to be the set of all non-empty lists over X with exactly one underlined element. For example, [a, b, b, c] is an element of L{a, b, c}. This is clearly a functor, with L f defined as simply applying f to every element of the list (and keeping the underline where it was). The monadic and comonadic operations work as follows: * The singleton operation returns a list with one underlined element, e.g.: η(a) = [a]. * The flatten operation, flattens a list of list, while keeping the double-underlined element: μ([[a, b, c], [d, e, f], [g, h]]) = [a, b, c, d, e, f, g, h] * The extract operation extracts the underlined element, e.g.: ([a, b, c]) = b. * The extend operation generates a series of new lists, each being a copy of the original list, but with a different, consecutive element underlined in each of the copies. Finally, it underlines the copy that is exactly equal to the input list (including the underlined element): δ([a, b, c ]) = [ [a, b, c], [a, b, c], [a, b, c] ] Before we discuss L-definable transducers, let us formulate a lemma about L-algebras. The lemma is based on <cit.> and shows that computing products in L-algebras, boils down to computing two monoid products (for proof, see Section <ref> of the appendix): For every L-algebra (A, ∏), there are two monoids M_L and M_R, together with functions h_L : A → M_L, h_R : A → M_R, such that the value of every A-product ∏([a_1, …, a_i, …, a_n]) depends only on: * the M_L-product of the prefix (i.e. h_L(a_1) ·…· h_L(a_i-1)), * the M_R-product of the suffix (i.e. h_R(a_i+1) ·…· h_R(a_n)), and * the exact A-value of the underlined element (a_i). Moreover, if A is finite then both M_L and M_R are finite as well. We are now ready to discuss L-definable transductions. Each such transduction is, by definition, given by a finite L̅-algebra (A, ∏), an input function h : Σ→ A, and an output function λ : A →Γ. 
A transduction given in this way computes its i-th output letter as: λ(∏([h(a_1), …, h(a_i), …,h(a_n)])) Thanks to <ref>, we know that we know that there are two monoids M_L and M_R, such that we can compute this value based on the M_L-product of the prefix, M_R-product of the suffix and the value of h(a_i). It follows, by <ref> of regularity, that we can compute the i-th letter of the output based on some regular properties of the prefix, some regular properties of the suffix, and on the i-th input letter. This means that every L-definable transduction is also a rational length-preserving function[ It might be worth mentioning that there is a slight type mismatch between the types of rational length-preserving transductions and L-definable transductions. The former are of the type Σ^+ →Γ^+, and the latter of type L̅(Σ) →L̅(Γ). We can deal with this mismatch, by observing that for the L̅-definable transducers, the position of the underlined element does not influence the underlying output word. See Appendix B.3 for more details. ]. To prove the other inclusion, we use a similar idea to transform an unambiguous Mealy machine into L̅-algebra. (See Appendix B.3 for a more detailed proof.) Here is a table that summarizes all classes of M-definable transductions, we have seen so far: Transduction class Machine model Functor Sequential left-to-right Mealy machines L Sequential right-to-left Right-to-left Mealy machines L Rational lenght-preserving Unambigous Mealy machines L Observe that all those examples are length-preserving. This is a consequence of a more general principle, which can be stated using the shape of a functor: Let 1 = {∙} be a singleton set. For every functor F and every l ∈ F X, we define the shape of l as the element of F 1, obtained by replacing every element of X by ∙: 𝚜𝚑𝚊𝚙𝚎(l) = (F ( x ↦∙ )) l For example, the shapes of both L and L are their lengths, and the shape L̅ is its length and the position of its underlined element. It is not hard to see that all M-definable transductions are shape-preserving: For every M-definable transduction F : M Σ→ M Γ, and for every w ∈ M Σ, it holds that 𝚜𝚑𝚊𝚙𝚎(F(w)) = 𝚜𝚑𝚊𝚙𝚎(w). §.§ Other examples *Infinite words In this section we extend the monad C from <ref> with three different comonad structures C, C, and C̅ (which are analogous to L, L and L̅ from <ref>), and we briefly characterize the resulting classes of transducers. We define C X ⊊ C X, to be the set of all elements of C X that have a maximal element. To see that this is a monad (with the singleton and flatten operations inherited from C), we observe that flatten preserves the property of having maximal elements. Next, we define a comonad structure on X: extract returns the label of the maximal element, and expand labels each position with its prefix, i.e. the position i of δ(l) is labelled with: { x | x ∈ l ∧ x ≤ i }. Observe that all such labels contain maximal elements – the maximal element in the label of i is i itself. As it turns out, the class of C-definable transductions admits a logical characterization: One can show[The proof follows from the fact that C-recognisability is equivalent to mso-definability. However, since mso-transductions fall slightly out of scope for this paper, we are not going to give a precise proof. For other transducer classes in this section, we use a similar mso-recognisability argument.] 
that it is equivalent to the class of transductions, that preserve the underlying order (see <ref>), and compute the new label for each position i based on mso-formulas that only see the positions ≤ i. The definition of C and the characterization of C-definable transduction are analogous. Next, we define C̅ X to be the set of all elements of C X where exactly one position is underlined. The monad structure of C̅ X is a generalization of the monad structure of L̅: singleton returns a single underlined element, and flatten flattens the input and underlines the doubly underlined position. Similarly, the comonad structure of C̅ X generalizes the comonad structure of L̅: extract returns the label of the underlined element, and expand labels every position i of its input with a copy of the input where i is the underlined position, and underlines the copy that corresponds to the underlined position of the input. The class of C̅-definable transduction also admits a logical characterization: It is equivalent to the class of transductions that preserve the underlying order and the position of the underline, and compute the output labels based on mso-formulas that see the entire input. *Terms Finally we define T̅_𝒮 to be pointed version of the term functor from <ref>, and we equip it with structures of a monad and a comonad. The construction is analogous[ It seems that the pointing construction is a general way of equipping a monad with a comonad structure. ] to L̅ and C̅. We define T̅_𝒮 X, to be the set of trees (from T̅_̅𝒮̅ X) with exactly one underlined leaf. The monadic and comonadic operations are defined analogously as for L̅ and C̅. As it turns out, the class of T̅_̅𝒮̅ definable transductions, also admits a logical characterization: it is equivalent to the class of tree-to-tree transductions that only modify the labels of the leaves (this is a consequence of <ref>) and calculate the output label for each leaf based on mso-formulas that have access to the entire input tree. § COMPOSITIONS OF RECOGNIZABLE TRANSDUCERS So far we have introduced, and presented a few examples of M-recognizable transductions. In this section we are going to prove <ref> which states that M-recognizable transductions are closed under compositions, i.e. if f : M Σ→ M Γ and g : M Γ→ M Δ both are M-recognizable transductions, then so is their composition g ∘ f : M Σ→ M Δ. In addition to the axioms of a functor, monad, and comonad, we have seen so far, the proof of the theorem requires some additional coherence axioms which relate the monadic and the comonadic structures of M. Here are three examples of such axioms respectively called flatten-extract, singleton-expand, and singleton-extract[ To the best of our knowledge, this axiom has not appeared so far in the literature. ]: M M X M X X M X X M X M X X M X M M X X ["μ_X ", from=1-1, to=1-3] ["ε_X ", from=1-3, to=3-3] ["ε_M X"', from=1-1, to=3-1] ["ε_X"', from=3-1, to=3-3] ["η_X", from=1-4, to=1-6] ["η_X"', from=1-4, to=3-4] ["M η_X"', from=3-4, to=3-6] ["δ_X", from=1-6, to=3-6] ["η_X", from=1-7, to=1-9] ["ε_X", from=1-9, to=3-9] ["id"', from=1-7, to=3-9] The other axioms postulate the existence of an additional structure on M. §.§ The -structure and its axioms For a comonad M, let us consider the following operation[The operation has already been studied in the context of functional programming. In this context, M does not need to be a (full) comonad, it is, however, required to implement the operation 𝚐𝚎𝚝 : M X → X, which in our case is equal to . 
In this context, the pair (𝚐𝚎𝚝,) is usually referred to as a lens. See <cit.> for details. See <cit.> for the original reference.]: _A : M A × A → M A The intuition behind is that it replaces the focused element of M A with the given element from A. The intuition behind the focused element is that this is the element that is going to be returned by the extract operation. For example, the focused element in L̅ is the underlined element, and in L it is the last element of the list. Here are two examples of : ([1, 2, 3, 4], 7) = [1, 7, 3, 4]_L̅ ([1, 2, 3], 5) = [1, 2, 5]_L The goal of this subsection is to formalize this intuition in terms of axioms. First, we assume that is a natural transformation (see <ref> for details). Next, we assume the following axioms that relate and . They are called get-put, put-get, and put-put[The axioms and their names come from the lens-related research. See <cit.>.]: M A M A × A M A × A M A (M A × A) × A M A × A M A A M A × A M A ["id"description, from=1-1, to=3-3] ["⟨ id, ε_A ⟩"description, from=1-1, to=1-3] ["_A"description, from=1-3, to=3-3] ["π_2 "description, from=1-4, to=3-6] ["ε_A"description, from=1-6, to=3-6] ["𝚙𝚞𝚝_A "description, from=1-4, to=1-6] ["𝚙𝚞𝚝_A"description, from=1-9, to=3-9] ["𝚙𝚞𝚝_A"description, from=3-7, to=3-9] ["π_2 ×𝚒𝚍"description, from=1-7, to=3-7] ["𝚙𝚞𝚝_A × id"description, from=1-7, to=1-9] Here the functions ⟨ f, g ⟩, f × g, π_2 are defined as follows: ⟨ f, g ⟩(x) = (f (x), g (x)) (f × g)(x_1, x_2) = (f(x_1), g(x_2)) π_2(x, y) = y The following axioms called put-associativity and singleton-put relate with the structure of a monad[To the best of our knowledge this axiom has not appeared previously in the literature.]: M M A × M A × A M M A × M A M M A A × A MA × A M M A × A M A × A M A A M A["𝚙𝚞𝚝_A", from=3-3, to=3-4] ["μ_A × id", from=3-1, to=3-3] ["𝚙𝚞𝚝_M A× id"description, from=1-1, to=3-1] ["id ×𝚙𝚞𝚝_A", from=1-1, to=1-3] ["𝚙𝚞𝚝_M A", from=1-3, to=1-4] ["μ_A"description, from=1-4, to=3-4] ["π_2", from=1-5, to=3-5] ["η_A", from=3-5, to=3-7] ["η_A × id "', from=1-5, to=1-7] ["𝚙𝚞𝚝_A"', from=1-7, to=3-7] The final axiom relates all the structures studied in this paper: monad, comonad, and 𝚙𝚞𝚝. Before we present it, we need to define the strength of a functor[ This definition of is specific to . We briefly discuss other categories in <ref>. ]. For every functor F, we define _(A, B) : A × M B → M (A × B): (a, l) = F ( x ↦ (a, x) ) l Intuitively, the function _(A, B) equips each element B under the functor F with a copy of a ∈ A. Here is an example, for F equal to the list functor: (c, [a, b, a, b]) = [(c, a), (c, b), (c, a), (c, b)] We are now ready to present the final coherence axiom, called flatten-expand[To the best of our knowledge the axiom has not appeared before in the literature.]: M M M A M M M A M M A M M A M A["μ_A"description, from=3-1, to=4-3] ["δ_A"description, from=4-3, to=3-5] ["μ_M A"description, from=1-4, to=3-5] ["δ_M A"description, from=3-1, to=1-2] ["M 𝚠𝚘𝚛𝚔", from=1-2, to=1-4] where the function 𝚠𝚘𝚛𝚔 is defined as the following composition: M M A ⟨𝕀, ⟩ M M A × M A 𝕀×δ M M A × M M A 𝚜𝚝𝚛𝚎𝚗𝚐𝚝𝚑 M (M M A × M A) M M M M A M μ M A Let us now briefly present the intuition behind the flatten-expand axiom. The starting point M M X represents a structure partitioned into substructures (e.g. a list partitioned into sublists), which we would like to expand using δ. 
The bottom path of the diagram represents the straightforward approach: First, it flattens the input using μ_A (forgetting about the substructure partitions), and then it applies the δ_A function. The flatten-expand axiom asserts that this can be done in a way that respects the initial partitions. This way is represented by the top path of the diagram: First, it applies the δ_M A function to expand the top structure; then it applies the 𝚠𝚘𝚛𝚔 function independently to each of the substructures using M 𝚠𝚘𝚛𝚔 (this can also be seen as a concurrent computation); and finally it aggregates the results of 𝚠𝚘𝚛𝚔 using μ_M A. (See <ref> for a step-by-step example.) It might also be worth mentioning that the flatten-expand axiom has an alternative formulation in terms of a bialgebra (see <ref>). Finally, let us mention that it is not hard to all the examples of monad/comonad functors from <ref> with the natural operation, and show that they satisfy all the axioms we have introduced. §.§ Contexts In this section we use the structure to introduce contexts for Eilenberg-Moore algebras. This concept plays an important role in the proof of <ref>. Additionally, we would like to highlight the potential of contexts as an independently interesting tool for studying Eilenberg-Moore algebras – this point is illustrated by <ref>. In this subsection, we only assume that M is a monad equipped with the structure – it does not depend on the comonad structure of M. Let (A, ∏) be an M-algebra. For every element l ∈ M A, we define its context to be the following function _l : A → A: _l(x) = ∏((l, x)) In other words, the context of l takes an element x ∈ A, replaces the focused element of l with x, and calculates the product of the resulting M A. Let us now discuss some properties of the contexts. Thanks to the put-put axiom one can show that the context of l does not depend on its focused element (the formal proof is verified in Coq as , see <ref>). For every l ∈ MA, and every a ∈ A, the context of l is equal to the context of (l, a), i.e. _l = _(l, a). Similarly, using the singleton-put axiom, one can show that the context of every singleton is the identity function (the formal proof is verified in Coq as , see <ref>). For every a ∈ A, it holds that _(η_A a) = 𝕀. Now, let us consider the set of all contexts: For every M-algebra (A, α), we define the set C_A ⊆ (A → A) to be the set of all possible contexts, i.e. C_A = {_l | l ∈ M A}. The important property of C_A is that it is closed under compositions: For every f, g ∈ C_A, it holds that f ∘ g ∈ C_A. It follows that C_A is a transformation monoid of A. It follows that the mapping A ↦ C_A allows us to transform an arbitrary M-algebra into a monoid. This is why, we believe that contexts, are an interesting tool for studying Eilenberg-Moore algebras. To illustrate this point let us show how to generalize the definition of a group: We say that an M-algebra A is an M-group if C_A is a group (i.e. for every function f ∈ C_A, its inverse f^-1 also belongs to C_A). In order to validate this definition, let us show that for M = L the definition of M-group coincides with the usual definition of a group[ If we only consider the monad structure then L is equal to X^+ (i.e. the monad of non-empty lists). However, we point out that we consider L, because the definition of depends on whether we consider L or L. (The proof for L is, however, analogous). ] (remember that L-algebras are semigroups). For this, let us fix an L-algebra S (i.e. a semigroup). 
Observe now that, by definition, the context of an element [s_1, …, s_n] is equal to the following function: x ↦ s_1 ·…· s_n-1· x It follows that every element of C_S is of the form x ↦ s x, where s is an element of S^1, where S^1 is the smallest monoid that contains S, i.e.: S^1 = S if S already contains and identity element S + {1} otherwise Here 1 denotes the formal identity element whose operations are defined as 1 · x = x · 1 = x. Moreover C_S is isomorphic to S^1: (x ↦ s_1 · x) ∘ (x ↦ s_2 · x ) = (x ↦ s_1 · s_2 · x) This finishes the proof, as it is not hard to see that S is a group if and only if S^1 is a group. §.§ Composition theorem We are now ready to formulate and prove the main theorem of this article: Let M be a functor that is both a monad and a comonad, for which there exists a : M A × A → M A, that satisfies the axioms mentioned in this section, i.e.: flatten-extract, get-put, put-get, put-associativity, and flatten-expand. Then, the class of M-definable transductions is closed under compositions. The reminding part of this section is dedicated to proving <ref>. After unfolding the definitions, this boils down to showing that for each pair of M-algebras (S_1, ∏_1), (S_2, ∏_2), and for every h_1 : Σ→ S_1, h_2 : Γ→ S_2, λ_1: S_1 →Γ, λ_2 : S_2 →Δ, there exists an M-algebra (S_3, ∏_3) and functions h_3 : Σ→ S_3, λ_3 : S_3 →Δ, that make the following diagram commute: M Σ M S_1 M MS_1 M S_1 M Γ M S_3 M S_2 M M S_3 M M S_2 M S_3 M S_2 M Δ["M h_1", from=1-1, to=1-2] ["δ", from=1-2, to=1-3] ["M ∏_1", from=1-3, to=1-4] ["M λ_1", from=1-4, to=1-5] ["M h_2", from=1-5, to=2-5] ["δ", from=2-5, to=3-5] ["M ∏_2", from=3-5, to=4-5] ["M λ_2", from=4-5, to=5-5] ["M h_3"', dashed, from=1-1, to=2-2] ["δ"', dashed, from=2-2, to=3-3] ["M ∏_3"', dashed, from=3-3, to=4-4] ["M λ_3"', dashed, from=4-4, to=5-5] As our S_3 we are going to use the following set: S_3 = S_1 × (S_1^S_1→ S_2 ) (Note that S_1^S_1 is a notation for S_1 → S_1 – we mix the arrow notation and the exponent notation for visual clarity.) Because of its similarities with the wreath product for semigroups, we call our construction for (S_3, α_3) as the generalized wreath product, or M-wreath product of (S_1, α_1) and (S_2, α_2). (In the appendix we give a definition of the classical wreath product and compare it with the generalized wreath product – reading this part of the appendix could make it easier to understand the construction presented below.) Before we define α_3, h_3 and λ_3, we describe what we would like the composition[ By <cit.>, this composition could be used to define the product operation on S_3. However, since the lemma has extra assumptions, we show this function only for intuition.] M ΣM h_3 M S_3 α_3 S_3 to do – this way we can present some intuitions behind S_3. We start with a w ∈ M Σ and we would like to produce a pair S_1 × (S_1^S_1→ S_2). The first component (i.e. S_1) is simply defined as the S_1-product of the input: M ΣM h_1 M S_1 α_1 S_1 The interesting part is the second component (i.e. S_1^S_1→ S_2), which represents the S_2 product of the input. In order to compute it, we first apply the first M-transduction (i.e. (S_1, h_1, λ_1)), and then we compute the S_2-product of the result. This means that the S_2 product depends on the S_1-context in which we evaluate the input, so we provide it as the S_1^S_1 argument (intuitively we are only interested in the functions from C_S_1, as defined in <ref>, but the definition makes formal sense for all functions S_1^S_1). 
Here is how to compute the S_2-value based on the input word w ∈ M Σ and the context c ∈ S_1^S_1: We start by computing the S_1-products of the views (while, for now, ignoring the context c): S_1^S_1× M Σ𝕀× M h_1 S_1^S_1× M S_1 𝕀×δ S_1^S_1× M M S_1 𝕀× M α_1 S_1^S_1× M S_1 Next, we apply the context c to each of the prefix products, and compute its Γ-value: S_1^S_1× M S_1 M (S_1^S_1× S_1) M 𝚊𝚙𝚙 M S_1 M λ_1 M Γ Finally, we compute the S_2-product of the result: M Γ M h_2 M S_2 α_2 S_2 We are now ready to define λ_3, h_3, and α_3. In order to compute λ_3, we compute the S_2-value in the empty context (represented by 𝕀∈ S_1^S_1), and then apply λ_2: λ_3((v_1, v_2)) = λ_2(v_2(𝕀)) The function h_3 : Σ→ S_1 × (S_1^S_1→ S_2) is defined as follows: In order to compute the S_1 value we simply apply h_1 to the input letter, and in order to compute the S_2 value given the context c ∈ S_1^S_1, we apply h_1, c, λ_1 and h_2: h_3(a) = ( h_1(a), c ↦ h_2(λ_1(c(h_1(a)))) ) Finally, we define the product operation: α_3 : M ( S_1 × (S_1^S_1→ S_2) ) → (S_1 × (S_1^S_1→ S_2)) We define α_3 using two auxiliary functions f_1 and f_2: α_3(l) = ( f_1(l), c ↦ f_2(c, l) ) The first function f_1 : M (S_1 × (S_1^S_1→ S_2)) → S_1 computes the product of the S_1-values: M ( S_1 × (S_1^S_1→ S_2) ) M π_1 M S_1 α_1 S_1 The second function f_2 : S_1^S_1× M (S_1 × (S_1^S_1→ S_2)) → S_2 is more complicated. We start by computing for each element, its view on the S_1-values while keeping its (S_1^S_1→ S_2)-value: S_1^S_1× M (S_1 × (S_1^S_1→ S_2)) 𝕀×δ S_1^S_1× M M (S_1 × (S_1^S_1→ S_2)) 𝕀×⟨ M π_1, π_2 ∘⟩ S_1^S_1× M ( (M S_1) × (S_1^S_1→ S_2) ) Then we compute the context for each of those S_1-views: S_1^S_1× M ( (M S_1) × (S_1^S_1→ S_2) ) 𝕀× M (×𝕀) S_1^S_1× M ( S_1^S_1× (S_1^S_1→ S_2) ) Next, we compose the initial context with each of the intermediate contexts: S_1^S_1× M ( S_1^S_1× (S_1^S_1→ S_2) ) M (S_1^S_1× S_1^S_1× (S_1^S_1→ S_2)) M ((∘) ×𝕀) M (S_1^S_1× (S_1^S_1→ S_2)) Now, in each position, we apply the function to the argument: M (S_1^S_1× (S_1^S_1→ S_2)) M ((x,f) ↦ f(x)) M S_2 Finally, we compute the product of the S_2 values: M S_2 α_2 S_2 This finishes the construction of (S_3, α_3). Now we need to show that it is indeed an M-algebra: The generalized wreath product (S_3, α_3), as defined above, is a valid M-algebra, i.e. for every l ∈ M M S_3, and every x ∈ S_3 it satisfies the following axioms (see <ref>): α_3 ( μ (l)) = α_3((M α_3)(l)) and α_3(η(x)) = x The proof of <ref> is quite complex – the main reason for this is that the definition of (S, α_3) is rather involved. In contrast, the idea behind the proof is straightforward: we unfold all definition and perform equational reasoning using the axioms. For this reason, we decided to formalize the proof in the Coq theorem prover – it can be found as theorems and in the attached Coq file, see <ref>. Finally, we show that the M-transduction (S_3, h_3, λ_3) computes the required compositions: The M-transduction (S_3, h_3, λ_3) is equivalent to the composition of M-transductions (S_1, h_1, λ_1) and (S_2, h_2, λ_2). Similarly as for <ref>, we prove <ref> by unfolding the definitions and applying the equational reasoning. It is called in the formalization, see <ref>. The proof of <ref> finishes the proof of <ref>. § FURTHER WORK 1. Shape-modifying transductions. Many important classes of transduction can modify the shape of their inputs. 
Examples of such classes for word-to-word transductions include regular transductions (defined, for example, by two-way transducers, or mso-transductions) or polyregular transductions (defined, for example, by for programs or mso-interpretations <cit.>). Extending the definitions of M-definable transductions to capture those classes is, in our opinion, an interesting research direction. As a first step towards this goal, let us propose the following relaxation of M-definable transduction. The output function λ is of type A → M Γ (rather than A →Γ), and the transduction is defined as follows: M ΓM h M A δ M M A M α M A M λ M M A μ M A For example, for M = L this new class corresponds where the transitions are allowed to output more than one letter (but have to output at least one letter). 2. Aperiodicity. We say that a semigroup S is aperiodic, if there is no monomorphism G → S, where G is a non-trivial group. This is an importation notion in the theory of regular languages and transducers. For example, it very often coincides with first-order definability (e.g. <cit.>). Thanks to <ref>, we can extend the definition of aperiodicity to arbitrary M-algebras (for Ms that are monads and comonads, and are equipped with the ). Studying this new notion of generalized aperiodicity could be an interesting research direction. 3. Krohn-Rhodes decompositons. The Krohn-Rhodes decomposition theorem (<cit.>) shows how to present every semigroup with wreath products of groups and a 3-element monoid called flip-flop monoid. Its original proof starts by decomposing Mealy machines, and then it shows how to decompose monoids. Since in our paper, we generalize the definitions of a group, a wreath product, and a Mealy machine, we believe that there is potential for generalizing the original Krohn-Rhodes theorem to M-algebras. 4. Other categories. A natural follow-up of this paper would be generalizing it from the category to arbitrary Cartesian closed categories. For now the biggest obstacle to such a generalization seems to be the function from <ref>. See <ref> for more details. § OMITTED DETAILS FROM <REF> §.§ Monad axioms Let us present the omitted axioms of a monad: First, both η and μ should be natural. In this particular case, this means that the following two diagrams should commute[ We hope that the notation of commutative diagrams is self-explanatory. See <cit.> for a formal description. ]: for every function f : X → Y (for the general definition of naturality, see <cit.> or <cit.>): M M X M X X M X M M Y M Y Y M Y["μ_X", from=1-1, to=1-4] ["M (M f)"', from=1-1, to=3-1] ["M f", from=1-4, to=3-4] ["μ_Y", from=3-1, to=3-4] ["f", from=1-6, to=3-6] ["M f", from=1-9, to=3-9] ["η_X", from=1-6, to=1-9] ["η_Y", from=3-6, to=3-9] In addition to being natural, η and μ should make the following diagrams commute: M M M X M M X M X M M X M M X M X M M X M X["μ_MX", from=1-1, to=1-4] ["μ_X", from=1-4, to=3-4] ["μ_X", from=3-1, to=3-4] ["M μ_X"', from=1-1, to=3-1] ["η_M X"', from=1-6, to=1-8] ["μ _X"', from=1-8, to=3-8] ["id"description, from=1-6, to=3-8] ["M η_X"', shift left, from=1-6, to=3-6] ["μ_X", from=3-6, to=3-8] § OMITTED DETAILS FROM <REF> §.§ Comonad axioms Let us present the omitted axioms of a comonad. First, both and δ have to be natural, i.e. 
for every f : X → Y they have to satisfy the following commutative equations: M X M M X M X X M Y M M Y M Y Y ["Mf", from=1-1, to=3-1] ["M (M f)", from=1-4, to=3-4] ["δ_X", from=1-1, to=1-4] ["δ_Y", from=3-1, to=3-4] ["M f", from=1-6, to=3-6] ["ε_X", from=1-6, to=1-9] ["ε_Y", from=3-6, to=3-9] ["f", from=1-9, to=3-9] In addition to being natural, δ and ε should make the following diagrams commute: M X M M X M X M M X M M X M M M X M M X M X["δ_X"', from=1-1, to=3-1] ["δ_X", from=1-1, to=1-3] ["δ_M X", from=1-3, to=3-3] ["M δ_X"', from=3-1, to=3-3] ["δ_X", from=1-5, to=1-7] ["δ_X", from=1-5, to=3-5] ["M ε_X", from=3-5, to=3-7] ["ε_MX", from=1-7, to=3-7] ["id"description, from=1-5, to=3-7] §.§ Mealy machines Mealy machines are one of the most basic, and very well-studied models of transducers. They were introduced by <cit.>. In this section, we give a full definition of Mealy machines, and show that they are equivalent to L-definable transductions. Let Σ and Γ be finite alphabets. A Mealy machine of type Σ^+ →Γ^+ consists of: * a finite set of states Q; * an initial state q_0 ∈ Q; and * a transition function: δ : Q_current state×Σ_input letter→Q_new state×Γ_output letter (Observe that contrary to a deterministic Mealy machine does not have accepting states.) A Mealy machine defines the following function[ Usually the type of a Mealy machine's function is defined as Σ^* →Γ^*. However, this does not make much difference. This is because Mealy machines are a length preserving model, so for the empty input they always return the empty output, and for a non-empty input they always return a non-empty output. It follows that the function Σ^* →Γ^* is uniquely defined by the function Σ^+ →Γ^+. ] Σ^+ →Γ^+. It starts in the initial state, and processes the input word letter by letter. For each symbol w_i, the machine transitions to a new state q' and outputs a symbol x ∈Γ, as directed by the transition function δ(q, w_i) = (q', x). When the machine has processed it entire input, the output word is obtained as the sequence of all letters from Γ calculated by the machine. For example, here is a Mealy machine of type {a, b}^+ →{c, d}^+ that calculates the function “Change the first a to c, and all other letters to d”: mealy-ex The class of transductions recognized by Mealy machines is equivalent to L-recognizable transductions. ⊆: As explained in <ref>, every L-recognizable transduction is given by a semigroup S, an input function h : Σ→ S and an output function S →Γ. Such a transduction computes the following function: a_1 a_2 … a_n_Σ^+ ↦ λ( h(a_1)), λ(h(a_1) · h(a_2)), …, λ(h(a_1) ·…· h(a_n))_Γ^+ Let us show how to translate such a transduction into a Mealy machine: We start by extending the semigroup S to a monoid S^I = S ∪{1}, where 1 the formal identity element (i.e. 1 · x = x · 1 = x for every x ∈ S^I). Now we say the set of states of the Mealy machine is equal to S', its initial state is 1, and its transition function is given by the following formula: δ(s, a) = (s · h(a), λ(s· h(a))) This way the Mealy machine computes the S-products of the h-values of the input prefixes, and outputs their λ-values, computing the same function as the original L-recognizable transduction. ⊇: Now, we are given a Mealy machine of type Σ^+ →Γ^+ given by (Q, q_0, δ), and we want to construct (S, h, λ), such that the L-transduction given by (S, h, λ) is equivalent to the initial Mealy machine. 
For this purpose let us define the behaviour of an infix w ∈Σ^+, which is an element from the following set: S = Q_The state in which the Mealy machine enters the infix from the left→Q_The state in which the Mealy machine exists the infix from the right×Γ_The letter that the Mealy machine outputs while exiting the infix Observe that those behaviours are compositional, if we know that the behaviours of words w, v ∈Σ^+ are equal respectively to f_w and f_v, then we know that the behaviour of wv is equal to the following function: f_wv(q) = f_v(π_1(f_w(q))) where π_1 : X × Y → X represents the projection to the first coordinate. This gives us a semigroup structure on the set of behaviours (where f · g is defined as g ∘π_1 ∘ f). We take this monoid of behaviours as our semigroup S. Then, we define h: Σ→ S and λ : S →Γ in the following way: h(a) = q ↦δ(q, a) λ(f) = π_2(f(q_0)) This way, the L transduction given by (S, h, λ) computes the behaviour of each prefix, and outputs the letter that the machine would output if it entered the prefix in q_0. It is not hard to see that this computes the same function as the original Mealy machine. §.§ Rational length-preserving transductions In this section, we present the definition of rational length-preserving transductions. We define them using unambiguous Mealy machines[Although, we could not find a reference to this exact model in the literature, we believe that it belongs to the field's folklore, as it can be seen as a length-preserving version of the functional NFA with output (see <cit.>, or <cit.>).] and show that they are equivalent to L̅-definable transductions. Let Σ and Γ be finite alphabets. A nondeterministic Mealy machine of type Σ^+ →Γ^+ consists of: * A finite set of states Q. * A subset I ⊆ Q of initial states, and a subset F ⊆ Q of final (i.e. accepting) states. * A transition relation: δ : Q_current state×Σ_input letter×Q_new state×Γ_output letter A run of a nondeterministic Mealy machine over an input word w ∈Σ^+ is a sequence of states, starting from an initial state q_0 ∈ I, and ending in a final state q_n ∈ F, such that for each symbol w_i of w, there is a transition in δ that reads w_i and takes the machine from state q_i-1 to state q_i. Observe that each run produces an output word in Γ^+. The machine is called unambiguous if, for every input word w ∈Σ^+, there exists exactly one run. The transduction defined by an unambiguous Mealy machine is a function Σ^+ →Γ^+ that maps a word w ∈Σ^+ to the output of the machine's only run for w. (Observe that the unambiguity of the machine guarantees that for every input, there is exactly one output, despite the machine's nondeterministic transition relation.) For example, here is an unambiguous Mealy machine of type {a, b}^+ →{a, b}^+ that computes the function “replace the first letter with the last one”: Rational-ex0.4 Observe now that there is a slight type mismatch between the types of rational length-preserving transductions and L-definable transductions. The former are of the type Σ^+ →Γ^+, and the latter of type L̅(Σ) →L̅(Γ). To deal with this mismatch, we notice that all L-definable transductions satisfy the following property (this is an immediate consequence of the definition of L̅-definable transductions): We say that a length-preserving function f : L̅ X →L̅ Y is underline-independent if: * For every v ∈L̅, the underlying word of f(v) does not depend on the position of the underline in w, i.e. 
for every w ∈ X^+: 𝚏𝚘𝚛𝚐𝚎𝚝(f( 𝚞𝚗𝚍𝚎𝚛𝚕𝚒𝚗𝚎_i(w) )) = 𝚏𝚘𝚛𝚐𝚎𝚝(f( 𝚞𝚗𝚍𝚎𝚛𝚕𝚒𝚗𝚎_j(w) )) where 𝚞𝚗𝚍𝚎𝚛𝚕𝚒𝚗𝚎_i is the function that underlines the ith element of the input, and 𝚏𝚘𝚛𝚐𝚎𝚝 is the function that casts a pointed list into a normal list (by erasing the underline). * For every v ∈L̅ the index of the underlined position in v is equal to the index of the underlined position in f(v). Observe now that the following function is a bijection between length-preserving, underline-independent functions L̅(Σ) →L̅(Γ) and length-preserving functions Σ^+ →Γ^+: ϕ(f) = 𝚏𝚘𝚛𝚐𝚎𝚝∘ f ∘𝚞𝚗𝚍𝚎𝚛𝚕𝚒𝚗𝚎_1 From now on, we are going to use this bijection implicitly, equating the two types of functions. Next, let us show that unambiguous Mealy machines are equivalent to L̅-definable transductions. We start with the proof of <ref> (the lemma and its proof are based on <cit.>): <ref> For every L̅-algebra (A, ∏), there are two monoids M_L and M_R, together with functions h_L : A → M_L, h_R : A → M_R, such that the value of every A-product ∏([a_1, …, a_i, …, a_n]) depends only on: * the M_L-product of the prefix (i.e. h_L(a_1) ·…· h_L(a_i-1)), * the M_R-product of the suffix (i.e. h_R(a_i+1) ·…· h_R(a_n)), and * the exact A-value of the underlined element (a_i). Moreover, if A is finite then both M_L and M_R are finite as well. For every element a ∈ A, we define its left transformation to be the following function of type A → A: x ↦∏([a, x]) Observe that the set of all left transformations equipped with function composition forms a monoid. This is because, thanks to the associativity axiom, the left behaviour of ∏[x, y] is equal to the composition of left behaviours of x and y. We define M_L to be the monoid of left transformations, and h_L to be the function that maps elements of A to their left behaviours. Values M_R and h_L are defined analogously, but for right behaviours. Observe that the value ∏[a_1, …, a_i, …, a_n] can be computed as s(p(a_i)), where p is the M_L product of the prefix, and s is the M_R product of the suffix (as defined in the statement). We are now ready to prove the equivalence of nondeterministic Mealy-machines and L̅-definable transductions: The class of transductions computed by unambiguous Mealy machines is equal to L̅-definable transductions.   ⊆: A L̅ definable transduction is given by a L̅-algebra (A, α), an input function h : Σ→ S and an output function λ : A →Γ. The i-th letter of the output is then computed as λ(α([h(w_1), …, h(w_i), …, w_n]]). By <ref>, we know that there are two monoids M_R, M_L, and functions h_R, h_L, g, such that this can be computed as g(p_L, h(w_i),s_R), where p_L = h'_L(w_1) ·… s_R = h'_L(w_i-1) and p_R = h'_R(w_i+1) ·…· h'_R(w_n), for h'_L = h_L ∘ h and h'_R = h_R ∘ h. Based on this observation we can define an unambiguous Mealy machine. Intuitively the machine is going to remember in its state the M_L-product of the prefix and the M_R-product of the suffix, and it is going to use g to compute the output letters. Formally, the machine's set of states is equal to M_L × M_R, its initial states are {1}× M_R and its final states are M_L ×{1}. Finally, its transition function consists of the following tuples, for every m_L ∈ M_L, m_R ∈ M_R, and x ∈Σ: ((m_L, m_R · h'_R(x))_previous state, x_input letter, (m_L · h'_L(x), m_R)_next state,g(m_L, h(x), m_R)_output letter) Thanks to this definition, we know that the only correct run for an input w ∈Σ^* is the run that correctly evaluates the monoid products for all prefixes and suffixes. 
It follows that the machine is unambiguous, and correctly computes the output of the original L̅-transduction. ⊇: Now, we are given an unambiguous Mealy machine Σ^+ →Γ^+ defined by some (Q, I, F, δ), and we show how to transform it into a L̅-definable transduction. We start by defining the transition semigroup for the Mealy machine. It consists of behaviours, which are analogous to the deterministic behaviours from <ref>, but are relations instead of functions, and they ignore the output letter (in other words, it is the transition monoid for the underlying NFA, as defined e.g. in <cit.>): M = Q_The state in which the Mealy machine enters the infix from the left×Q_The state in which the Mealy machine exists the infix from the right The product operation in M is simply the composition of relations: f · g = g ∘ f. Let us now use M to define the L̅-algebra (A, α). We start with the underlying set A = M ×Σ× M. Before we define the product operation, let us show how to cast element of A to M: t(m_1, a, m_2) = m_1 ·δ(a) · m_2 where δ(a) : Q × Q is the partial application of the transition relation (which computes the behaviour for the single letter a). We are now ready to define the product: α( [ (p_1, a_1, s_1), …, (p_i, a_i, s_i), … (p_n, a_n, s_n)]) = (t(p_1, a_1, s_1) ·…· t(p_i-1, a_i-1, s_i-1) · p_i, a_i, s_i · t(p_i+1, a_i+1, s_i+1) · t(p_n, a_n, s_n)) It is not hard that this α satisfies the algebra axioms (the singleton-mult axiom is straightforward, and the associativity axiom follows from the associativity of M). Next, we define h : Σ→ A as h(a) = (1, a, 1). Finally, we define λ : A →Γ, in the following way: λ(p, a, s) = b, if there is a transition q a/b q', an initial state q_0 and a final state q_n such that (q_0, q) ∈ p and (q', q_n) ∈ s. Thanks to the unambiguity of the Mealy machine, we know that λ is well-defined. We finish the proof by noting that the L̅-transduction defined by (A, α), h and λ is by design equivalent to the original Mealy machine. § OMITTED DETAILS FROM <REF> §.§ Naturality of The naturality of means that for every function f : X → Y, the following diagram commutes (for the general definition of naturality, see <cit.> or <cit.>): M X × X M X M Y × Y M Y["𝚙𝚞𝚝"description, from=1-1, to=1-3] ["(M f) × f"description, from=1-1, to=3-1] ["𝚙𝚞𝚝"description, from=3-1, to=3-3] ["M f"description, from=1-3, to=3-3] §.§ The flatten-expand axiom In this section we give a step-by-step example of the flatten-expand axiom for M = L, we start by restating the axiom: M M M A M M M A M M A M M A M A["μ_A"description, from=3-1, to=4-3] ["δ_A"description, from=4-3, to=3-5] ["μ_M A"description, from=1-4, to=3-5] ["δ_M A"description, from=3-1, to=1-2] ["M 𝚠𝚘𝚛𝚔", from=1-2, to=1-4] where 𝚠𝚘𝚛𝚔 is defined as the following composition: M M A ⟨𝕀, ⟩ M M A × M A 𝕀×δ M M A × M M A 𝚜𝚝𝚛𝚎𝚗𝚐𝚝𝚑 M (M M A × M A) M M M M A M μ M A For the purpose of our example, let us consider A = {1, …, 7}, and let us consider the following input [[1, 2], [3, 4], [5, 6, 7]] ∈ A. The bottom path works as follows: [[1, 2], [3, 4], [5, 6, 7]] μ [1, 2, 3, 4, 5, 6] δ [[1], [1, 2], [1, 2, 3], [1, 2, 3, 4], [1, 2, 3, 4, 5], [1, 2, 3, 4, 5, 6], [1, 2, 3, 4, 5, 6, 7]] Now let us focus on the top path. Here is the first step: [[1, 2], [3, 4], [5, 6, 7]] δ[[[1,2]], [[1, 2], [3, 4]], [[1, 2], [3, 4], [5, 6, 7]]] The next step in the top path is M 𝚠𝚘𝚛𝚔 which applies the work function in parallel to all elements of the top list. Let us show, how it works on the last element, i.e. 
on [[1, 2], [3, 4], [5, 6, 7]]: [[1, 2], [3, 4], [5, 6, 7]] ⟨ id, ⟩ ([[1, 2], [3, 4], [5, 6, 7]], [5, 6, 7] ) id ×δ ([[1, 2], [3, 4], [5, 6, 7]], [[5], [5, 6], [5, 6, 7]] ) [ ( [[1, 2], [3, 4], [5, 6, 7]], [5] ), ( [[1, 2], [3, 4], [5, 6, 7]], [5, 6] ), ( [[1, 2], [3, 4], [5, 6, 7]], [5, 6, 7] ) ] M [ [[1, 2], [3, 4], [5]], [[1, 2], [3, 4], [5, 6]], [[1, 2], [3, 4], [5, 6, 7]] ] M μ [ [1, 2, 3, 4, 5], [1, 2, 3, 4, 5, 6], [1, 2, 3, 4, 5, 6, 7]] In a similar fashion one can compute 𝚠𝚘𝚛𝚔 for the other sublists: [[1, 2]] 𝚠𝚘𝚛𝚔 [[1], [1, 2]] [[1, 2], [3, 4]] 𝚠𝚘𝚛𝚔 [[1, 2, 3], [1, 2, 3, 4]] [[1, 2], [3, 4], [5, 6, 7]] 𝚠𝚘𝚛𝚔 [ [1, 2, 3, 4, 5], [1, 2, 3, 4, 5, 6], [1, 2, 3, 4, 5, 6, 7]] Using those results of 𝚠𝚘𝚛𝚔, we can trace the top path of the diagram: [[1, 2], [3, 4], [5, 6, 7]] δ [ [[1,2]], [[1, 2], [3, 4]], [[1, 2], [3, 4], [5, 6, 7]] ] M 𝚠𝚘𝚛𝚔 [ [[1],[1, 2] ], [[1, 2, 3],[1, 2, 3, 4]], [[1, 2, 3, 4, 5], [1, 2, 3, 4, 5, 6], [1, 2, 3, 4, 5, 6, 7]]] μ [[1], [1, 2], [1, 2, 3], [1, 2, 3, 4], [1, 2, 3, 4, 5], [1, 2, 3, 4, 5, 6], [1, 2, 3, 4, 5, 6, 7]] Which means that both in the top and in the bottom path, we obtain the same result. §.§ Flatten-expand axiom as bialgebra In this section, we show how to describe the flatten-expand axiom in the language of bialgebras. We start with the definition of coalgebras for a comonad (which is the dual of <ref>): A coalgebra for a comonad W is a set S together with a multiplication function β : S → W S, that makes the following diagrams commute: S WW S S W S WS WW S S ["β"description, from=1-1, to=1-3] ["β"description, from=1-1, to=3-1] ["W β"description, from=1-3, to=3-3] ["δ"description, from=3-1, to=3-3] ["β"description, from=1-4, to=1-6] ["ε"description, from=1-6, to=3-6] ["id"description, from=1-4, to=3-6] We are now ready to present the definition of a bialgebra[ Defined in <cit.>. See also <cit.>. ]: Let M be a monad, and W be a comonad. A (M, W)-bialgebra is a set S equipped with three functions: α : M S → S β : S → M S γ : M W S → W M S Such that (S, α) is an M-algebra, (S, β) is a W-coalgebra, and the following diagram commutes: M W S W M S M S W S S ["α"description, from=2-1, to=3-3] ["β"description, from=3-3, to=2-5] ["M β"description, from=2-1, to=1-2] ["M α"description, from=1-4, to=2-5] ["γ"description, from=1-2, to=1-4] We are going to be intereseted in the case where the monad and the comonad is the same functor M. Observe now, that for every set X, the set M X equipped with the μ operation forms an M-algebra, called the free algebra over X. (In this case the axioms of an algebra coincide with the axioms of a monad). Similarly, the set M X equipped with the δ operation, forms an M-coalgebra, called the free coalgebra. We are now ready to specify the flatten-expand axiom in terms of a bialgebra. It states that for every X, the set M X equipped with μ (i.e. the free algebra structure), δ (i.e. the free coalgebra structure), and γ defined as the following composition, is a bialgebra: γ : M M M X δ M M M M X M⟨ M , ⟩ M (M M X × M M X) M M M (M M X × M X) M M M M M M M M X μ M M M X After unfolding the bialgebra definition, this means that the following diagram commutes: M M M X M M M X M M X M M X M X["μ"description, from=2-1, to=3-3] ["δ"description, from=3-3, to=2-5] ["M δ"description, from=2-1, to=1-2] ["M μ"description, from=1-4, to=2-5] ["γ"description, from=1-2, to=1-4] Using basic equational reasoning, we can show that the top path in this diagram is equal to the top path in the flatten-expand axiom. (This is formalized as , see <ref>.) 
It follows that the bialgebraic formulation is equivalent to the flatten-expand axiom. §.§ Omitted proofs from <ref> <ref> For every f, g ∈ C_A, it holds that f ∘ g ∈ C_A. We start the proof by defining the following operation : M A × M A → M A: M A × M A M η_A ×𝕀 M M A × M A _M A M M A μ_A M A The intuition behind this operation is that it overrides the focused element, with the given element of M A. For more intuition, consider the following example in L and observe that the element 3 disappears: ([1, 2, 3], [4, 5, 6]) = [1, 2, 4, 5, 6] In the setting of lists this overriding behaviour might seem counter-intuitive, as there already is a more natural definition of concatenation. However, the overriding behaviour is clearly defined for all Ms, and the usual definition of concatenation does not seem to be generalizable (for example for M's such as L̅ and T_𝒮). For example, in L̅, works as follows: ([1, 2, 3], [4, 5, 6] ) = [1, 4, 5, 6, 2, 3] Using the put-assoc axiom, one can show the context of concatenation is equal to the composition of contexts (this is verified as in Coq, see <ref>): For every k, l ∈ M A, it holds that: _k ∘_l = _(k, l) This finishes the proof of the <ref>. §.§ Wreath Product In this section, we show how to compose two L-transductions using the usual definition of a wreath product <cit.>. This serves two purposes: the first one is to relate the generalized wreath product from <ref> with the classical wreath product, the second one is to give more intuitions about the proof of <ref>. Remember that a L-transduction of type Σ^+ →Γ^+ is given by a semigroup S, and functions h : Σ→ S, λ : S →Γ, and is computed according to the following formula: w_1 … w_n →λ(h(w_1)), …, λ(h(w_1) ·…· h(w_n)) We are given two L-transductions F_1 : Σ^* →Γ^* and F_2 : Γ^* →Δ^*, given by (S_1, h_1, λ_1) and (S_2, h_2, λ_2), and we would like to construct (S_3, h_3, λ_3), such that their L-transduction computes the composition Σ^* F_1Γ^* F_2Δ^*. As mentioned before, for S_3, we are going to use the wreath product of S_1 and S_2. The intuition behind S_3 is that it represents the S_1- and S_2-products for every possible infix. Before we define S_3, let us show what it means to compute the S_2-product of an infix from Σ^*. For example, consider the following infix w ∈Σ^*: unknown preffix a_1 a_2 a_3 a_4 unknown suffix In order to compute the S_2-product of w, we need to first transform it using F_1. This is slightly problematic, as the output of F_1 will usually depend on the unknown prefix that comes before w. However, the only information we need about that prefix is its S_1-product. For example, if we know that S_1-product of the prefix is equal to s, we can compute the output of F_1 on w as follows (where s_i := h_1(a_i)): unknown preffix λ_1(s · s_1) λ_1(s · s_1 · s_2) λ_1(s · s_1 · s_2 · s_3) λ_1(s · s_1 · s_2 · s_3 · s_4) unknown suffix Now we can compute the S_2-product of w by applying h_2 to each letter and multiplying the results (in S_2): h_2(λ_1(s · s_1)) · h_2(λ_1(s · s_1 · s_2)) · h_2(λ_1(s · s_1 · s_2 · s_3)) · h_2(λ_1(s · s_1 · s_2 · s_3 · s_4)) According to this reasoning, the S_2 value of an infix can be represented as a function of the following type (where S_1^1 denotes S_1 adjoined with a formal identity element): S_1^1_Given the S_1-product of the preffix (where 1 represents the empty preffix)→S_2_What is the S_2-product of the infix? 
We are now ready to define S_3 as the following set: S_1_The S_1-value of the infix×S_1^1 → S_2_The S_2-value of the infix as explained above The product operation on S_3 is defined with the following formula, which follows from our definition of the S_2-values: (s, f) · (t, h) = (s · t, x ↦ f(x) · g(x · s)) It is now not hard to see that S_3 equipped with the following h_3 and λ_3 recognizes the composition F_2 ∘ F_1: h(a) = ( h_1(a), x ↦ h_2(λ_1 (x · h_1(a))) ) λ(x, f) = λ_2(f(1)) Let us finish this section by comparing this definition of a wreath product, with our definition of the generalized wreath product from <ref>, where it is defined as: S_1 × (S_1^S_1→ S_2) Remember that the set S_1^S_1 is meant to represent S_1-contexts, so we can think of this type as S_1 × (C_S_1→ S_2). (Actually, this is how we could have defined the generalized wreath product from the beginning. We used the more general definition for the sake of simplifying the definition of the product, and the (formal) proof of associativity, but we believe that this finer definition would work as well). Since, as explained in <ref>, for M = L the set of contexts C_S is isomorphic to S^1, this can be further simplified to S_1 × (S_1^1 → S_2), which coincides with the definition of the wreath product presented in this section. § OMITTED DETAILS FROM <REF> §.§ General Cartesian closed categories In this section we discuss possible strategies and obstacles for generalizing the results of this paper from to arbitrary Cartesian closed categories. We say that a category is called Cartesian closed if it admits products X × Y and function spaces X ⇒ Y (see <cit.> for the full definition). As it turns out, our -formalization of the results mostly uses a subset of λ-calculus that can be automatically translated into morphism in every Cartesian closed category (see <cit.>). The only exception is the function , which is defined in the following way: (x, l) = (M (λ y . (x, y))) l This causes problems, because in the Cartesian closed categories, the functor M can only be applied to arrows of the category (i.e. X → Y) and not to the exponent objects (i.e. X ⇒ Y). (In particular, not every functor in every Cartesian closed category has a strength.) One way to deal with this problem is to require that the functor M should come equipped with a (smilarly to how it comes equipped with a function), and axiomatize its expected behaviour. We have tried this approach with the usual axioms of a strong functor, strong monad, and a strong comonad[We took the axioms of a strong functor and a strong monad from <cit.>. For the axioms of a strong comonad, we took the duals of the axioms for a strong monad.], but we were not able to prove <ref> within this axiomatization. Here is an example of a rather basic property, that does not seem to follow from this usual set of axioms: M X (X⇒ X) × M X M ( (X ⇒ X) × X) M X["id"description, from=1-1, to=3-5] ["⟨𝚌𝚘𝚗𝚜𝚝_𝚒𝚍, id⟩", from=1-1, to=1-3] ["𝚜𝚝𝚛𝚎𝚗𝚐𝚝𝚑", from=1-3, to=1-5] ["M 𝚎𝚟𝚊𝚕"description, from=1-5, to=3-5] In the diagram 𝚎𝚟𝚊𝚕 : (Y ⇒ X) × Y → X denotes the function application (from the definition of a Cartesian closed category), and 𝚌𝚘𝚗𝚜𝚝_𝚒𝚍 : Z → (X ⇒ X) denotes an arrow that maps every argument to the identity function (formally, this is defined as 𝚌𝚘𝚗𝚜𝚝_𝚒𝚍 = Λ(π_2), where Λ comes from the definition of a Cartesian closed category and π_2 : Z × X → X is the second projection). Next, we tried adding this diagram as one of the axioms and proving <ref>. However we have encountered other problems. 
So, for the sake of simplicity, we have decided to restrict the scope of this paper to . However, we believe that it should be possible to find an axiomatization of that would admit a proof of <ref>. Moreover, it is possible that such an axiomatization already exists in the literature and we were simply not able to find it. We would welcome any suggestion of such an axiomatization. § FORMALIZATION IN COQ In this section, we present the framework of our Coq formalization, focusing on key definitions and the statements of main lemmas. To streamline our exposition, we exclude the formal proofs, and certain auxiliary lemmas deemed peripheral to our core arguments. The entire Coq file is available under the following link[ <https://github.com/ravst/MonadsComonadsTransducersCoq> ]. §.§ Modelling In this section, we show how we have modelled our theory in Coq. We start by fixing the functor M: [language=Coq] Parameter M : Type -> Type. Parameter mapM : forall A B, (A -> B) -> M A -> M B. Notation " # f " := (mapM f). Here is the mapping on sets, and is the mapping on functions. We also introduce a notation for the mapping on sets, where M f is written as #𝚏. Next, we assert the axioms of a functor: [language=Coq] Axiom mapCompose : forall A B C (f : B -> C) (g : A -> B) (mx : M A), (#f)((#g) mx) = (#(compose f g)) mx. Axiom mapId : forall A (x : M A), (# id) x = x. In a similar fashion, we assert the monad structure on M: [language=Coq] (*Flatten operation*) Parameter mult : forall A, M (M A) -> M A. (*Singleton operation*) Parameter unit : forall A, A -> M A. (*Monad operations are natural*) Axiom multNatural : forall A B, forall f : A -> B, forall x, mult((#(#f)) x) = (#f)(mult x). Axiom unitNatural : forall A B, forall f : A -> B, forall x, unit(f x) = (#f)(unit x). (*Monad operations satisfy monad axioms*) Axiom multAx : forall A (x : M (M (M A))), mult (mult x) = mult ((#mult) x). Axiom multMapUnitAx : forall A (x : M A), mult ((# unit) x) = x. Axiom multUnitAx : forall A (x : M A), mult (unit x) = x. Next, we assert the comonad structure on M: [language=Coq] (*Expand*) Parameter coMult : forall A, M A -> (M (M A)). (*Extract*) Parameter coUnit : forall A, M A -> A. (*Comonad operations are natural*) Axiom coUnitNatural : forall A B, forall f : A -> B, forall x, f (coUnit x) = (coUnit ((#f) x)). Axiom coMultNatural : forall A B, forall f : A -> B, forall m, coMult ((#f) m) = (#(#f)) (coMult m). (*Comonad operations satisfy comonad axioms*) Axiom coMultAx : forall A (x : M A), coMult (coMult x) = (#coMult) (coMult x). Axiom coUnitCoMultAx : forall A (x : M A), coUnit (coMult x) = x. Axiom mapCoUnitComultAx : forall A (x : M A), (#coUnit) (coMult x) = x. Next, we introduce the operation, and assert that it is natural: [language=Coq] Parameter put : forall A, ((M A) * A) -> M A. (*Put is natural*) Axiom putNatural : forall A B (f : A -> B) (xs : M A) (x : A), (#f) (put (xs, x)) = put ((#f) xs, f x). Then, we introduce the coherence axioms. In the following code, we use the notation <| 𝚏, 𝚐|> for the pairing of two functions ⟨ f, g ⟩, defined as ⟨ f, g ⟩ x = (f x, g x), and the notation * for function composition. [language=Coq] Axiom flattenExtract : forall A, forall (x : M (M A)), coUnit (mult x) = coUnit (coUnit x). Axiom singletonExpand : forall A, forall (x : A), coMult (unit x) = (#unit) (unit x). Axiom singletonExtract : forall A, forall (x : A), coUnit (unit x) = x. Axiom getPut : forall A, forall (x : M A) (y : A), coUnit (put (x, y)) = y. 
Axiom putGet : forall A, forall (x : M A), put (x, coUnit x) = x. Axiom putPut : forall A l (x : A) y, put (put(l, x), y) = put (l, y). Axiom putAssoc : forall A, forall (x : M (M A)) (y: M A) (z : A), put ((mult (put (x, y))), z) = mult (put (x, put (y, z))). Axiom singletonPut : forall A (x : A) y, put(unit x, y) = unit y. (*The Set-specific definition of strength*) Definition str X Y (x : X * (M Y)) : (M (X*Y)) := match x with (x1, x2) => (#(fun y => (x1, y))) x2 end. Axiom flattenExpand : forall A (x : M (M A)), coMult (mult x) = mult ( (#(# mult)) ( ((#(#put)) ((# str) ((#<| id, coMult * coUnit|>) (coMult x)))))). Next, we define the properties of an algebra: [language=Coq] Definition associative S (alpha : M S -> S) : Prop := forall l, alpha ((#alpha) l) = alpha (mult l). Definition unitInvariant S (alpha : M S -> S) : Prop := forall s, alpha (unit s) = s. Finally, we define an M-definable transduction: [language=Coq] Definition mTransduction X Y S (alpha : M S -> S) (h : X -> S) (lambda : S -> Y) : M X -> M Y := (#lambda) * (#alpha) * coMult * (#h). §.§ The composition theorem In this section, we present the formal statement of the composition theorem. We start with the context: two M-definable transduction F : M X → M Y and G : M Y → M Z: [language=Coq] (*We are given three alphabets *) Variable X : Set. Variable Y : Set. Variable Z : Set. (*We are given and M-transduction F : M X -> M Y*) Variable S1 : Set. Variable prod1 : M S1 -> S1. Variable h1 : X -> S1. Variable lambda1 : S1 -> Y. Axiom assoc1 : associative prod1. Axiom unitInvariant : unitInvariant prod1. Definition F := mTransduction prod1 h1 lambda1. (*And we are given an M-transduction G : MY -> M Z*) Variable S2 : Set. Variable prod2 : M S2 -> S2. Variable h2 : Y -> S2. Variable lambda2 : S2 -> Z. Axiom assoc2 : associative prod2. Axiom unitInvariant2 : unitInvariant prod2. Definition G := mTransduction prod2 h2 lambda2. Next, we define the generalized wreath product of S1 and S2, and use it to define a new M-definable transduction GF : M X → M Z: [language=Coq] Definition S3 : Type := S1 * ((S1 -> S1) -> S2). Definition prod3 (l : M S3) : S3 := let ctx1 (l : M S1) (x : S1) : S1 := prod1 (put (l, x)) in let tmp1 (l : M S3) : ((S1 -> S1) -> S2) := proj2 (coUnit l) in let tmp2 (c : S1 -> S1) (l : M S3) : (S1 -> S1) := c * (ctx1 ((#proj1)(l))) in (prod1 ((#proj1) (l)), fun c => prod2 ( (#app2) (((#<|tmp1, tmp2 c |>) (coMult l))))). Definition h3 (x : X) : S3 := (h1 x, fun c => h2 (lambda1 (c (h1 x)))). Definition lambda3 (s : S3) : Z := match s with | (_, f) => lambda2 (f (fun a => a)) end. Definition GF := mTransduction prod3 h3 lambda3. Next, we prove that GF is equal to G ∘ F. (As mentioned before, we omit the proof in this paper, but it is available in the Coq file.) [language=Coq] Theorem compositionCorrect : GF = G * F. Finally, we prove that S3 is a valid algebra (again, we omit the proofs): [language=Coq] Theorem S3Associative : associative prod3. Theorem S3UnitInvariant : unitInvariant prod3. §.§ Contexts In this section, we present the formalization of the results from <ref>. We start with an algebra S: [language=Coq] Variable S : Set. Variable prod : M S -> S. Axiom prodAssoc : associative prod. Axiom prodUnit : unitInvariant prod. We define contexts: [language=Coq] Definition ctx (l : M S) (x : S) : S := prod (put (l, x)). And we prove the required lemmas, respectively Lemmas <ref>, <ref>, and <ref>. Here <* 𝚏, 𝚐*> denotes the function f × g, defined as (f × g)(x, y) = (f x, g y). 
[language=Coq] Lemma ctxPutInvariant : forall l a, ctx l = ctx (put (l, a)). Lemma ctxUnitId : forall x, ctx (unit x) = id. Definition concat : M S * M S -> M S := mult * put * <* (#unit), id *>. Lemma concatCtx : forall (v w : M S), ctx v * ctx w = ctx (concat (v, w)). §.§ Flatten-expand axiom Finally, we formalize the equivalence of the flatten-expand axiom and the bialgebraic formulation (see <ref>). The left-hand side of the equality is the top path in the flatten-expand axiom, and the right-hand side is the top path in the bialgebraic formulation: [language=Coq] Theorem flattenExpandAltThm : forall A, forall (x : M (M A)), mult ( (#(# mult)) ( ((#(#put)) ((# str) ((#<| id, coMult * coUnit|>) (coMult x)))))) = ((#mult) * mult * (#(# put)) * (# str) * (#<| #coUnit, coUnit|>) * coMult * (#coMult)) x.
http://arxiv.org/abs/2407.03031v1
20240703115102
Entangled pairs in evaporating black holes without event horizons
[ "Ivan Agullo", "Paula Calizaya Cabrera", "Beatriz Elizaga Navascués" ]
gr-qc
[ "gr-qc" ]
http://arxiv.org/abs/2407.02313v1
20240702144610
A Casimir-like probe for 4D Einstein-Gauss-Bonnet gravity
[ "Syed Masood", "Said Mikki" ]
gr-qc
[ "gr-qc" ]
syed@intl.zju.edu.cnthamersyed@gmail.com smikki@illinois.edu ^1Zhejiang University/University of Illinois at Urbana-Champaign Institute (the ZJU-UIUC Institute), Zhejiang University, 718 East Haizhou Road, Haining 314400, China. ^2Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana IL 61801, USA § ABSTRACT Virtual transitions in a Casimir-like configuration are utilized to probe quantum aspects of the recently proposed four-dimensional Einstein-Gauss-Bonnet (4D EGB) gravity. This study employs a quantum optics-based approach, wherein an Unruh-DeWitt detector (modeled as a two-level atom) follows a radial timelike geodesic, falling freely into an uncharged, nonrotating black hole described by 4D EGB gravity, becoming thermalized in the usual Unruh manner. The black hole, asymptotically Minkowskian, is enclosed by a Casimir boundary proximate to its horizon, serving as a source for accelerated field modes that interact with the infalling detector. Observations are conducted by an asymptotic infinity observer, assuming a Boulware field state. Our numerical analysis reveals that, unlike in Einstein gravity, black holes in 4D EGB gravity can either enhance or suppress the intensity of acceleration radiation, contingent upon the Gauss-Bonnet coupling parameter α. Specifically, we observe radiation enhancement for negative α and suppression for positive α. These findings offer substantial insights into quantifying the influence of higher-curvature contributions on the behavior of quantum fields in black hole geometries within a 4D spacetime. A Casimir-like probe for 4D Einstein-Gauss-Bonnet gravity Said Mikki^1,2 July 8, 2024 ========================================================= § INTRODUCTION Considerable efforts have been made over the past few decades to uncover the deep connection between quantum mechanics, gravity, and thermodynamics <cit.>. Among these endeavors, the discovery of Hawking radiation from black holes <cit.> and the Unruh effect for accelerated observers in flat Minkowski spacetime <cit.> stand out as pivotal. Another significant phenomenon is Parker's idea of particle emission due to the expansion of the Universe <cit.>. In all these cases, the quantum state of the field is altered by a dynamic background spacetime geometry or the state of motion, resulting in the creation of real particles—an effect arising from the violation of Poincaré invariance <cit.>. This is similar to the dynamical Casimir effect (DCE) <cit.>, where accelerated plates or boundaries induce the quantum vacuum to radiate particles. Consequently, this scenario fosters a rich intersection of quantum fields, boundaries, and spacetime geometries <cit.>. With the advent of precise experimental and observational setups, it has become possible over the decades to test Einstein's general relativity (GR) in extreme gravity regimes. So far, GR has consistently matched observational data, with milestone achievements including gravitational wave detection <cit.>, black hole shadows <cit.>, and neutron star mergers <cit.>. However, physicists have long recognized that GR cannot address certain fundamental issues in the Universe, such as the existence of singularities, cosmological acceleration, dark matter, and a consistent merger of quantum mechanics and gravity. Thus, it is evident that a framework beyond GR is needed to resolve these challenges <cit.>. Several alternatives to GR predict additional higher-curvature contributions to the gravitational action. 
A significant framework within this class of models originates from the works of Lanczos <cit.> and Lovelock <cit.>, leading to the well-known Einstein-Gauss-Bonnet (EGB) theory. It has been established that EGB gravity does not introduce modifications to gravitational dynamics unless coupled with additional field degrees of freedom or in spacetime dimensions D≥ 5. One example of such additional fields is the dilaton field <cit.>. In addition to this, EGB gavity theories yield equations of motion that are quadratic in metric tensor. This quadratic nature is a unique feature of EGB gravity among all other alternatives to GR. The interesting coincidence is that the low energy effective descriptions of heterotic string theories also posit quadratic contributions to the dynamics of Einstein gravity <cit.>. It may be noted that the quadratic nature of equations of motion suffice to get rid of Ostrogradsky instability <cit.> and thus guarantees physicality of the dynamics. Furthermore, EGB gravity theories are characterized by equations of motion that are quadratic in the metric tensor. This quadratic nature distinguishes EGB gravity from other alternatives to GR. An intriguing coincidence arises in that the low-energy effective descriptions of heterotic string theories also incorporate quadratic contributions to the dynamics of Einstein gravity <cit.>. Importantly, the quadratic form of the equations of motion resolves the Ostrogradsky instability <cit.>, ensuring the physical viability of the theory. Recently, Glavan and Lin <cit.> addressed the question of Gauss-Bonnet (GB) contributions in 4-dimensional spacetime geometry by proposing a specific rescaling of the GB coupling parameter α→α/(D-4), where D denotes the spacetime dimensionality. This rescaling ensures a well-defined limit as D → 4. The resulting model maintains quadratic behavior to prevent Ostrogradsky instability, yet it departs from the implications of the well-known Lovelock theorem <cit.>. It is noteworthy that no additional field coupling is required in this model. As a new phenomenological competitor to Einstein's General Relativity (GR), this model has sparked rigorous debates over the years. Some investigations include consistency checks <cit.>, studies of black hole shadows and quasinormal modes <cit.>, analysis of geodesics <cit.>, particle accelerator models <cit.>, and a wide array of thermodynamic analyses <cit.>. A comprehensive overview of 4D-EGB gravity, covering its various aspects, can be found in a review article by Fernandes et al. <cit.>. Recognizing the significance of the findings in Ref. <cit.>, we are driven to investigate the potential quantum radiative signatures of 4D EGB gravity using elements from quantum optics and Casimir physics. Our approach involves a quantum optical cavity positioned with one end near a black hole horizon and the other at asymptotic infinity. Within this setup, a two-level Unruh-DeWitt detector (an atom) falls freely towards the black hole. Virtual transitions arising from the interaction between the detector and the field lead to acceleration radiation, which carries distinct imprints of the underlying gravitational background. Such a setup has been discussed in Ref. <cit.>, where it was demonstrated that, under appropriate initial conditions, a detector near a Schwarzschild black hole emits radiation with a thermal spectrum. 
This unique radiative emission, known as Horizon Brightened Acceleration Radiation (HBAR), occurs when the detector is in free fall towards the black hole. This concept has been further explored in various contexts, revealing profound connections between the equivalence principle, quantum optics, and the Hawking-Unruh effect <cit.>. It also underscores connections to the Dynamical Casimir Effect (DCE) and moving mirror models <cit.>, frequently employed in studying quantum field behavior in curved spacetimes. But while the original work in Ref. <cit.> considers detectors moving along timelike geodesics, subsequent studies have shown that similar phenomena can occur for detectors following null geodesics <cit.>. This novel radiative emission phenomenon can be attributed to the near-horizon physics and conformal quantum mechanics of black holes <cit.>. Given that quantum field dynamics can elucidate the nature of underlying spacetime geometry <cit.>, we view the aforementioned setup as a potential avenue to probe 4D EGB gravity at a deeper level. Through numerical analysis, we demonstrate that 4D EGB gravity can imprint distinct features on the radiation spectrum compared to Einstein's GR, encompassing both negative and positive values of the Gauss-Bonnet coupling parameter. The structure of the paper is as follows. The next Sec. <ref> introduces the basics of 4D EGB black hole geometry, accompanied by discussions on the wave equation and the vacuum field state. In Sec. <ref>, we compute the excitation probability or the detector response function of the falling detector. Sec. <ref> explores possible interpretations of our numerical findings. Finally, conclusions are drawn in Sec. <ref>. § CONCEPTUAL ASPECTS: OUR SPACETIME GEOMETRY AND THE CHOICE OF FIELD MODES The static, spherically symmetric metric of an uncharged and nonrotating black hole in 4D EGB gravity is given by <cit.> ds^2=-f(r) dt^2+1/f(r)dr^2+r^2( dθ ^2+sin^2θdϕ ^2), where[We use natural units c=G=ħ=1 throughout.] f(r)=1+r^2/2α(1±√(1+8α M/r^3)), where the ± sign inside the brackets denotes the Gauss-Bonnet (GB) and GR branches, respectively. Here, we focus solely on the GR branch, as the GB branch is deemed unphysical <cit.>. To determine the event horizon radius, we set f(r)=1+r^2/2α(1-√(1+8α M/r^3))=0, which is equivalent to r^2-2Mr+α=0 and yields r_±=M±√(M^2-α), of which the one with the plus sign is the real exterior horizon of the black hole. Thus, our event horizon is located at r_ g = r_+ = M + √(M^2 - α). The parameter α can take both positive and negative values within the range -32M^2 ≤α≤ 4M^2, as indicated in Refs. <cit.> (also see <cit.>). It is evident that a positive GB coupling constant α decreases the black hole horizon radius, whereas a negative α increases it. The limit α = 0 corresponds to the Schwarzschild black hole in GR. These relationships are illustrated graphically in Fig. <ref>. We also note that r_+r_- = α, and as r → 0, the metric components remain finite. This can be observed from Eq. (<ref>), where lim_r → 0 f(r) = 1. However, the finiteness of the metric components does not guarantee the absence of singularities due to the fact that the Ricci scalar R and the Kretschmann scalar R_μνσδR^μνσδ vary as R ∝ r^-3/2 and R_μνσδR^μνσδ∝ r^-3, respectively. It should be noted that for the Schwarzschild case, the Kretschmann scalar near r=0 varies as r^-6, indicating that the GB contribution significantly weakens the singularity by several orders of magnitude <cit.>.
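These statements about the horizon are easy to verify numerically. The following Python snippet (an illustration with arbitrary sample values, not the computation used for the figures) locates the exterior horizon as the largest root of f(r)=0 and compares it with the closed form r_g = M + √(M^2-α).
[language=Python]
import numpy as np
from scipy.optimize import brentq

M = 1.0  # geometric units, c = G = 1

def f(r, alpha):
    """Metric function of the 4D EGB black hole, GR branch (Schwarzschild as alpha -> 0)."""
    if alpha == 0.0:
        return 1.0 - 2.0 * M / r
    return 1.0 + r**2 / (2.0 * alpha) * (1.0 - np.sqrt(1.0 + 8.0 * alpha * M / r**3))

def horizon(alpha):
    """Exterior horizon radius: largest root of f(r) = 0."""
    # lower end of the bracket: a radius where f is real and negative
    lo = ((-8.0 * alpha * M) ** (1.0 / 3.0) + 1e-9) if alpha < 0 else M
    return brentq(f, lo, 10.0 * M, args=(alpha,))

for alpha in (-0.8, -0.4, 0.0, 0.4, 0.8):
    print(f"alpha = {alpha:+.1f}  ->  r_g = {horizon(alpha):.4f}"
          f"  (M + sqrt(M^2 - alpha) = {M + np.sqrt(M**2 - alpha):.4f})")
The output reproduces the trend stated above: positive α shrinks the horizon relative to the Schwarzschild value 2M, while negative α enlarges it.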
§.§ Detector trajectories In this section, we analyze the geodesics of the detector to compute both the coordinate time and proper (conformal) time that describe the timelike trajectory of the infalling (massive) detector. Generally, for a given Christoffel connection Γ_ρσ^μ, the complete geodesic equations are expressed as <cit.> d^2 x^μ/dτ^2+Γ_ρσ^μdx^ρ/dτdx^σ/dτ=0. Our spacetime geometry of interest exhibits spherical symmetry, and we restrict our analysis to the radial motion of the detector in the equatorial plane. Therefore, we set θ = π/2, which implies θ̇ = 0 and ϕ̇ = 0. Consequently, the following conservation equations hold: (dr/dτ)^2=ℰ^2-f(r), (dr/dt)^2=[f(r)/ℰ]^2[ℰ^2-f(r)]. Note that ℰ is a constant representing the specific energy of the detector. It is determined by the initial boundary conditions of the geodesic motion, given by ℰ^2 = f(r) |_max. Since we assume that the detector started its motion from asymptotic infinity, where the spacetime is asymptotically Minkowski flat (r →∞ implies f(r) |_max = 1), these constraints from the above equations lead to (dr/dτ)^2=1-f(r), (dr/dt)^2=f^2(r)[1-f(r)]. It should be emphasized that ℰ, which is related to the maximum of f(r), is the same for both GR and 4D EGB gravity. This value of ℰ corresponds to asymptotic infinity, where both GR and 4D EGB theories reproduce flat Minkowski geometry. Now, integrating Eq. (<ref>) along the radial trajectories from some arbitrary initial point r_ i to a final point r_ f (where r_ i > r_ f), we obtain τ =- ∫_r_ i^r_ fdr/√(1-f(r)), t=-∫_r_ i^r_ fdr/f(r)√(1-f(r)). We now substitute Eq. (<ref>) into Eq. (<ref>) in order to compute τ, resulting in τ=2 r √(√(1+8 α M/r^3)-1)tan ^-1(√(√(1+8 α M/r^3)-1)/√(2))-2 √(2) r/3 √(r^2 (√(1+8 α M/r^3)-1)/α)+τ_0. Here, τ_0 serves as an integration constant, the insignificance of which we establish for the final detector response, as detailed in Sec. <ref>. However, the complexity of the integral for t precludes straightforward analytical computation. Consequently, we resort to numerical methods and present the outcomes in Sec. <ref>. Fig. <ref> illustrates the plots of τ and t. The plots clearly illustrate that t and τ exhibit typical Schwarzschild-like behavior. Specifically, t, which represents the time measured by an asymptotic observer, diverges as the detector approaches the black hole horizon, located at zero on the rescaled radial coordinate r - r_ g. This divergence signifies that, from the perspective of this observer, the detector never actually crosses the horizon. In contrast, τ remains finite at the horizon r - r_ g, indicating that from the detector's own frame of reference, it crosses the horizon in a finite amount of proper time. This disparity highlights the causal structure of black hole horizons and is recognized as gravitational time dilation. Furthermore, in 4D EGB gravity, the coupling parameter α influences the behavior of t and τ. For positive α, which reduces the black hole size as discussed in Sec. <ref>, it takes longer for the detector to approach the horizon as α increases. Conversely, for negative α, which inflates the black hole size, the situation is reversed. §.§ Defining the vacuum state The response function, or excitation probability, to be calculated in Sec. <ref>, quantifies the detector-field coupling. To achieve this, we must obtain the appropriate field mode by solving the wave equation on the specified spacetime background. 
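Before moving on to the field modes, it may help to indicate how the trajectory integrals above can be evaluated in practice. The Python sketch below (an illustration only; the parameter values are arbitrary and the published figures were produced with the numerical methods mentioned above) integrates dτ/dr=-1/√(1-f) and dt/dr=-1/[f√(1-f)] from a finite starting radius down to a point just outside the horizon, making visible that t grows without bound there while τ stays finite.
[language=Python]
import numpy as np
from scipy.integrate import quad

M, alpha = 1.0, 0.4                      # sample parameters (geometric units)
r_g = M + np.sqrt(M**2 - alpha)          # exterior horizon: largest root of f(r) = 0

def f(r):
    return 1.0 + r**2 / (2.0 * alpha) * (1.0 - np.sqrt(1.0 + 8.0 * alpha * M / r**3))

def proper_time(r_i, r_f):
    """tau accumulated while falling radially from r_i to r_f (E = 1 infall)."""
    return quad(lambda r: 1.0 / np.sqrt(1.0 - f(r)), r_f, r_i, limit=200)[0]

def coordinate_time(r_i, r_f):
    """t accumulated over the same stretch, as registered at asymptotic infinity."""
    return quad(lambda r: 1.0 / (f(r) * np.sqrt(1.0 - f(r))), r_f, r_i, limit=200)[0]

r_i = 20.0 * M
for eps in (1e-1, 1e-2, 1e-3):
    r_f = r_g + eps
    print(f"r - r_g = {eps:7.0e}:  tau = {proper_time(r_i, r_f):8.3f}"
          f"   t = {coordinate_time(r_i, r_f):10.3f}")
With the kinematics in hand, we return to the choice of field modes.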
Here, we consider the simplest test field: a massless spin-0 Klein-Gordon field, minimally coupled to the spacetime geometry, described by ∇_μ∇^μΦ = 0 <cit.>. Given the spherical symmetry of the spacetime and the presence of a timelike Killing vector ∂_t, we have Φ=1/rY_l(θ,ϕ)ψ(t,r), with Y_l denoting spherical harmonics and l representing the multipole number. The radial part of the solution, after neglecting the angular dependence (l=0), satisfies the following Schrödinger-like wave equation (-∂^2 /∂ t^2+∂^2 /∂ r_*^2)ψ(t,r)=V(r)ψ(t,r). Here, r_* denotes the Regge-Wheeler tortoise coordinate, a useful parameter for describing the propagation of test fields in black hole geometries, defined by <cit.> r_* =∫dr/f(r) =∫dr/1+r^2/2α(1-√(1+8α M/r^3)), where we utilized Eq. (<ref>). Additionally, V(r) represents the effective potential experienced by the field, often describing scattering effects in black hole spacetimes <cit.>. However, given our focus on the simplest scenario possible, as also demonstrated in Refs. <cit.>, V(r) can be neglected. One approach to achieve this is by assuming that the frequency ν of the field mode is sufficiently large, enabling it to surmount the potential barrier imposed by the spacetime. Consequently, the field mode simplifies to ψ(t,r)=exp[iν(t- r_*)]. This represents a normalized outgoing field mode with frequency ν, as observed by an asymptotic infinity observer, qualifying as a Boulware field state. The ingoing field modes generated propagate towards the boundary at the black hole horizon and are lost. The Boulware field mode described above is an approximate field state obtained by neglecting V(r) and assuming ν to be very large. This assumption serves as one of the initial conditions required for the existence of HBAR emission <cit.>. Generally, in the context of black holes, multiple vacuum states are utilized due to the absence of a unique vacuum state in curved spacetime. This leads to various notions of vacuum states, such as the Unruh vacuum, Hartle-Hawking vacuum <cit.>, and others. In principle, there should be an infinite number of possible vacuum states due to the violation of Poincaré invariance in curved spacetimes <cit.>. In contrast, for Minkowski space, where the field satisfies Poincaré invariance, the vacuum state remains same for all inertial observers. In our scenario, the choice of the Boulware vacuum state arises because the observations are made by an asymptotic observer, for whom the Boulware field state is most appropriate. In this context, no Hawking radiation is detected by the observer. Moreover, the black hole is assumed to be entirely enclosed by a Casimir boundary, which effectively prevents any potential Hawking quanta from mixing with HBAR flux <cit.>. This distinction ensures that HBAR emission is fundamentally different from Hawking radiation. Additionally, we have excluded l=0 modes for simplicity. However, considering a smaller ν such that V(r) ≠ 0 would lead to the emergence of scattering effects, potentially necessitating the inclusion of greybody factors <cit.>. Nevertheless, we argue that such inclusions would lead to the deviation from the primal essence of HBAR emission, which occurs under specific boundary conditions as emphasized in Refs. <cit.>. § DETECTOR RESPONSE As discussed in the preceding section, the field is in a Boulware vacuum state, ensuring that no Hawking radiation is observed by the asymptotic observer. 
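In practice, the only geometric input this outgoing mode requires is the tortoise coordinate, which is defined up to an additive constant that only contributes a global, physically irrelevant phase. A minimal numerical rendering (an illustration only, with arbitrary sample parameters) is:
[language=Python]
import numpy as np
from scipy.integrate import quad

M, alpha = 1.0, 0.4
r_g = M + np.sqrt(M**2 - alpha)          # exterior horizon

def f(r):
    return 1.0 + r**2 / (2.0 * alpha) * (1.0 - np.sqrt(1.0 + 8.0 * alpha * M / r**3))

def r_star(r, r_ref=10.0 * M):
    """Tortoise coordinate int dr/f, fixed to vanish at r_ref; it runs to minus
    infinity logarithmically as r -> r_g."""
    return quad(lambda x: 1.0 / f(x), r_ref, r, limit=200)[0]

def boulware_mode(t, r, nu):
    """Outgoing Boulware mode exp[i nu (t - r_*)] seen by the asymptotic observer."""
    return np.exp(1j * nu * (t - r_star(r)))
With the mode fixed, the detector-field coupling can now be written down.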
By neglecting the angular dependence of the field modes, the detector-field interaction Hamiltonian can be expressed as follows <cit.>: Ĥ(τ) =ħ g [â_νψ[t(τ),r(τ)]+H.C.][σ̂(τ)e^-iωτ+H.C.]. Here, â_ν is the annihilation operator for the field modes, σ̂ is the detector lowering operator, and H.C. denotes the Hermitian conjugate. Here, g is a detector-field coupling parameter indicating the strength of the interaction and can be taken as a constant for a massless Klein-Gordon field (spin-0). Assuming that the detector is initially in the ground state |b⟩, the probability that it transitions to an excited state |a⟩ with the emission of a field quantum of frequency ν is given by Γ_ exc=1/ħ^2|∫dτ ⟨ 1_ν,a|H(τ)|0,b⟩|^2. Utilizing time-dependent perturbation theory, such a process is typically prohibited in quantum optics due to energy conservation principles. However, in non-inertial frames influenced by acceleration and gravity, these virtual processes can occur owing to counter-rotating terms in the Hamiltonian <cit.>, as exemplified by the Unruh effect <cit.>. By employing Eq. (<ref>) and performing some additional straightforward computations, Eq. (<ref>) can be reexpressed as Γ_ exc =g^2|∫dτ ψ^*(t(τ),r(τ))e^iωτ|^2 =g^2|∫dr (dτ /dr) ψ^*(r)e^iωτ|^2. Simplifying further, we arrive at Γ_ exc = g^2|∫_∞^r_ gdr exp[iν{t(r)- r_*(r)}]1/√(r^2/2α( √(1+8α M/r^3)-1))exp[iω{2 r √(√(1+8 α M/r^3)-1)tan ^-1(√(√(1+8 α M/r^3)-1)/√(2))-2 √(2) r/3 √(r^2 (√(1+8 α M/r^3)-1)/α)}]|^2, which results in a complex expression involving nested integrals with respect to t(r) and r_*. It's important to note that the limits of integration correspond to the detector's trajectory from r = ∞ to r = r_ g, the horizon of the black hole. Thus, from Eqs. (<ref>) and (<ref>), we derive: t(r)=-∫_∞^r_ gdr /[1+r^2/2α(1-√(1+8α M/r^3))] √(r^2/2α( √(1+8α M/r^3)-1)), r_* =∫_r_ g^∞dr /1+r^2/2α(1-√(1+8α M/r^3)). Consider now the substitution r = r_ gz, where dr = r_ gdz. Using this transformation of variables, we may rewrite t(r) in Eq. (<ref>) as follows: t(z)=-∫_∞^1dz r_ g/[1+r_ g^2z^2/2α(1-√(1+8α M/r_ g^3z^3))] √(r_ g^2z^2/2α( √(1+8α M/r_ g^3z^3)-1)). A further substitution of the form x=z-1, such that z=x+1, yields t(x)=∫_0^∞dx r_ g/[1+r_ g^2(x+1)^2/2α(1-√(1+8α M/r_ g^3(x+1)^3))] √(r_ g^2(x+1)^2/2α( √(1+8α M/r_ g^3(x+1)^3)-1)). One can follow a similar calculation for r_*, arriving at r_*(x) =∫_0^∞dx r_ g/1+[r_ g(x+1)]^2/2α(1-√(1+8α M/[r_ g(x+1)]^3)) . After deploying all the relevant quantities in Eq. (<ref>), we derive the following final expression for the detector excitation: Γ_ exc = g^2 r_ g^2|∫_0^∞dx exp[iν{t(x)- r_*(x)}]𝒢/√((r_ g[x+1])^2/2α( √(1+8α M/(r_ g[x+1])^3)-1))|^2, where 𝒢= exp[iω{2 r_ g(x+1) √(√(1+8 α M/(r_ g[x+1])^3)-1)tan ^-1(√(√(1+8 α M/(r_ g[x+1])^3)-1)/√(2))-2 √(2)[r_ g(x+1)]/3 √((r_ g[x+1])^2 (√(1+8 α M/(r_ g[x+1])^3)-1)/α)}]. This represents the primary outcome of our investigation. The numerical integral in (<ref>) is notably intricate, demanding a careful approach for its accurate computation. To achieve that, in what follows we deploy the numerical integration capabilities of the Mathematica symbolic math package for performing all required calculations. The figures presented in Fig. <ref> were generated using optimized settings. § RESULTS AND DISCUSSIONS Based on the preceding analysis, the two-level Unruh-DeWitt detector, operating in the Boulware vacuum state, registers detections while in free fall (inertial). 
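Purely for illustration, a rough Python rendering of these quadratures is given below (the figures rely on Mathematica with carefully tuned settings; here the parameter values are arbitrary, infinity is replaced by a finite outer radius, and a uniform grid is used, which is too crude to resolve the rapid phase oscillations very close to the horizon).
[language=Python]
import numpy as np

M, alpha, omega, g = 1.0, 0.4, 2.0, 1.0        # arbitrary sample parameters
r_g = M + np.sqrt(M**2 - alpha)                # exterior horizon
R_OUT, EPS, N = 60.0 * M, 1e-3, 200_000        # outer cutoff, horizon offset, grid size

def f(r):
    return 1.0 + r**2 / (2.0 * alpha) * (1.0 - np.sqrt(1.0 + 8.0 * alpha * M / r**3))

# Radial grid running inward from R_OUT towards the horizon.
r = np.linspace(R_OUT, r_g + EPS, N)
fr = f(r)

def cumulative(integrand):
    """Cumulative trapezoidal integral along the inward grid, starting at R_OUT."""
    steps = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r)
    return np.concatenate(([0.0], np.cumsum(steps)))

tau   = cumulative(-1.0 / np.sqrt(1.0 - fr))           # d tau/dr = -1/sqrt(1 - f)
t     = cumulative(-1.0 / (fr * np.sqrt(1.0 - fr)))    # d t/dr   = -1/(f sqrt(1 - f))
rstar = cumulative(1.0 / fr)                           # r_* up to an additive constant

def excitation(nu):
    """g^2 |int dr (dtau/dr) psi^*(t(r), r) exp(i omega tau)|^2 on the grid.
    Overall constant phases and signs drop out of the squared modulus."""
    integrand = np.exp(1j * (omega * tau - nu * (t - rstar))) / np.sqrt(1.0 - fr)
    return g**2 * np.abs(cumulative(integrand)[-1])**2

for nu in (0.5, 1.0, 2.0, 4.0):
    print(f"nu = {nu:4.1f}:  Gamma_exc ~ {excitation(nu):.3e}")
With this numerical picture in mind, we return to the central observation: the freely falling detector clicks even though the field is in the Boulware state.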
This observation appears to challenge established field-theoretic concepts associated with the Hawking-Unruh effect. Specifically, there is no emission of Hawking radiation in the Boulware state as observed from asymptotic infinity, nor does the Unruh effect manifest for inertial detectors in the Minkowski vacuum. However, HBAR emission from detectors operates on different principles <cit.>. While it shares similarities with Hawking radiation, such as the thermal nature of the emitted flux and the associated Bekenstein-Hawking entropy-area correspondence, there are also distinct characteristics. Notably, HBAR emission involves the evolution of field modes in pure states and includes phase correlations between them. These aspects naturally relate to the black hole information paradox <cit.>. In Fig. <ref>, we present the detector excitation probability, Γ_ exc, plotted as a function of the emitted radiation frequency, ν. The impact of the GB coupling parameter, α, is depicted in Figs. <ref>(a) and <ref>(b) for positive and negative values of α, respectively. Fig. <ref>(c) illustrates how the detector transition frequency, ω, influences Γ_ exc, while Fig. <ref>(d), shown on a log-log scale, highlights the behavior of Γ_ exc near the origin and its convergence at higher frequencies. It is important to note that our interpretations and analyses are based on numerical estimations detailed in the preceding sections. These figures provide a comprehensive view of the radiative characteristics under consideration, elucidating the role of α and the detector's transition frequency in shaping Γ_ exc. From all plots, one of the prominent features observed is the thermal nature of the HBAR radiation flux, characterized by a Bose-Einstein (BE) distribution. This observation leads us to conclude that 4D Einstein-Gauss-Bonnet (EGB) gravity does not alter the thermal nature of the flux, consistent with earlier findings <cit.> in the context of Einstein gravity. This characteristic mirrors the thermal emission observed in Hawking radiation from pure black holes with asymptotically flat geometries. It is noteworthy that for the so-called “dirty” black holes beyond the Kerr-Newman family, such as in the de Sitter case, there exists the possibility of observing a nonthermal spectrum <cit.>. The detector excitation probability Γ_ exc, as observed in Fig. <ref>(a), decreases with increasing positive values of α and increases with negative values of α. As previously discussed, positive α reduces the size of the black hole horizon [see Fig. <ref>(b)], leading to the conclusion that smaller black holes emit less radiation flux compared to larger ones. This reasoning can similarly be applied to negative values of α. It is crucial to emphasize that in the limit α→ 0, depicted in Fig. <ref>(a), the scenario converges to that of the pure Schwarzschild black hole. The attenuation and augmentation of particle production can be conceptually grasped as follows. Particles generated within black hole spacetimes, as in Hawking radiation, experience backreaction due to the gravitational tidal forces exerted by the black hole. This backreaction diminishes the intensity of the radiation. Tidal effects in black holes stem from their surface gravities, which are directly related to their horizon radii. Specifically, for a black hole with a horizon radius r_g, the surface gravity varies inversely with the square of r_g. 
This relationship implies that larger black holes have smaller surface gravities and correspondingly weaker tidal effects, and conversely, smaller black holes exhibit stronger tidal effects. In the context of positive α, the black hole horizon size decreases monotonically, leading to stronger surface gravity and tidal effects compared to a Schwarzschild black hole (α = 0). Consequently, particles generated within such spacetimes experience heightened backreaction, impeding their propagation. This results in fewer particles escaping to asymptotic infinity, thereby reducing the intensity of the particle spectrum, as illustrated in Fig. <ref>(a). Conversely, for negative values of α, the black hole horizon radius increases, indicating reduced backreaction and tidal forces. This condition allows more particles to escape from the black hole spacetime, leading to an enhancement in the radiation flux, as depicted in Fig. <ref>(b). This constitutes the primary finding of our study, distinguishing 4D EGB gravity from Einstein GR. To investigate the influence of the detector transition frequency ω on the radiation intensity, we plot Γ_ exc against ω in Fig. <ref>(c). The graphs clearly demonstrate a decrease in radiation intensity as ω increases. This behavior aligns with the principles of the Unruh effect <cit.> and can be understood in terms of energy conservation: higher detector transition frequencies require more energy to excite the detector, resulting in a lower excitation probability, and vice versa. Furthermore, to gain insight into the spectrum's behavior at low and high frequencies (ν), we reexamined Γ_ exc from Fig. <ref>(a) using a log-log scale. It is evident that the spectrum exhibits a finite Bose-Einstein (BE)-type distribution near the origin where ν→ 0. As ν increases, the spectrum converges and exhibits a thermal tail, characteristic of a BE or Planckian distribution. In the meantime, it is pertinent to consider the testable implications of this study. We can draw insights from analog gravity systems <cit.>, which have been actively utilized over the last few decades to explore quantum field-theoretic phenomena in curved spacetimes. These systems have provided valuable analogs for understanding effects such as Hawking radiation <cit.>, the Unruh effect <cit.>, and Parker particle generation in expanding spacetimes <cit.>. Most of these setups, whether based on condensed matter or quantum optical systems, are closely tied to Casimir physics involving moving boundaries. This connection is particularly relevant to our study, which is largely inspired by these concepts. Recently, condensed matter systems have been utilized to explore phenomena extending beyond particle generation in exotic backgrounds, including applications to fluid/gravity correspondence <cit.>. Looking forward, there is potential for future tabletop experiments to simulate black hole horizons resembling those in 4D EGB gravity <cit.> or to investigate HBAR radiation scenarios <cit.>, where our findings could offer valuable insights. § CONCLUSION AND OUTLOOK The exploration of theories beyond Einstein gravity has evolved in parallel with general relativity (GR) itself. These modified or extended gravity theories aim to tackle fundamental cosmological issues such as cosmic acceleration, singularities, and dark matter. Among these models, Einstein-Gauss-Bonnet (EGB) gravity stands out, predicting higher-curvature corrections to the Einstein-Hilbert action. 
These corrections arise either in higher dimensions or through additional field couplings to the gravitational action. Interestingly, these contributions also emerge in the low-energy effective description of heterotic string theory. The 4D EGB theory represents a novel gravitational model that has sparked intense debate since its inception several years ago. This model predicts the presence of a Gauss-Bonnet (GB) term within 4D spacetime, which would otherwise not contribute to the latter's geometry. Its ability to provide a nontrivial contribution is achieved through a redefinition (rescalling) of the GB parameter <cit.>. Importantly, this theory circumvents the Lovelock theorem and sidesteps Ostrogadsky instability, ensuring that the resulting gravitational dynamics remain quadratic. The theory has been scrutinized across various phenomenological fronts. In this paper, we examined the quantum radiative properties of a nonrotating, uncharged black hole in 4D Einstein-Gauss-Bonnet (EGB) gravity using a Casimir-type configuration. The black hole, surrounded by a reflecting mirror, induced accelerated field modes from the Boulware vacuum state. We analyzed the interaction of these field modes with a freely falling two-level Unruh-DeWitt detector, which exhibited characteristic clicking behavior akin to the Unruh effect. The spectrum detected by the detector follows a Bose-Einstein (BE) distribution, with a notable dependence on the GB parameter α. By examining both positive and negative values of α, we studied their influence on the radiation intensity emitted by the detector. We observed that radiation intensity diminishes when α is positive. This reduction is attributed to the shrinking of the black hole size caused by positive α. Conversely, for negative α, we observed an increase in radiation intensity. The reduction or augmentation of the radiation flux is examined in relation to a pure Schwarzschild black hole, where the limit α→ 0 is considered. Additionally, we observed that the transition frequency of the detector reduces the profile of particle creation due to the high energy needed for its excitation, consistent with the standard predictions of the Unruh effect. Finally, the spectrum is finite near the origin and monotonically converges at the high end of the frequency ranges, yielding the distinctive thermal tail characteristic of a Bose-Einstein or Planckian distribution. Our work provides an opportunity to explore various aspects of 4D EGB gravity by incorporating different energy-matter distributions around the simplest black hole model possible. Moreover, exploring other types of detector-field couplings could yield valuable insights into the nature of field configurations within the context of 4D EGB gravity. These and similar questions constitute promising extensions of this work, which we plan to pursue in the future. apsrev4-1
http://arxiv.org/abs/2407.02403v1
20240702162144
Face Reconstruction Transfer Attack as Out-of-Distribution Generalization
[ "Yoon Gyo Jung", "Jaewoo Park", "Xingbo Dong", "Hojin Park", "Andrew Beng Jin Teoh", "Octavia Camps" ]
cs.CV
[ "cs.CV", "cs.AI" ]
Y. Jung et al. Northeastern University AiV Co. Anhui University Hanwha Vision Yonsei University Face Reconstruction Transfer Attack as Out-of-Distribution Generalization Yoon Gyo JungEqual contribution1 Jaewoo Park12 Xingbo Dong3 Hojin Park4 Andrew Beng Jin Teoh5 Octavia Camps1 July 8, 2024 ==================================================================================================================== § ABSTRACT Understanding the vulnerability of face recognition systems to malicious attacks is of critical importance. Previous works have focused on reconstructing face images that can penetrate a targeted verification system. Even in the white-box scenario, however, naively reconstructed images misrepresent the identity information, hence the attacks are easily neutralized once the face system is updated or changed. In this paper, we aim to reconstruct face images which are capable of transferring face attacks on unseen encoders. We term this problem as Face Reconstruction Transfer Attack (FRTA) and show that it can be formulated as an out-of-distribution (OOD) generalization problem. Inspired by its OOD nature, we propose to solve FRTA by Averaged Latent Search and Unsupervised Validation with pseudo target (ALSUV). To strengthen the reconstruction attack on OOD unseen encoders, ALSUV reconstructs the face by searching the latent of amortized generator StyleGAN2 through multiple latent optimization, latent optimization trajectory averaging, and unsupervised validation with a pseudo target. We demonstrate the efficacy and generalization of our method on widely used face datasets, accompanying it with extensive ablation studies and visually, qualitatively, and quantitatively analyses. The source code will be released. § INTRODUCTION With the increasing deployment of face recognition systems in security-critical environments, threat actors are developing sophisticated attack strategies over various attack points, where one of the major threats is face reconstruction attacks <cit.>. The primary goal of face reconstruction attacks is to create fake biometric images that resemble genuine ones from the stored biometric templates which are then used to bypass the system. Previous works have mostly focused solely on attacking the target (seen) encoder, i.e., using these fake biometric images to bypass the same system. However, transfer attack scenarios, where these fake biometric images are used to bypass other unseen systems (Fig. <ref>a middle) are not discussed enough. They are potentially more perilous than common attacks as they can break into a wide range of face recognition systems. Formally, we define Face Reconstruction Transfer Attacks (FRTA) as successfully reconstructing a face image that can substitute a real face image on unseen encoders, as illustrated in Fig <ref>a. To state our problem in a rigorous and tractable framework, we formulate this task to reconstruct a face image which matches the original image in identity by a finite number of unseen face encoders given only a single encoder and a feature embedded through this encoder (section <ref>). However, existing works do not consider transfer attacks in their designs thoroughly. To solve FRTA effectively, we first devise a novel out-of-distribution (OOD) oriented FRTA framework that reformulates the attack as a problem of generalization of loss function over OOD of network parameters. 
Our FRTA task falls to the standard OOD generalization category in that the loss function needs to be optimized by one variable and generalized with respect to the other instantiated from unseen distributions. We postulate a white-box scenario where a single encoder and a template feature embedded through this encoder are given and face reconstruction is achieved by optimizing the data input in a way to minimize feature reconstruction loss. Instead of directly updating the input image, we adopt a generative model and update its latent which outputs an image(Fig. <ref> step 1). However, reconstructing the face with naive latent optimization is likely to suffer from underfitting with poor latent optimizations (Fig. <ref>b red histogram). To address this challenge, we introduce an Averaged Latent Search with Unsupervised Validation through pseudo target (ALSUV) framework, which is motivated by the OOD generalization concept. In this OOD-oriented approach, we 1) optimize multiple latents concurrently by 2) employing latent averaging and 3) searching for the most optimal generalized sample through unsupervised validation using a validation encoder. Multiple latent optimization prevents poor optimization by selecting the well-optimized sample close to the target(Fig. <ref>b blue histogram). However, this potentially causes overfitting to the seen encoder (Fig. <ref>c coral bars). Therefore, we average latents throughout optimization trajectories which provides a flatter loss surface (Fig. <ref>) and leads to OOD generalization <cit.>. Additionally, we adopt an unsupervised validation search with pseudo target, which incorporates a surrogate validation encoder to search for the best generalizing sample in validation encoder space. Overall, latent averaging and unsupervised validation alleviate overfitting to seen encoder as shown in Fig. <ref>c (turquoise bar) where rank and performance are notably correlated. Contributions The contributions of our work are summarized as: * We address FRTA as a threat that can potentially bypass a wide range of unseen face recognition systems. We rigorously formulate the reconstruction problem as an OOD generalization problem and enhance the attack performance with a multiple latents optimization strategy. To address overfitting, we introduce the OOD-oriented ALSUV, which includes latent averaging and unsupervised validation using pseudo targets from a validation encoder. * Our method achieves state-of-the-art in face reconstruction transfer attack tested on a subset of LFW, CFP-FP, and AgeDB datasets with 6 different types of face encoders in terms of conventionally used success attack rate and top-1 identification rate where we are the first to attempt. * We comprehensively examine our method through ablation study, hyperparameter analysis, and evaluate the image quality. Also, we conduct additional experiments to analyze the effect of optimizing multiple latents, latent averaging, and unsupervised validation. § RELATED WORKS §.§ Face Reconstruction from Features NBNet <cit.> first pioneered face reconstruction from the template by neighborly deconvolution, but the results are substandard for both quality and performance. <cit.> projects features into the latent space of a pre-trained StyleGAN2 <cit.> to generate fine-grained resembling images. The results are qualitatively decent, but often contain different identities as shown in Fig. <ref>. 
DiBiGAN <cit.> presents a generative framework based on bijective metric learning and pairs features with face images one-to-one. These methods offer fast sample generation after training but require extensive face datasets and time for training a new network. <cit.> samples varying random Gaussian blobs iteratively and combining the blobs as the shape of a face. It requires only a few queries and no prior knowledge such as dataset, but shows results with low quality. <cit.> reconstructs faces using similarity scores based on eigenface with soft symmetry constraints, generative regularization, and multi-start policy to avoid local minimas. However, the reconstructed images show severe noise as shown in Fig. <ref>. These works consume less time to generate few samples than training a new network but takes longer for large scale generations. Methods introduced above show promising results when tested with seen encoders, but performance drastically falls with unseen encoders. Recently, a couple of works <cit.> considered FRTA scenarios in their work. <cit.> suggests a genetic algorithm-based approach along with an attack pipeline to impersonate the target user. Reconstructed images are high quality, but evolutionary algorithms rely on random mutation and selection processes which might get trapped in local optima or fail to generalize well due to the limited exploration-exploitation trade-off inherent in their design. <cit.> suggests query efficient zeroth-order gradient estimation with top k initialization search followed by ensembling. This approach typically iteratively adjusts latent representations to minimize reconstruction errors but may suffer from overfitting to specific characteristics of the seen encoder. <cit.> trains a network which maps features to the 𝒲^+ latent space of StyleGAN3 to learn the 𝒲^+ distribution space based on a WGAN framework, however, the GAN frameworks are prone to mode collapse. §.§ Out-of-Distribution Generalization Generalization is one of the most important tasks in deep learning models especially when it comes to unseen OOD circumstances. Searching for flat minima is one of the main stems of research to achieve generalization where <cit.> establishes a strong connection between the flatness of the loss surface and generalization in deep neural networks. Weight averaging<cit.> ensembles the trajectories of non-linear function parameters during training to seek well generalized flat minima point. <cit.> gathers information from separately trained models, <cit.> stochastically averages a single model with cyclic learning rate, <cit.> aggregates from a dense trajectory, and <cit.> collects from several different training policies. Our task focuses on generalizing reconstructed face over unseen encoders, hence, we adopt the core principles from these works and adequately modify them for our work. Pseudo label has been used for generalization tasks where label information is scarce such as domain adaptation<cit.>. They can be made by mixing both samples and labels<cit.>, ensembling labels from augmented samples <cit.>, or confidence prediction with sliding window voting followed by confidence-based prediction<cit.>. The key components for pseudo labels are sufficiently high confidence <cit.> and adequate regularization to prevent over-confidence<cit.>. Our method requires selecting the best generalizing input(latent) in an unsupervised manner. Hence, we migrate the idea of previous works and discover how to find the proper pseudo target for our task. 
§ FACE RECONSTRUCTION TRANSFER ATTACK AS OOD GENERALIZATION We first clearly define the problem of FRTA, and we show that face reconstruction transfer attack is in fact an OOD generalization problem. §.§ Problem Formalization The FRTA can be formalized as an algorithm 𝒜 that must return a solution vector 𝒜(θ_seen) = x^* to the following optimization objective: max_x min_θ∈Θ sim(E_θ(x), v_θ), where v_θ indicates a true target v_θ=E_θ(x_real) corresponding to an encoder E_θ. The nature of FRTA poses a constraint that the attacker can access to the seen encoder E_θ_seen only. The encoder parameter space Θ includes both seen and unseen encoder networks exposed to attacks. Hence, the objective requires that the attack must be transferrable. One approach to the attack is by using a face image generator G. By substituting x=G(z) in the above equation, one maximizes the objective in terms of z instead of x as: max_z min_θ∈Θ sim(E_θ(G(z)), v_θ). §.§ FRTA As OOD Generalization Given that both E_θ and G are multi-layer perceptron instances, we show that the FRTA can be formulated as an OOD generalization problem. To this end, we first formalize the OOD generalization as follows: Definition. OOD generalization on the domain 𝒟 in the parameter space Θ is to solve for an algorithm 𝒜 that, given a seen dataset, returns a solution parameter 𝒜(D_seen) = θ^* that minimizes min_θmax_D ∈𝒟 L(θ; D), where L is a loss function defined as L(θ; D) = 1/| D |∑_(x,y) ∈ D l(f_θ(x), y). Here, f_θ is an MLP parametrized by θ, and l is a sample-wise loss function on a label-data pair. Theorem. Define f_z by f_z(θ) = E_θ( G(z)), and let D^*_seen={ (θ_seen, v_θ_seen) }, 𝒟^* = {{ (θ, v_θ) } : θ∈Θ}, and l(f_z(θ), v_θ) = - sim(f_z(θ), v_θ). Then, f_z is an MLP, and the FRTA algorithm 𝒜 on Θ is an OOD generalization algorithm 𝒜^* on the domain 𝒟^* in the parameter space 𝒵. The theorem is proved (in Supp. <ref>) by observing the duality between data and parameter; in MLP, data can be viewed as parameter, and vice versa. §.§ Averaged Latent Search with Unsupervised Validation with Pseudo Target(ALSUV) Inspired by the above interpretation, we tackle FRTA by means of OOD generalization techniques by defining the similarity as a loss function. Thereupon, we propose ALSUV with pseudo target, which is an integrated approach of OOD generalization on the latent. The latent search mechanism of ALSUV are decomposed as follows: (1) multiple latent optimization, (2) latent averaging throughout optimization trajectories, and (3) unsupervised validation with the pseudo target. §.§.§ Multiple Latent Optimization In order to avoid the underminimization problem shown in Fig. <ref>b and to generate candidates for our following unsupervised validation method, we initialize multiple n latent vectors and optimize them in a parallel manner: {z_i}_i=1^n min ∑_i=1^n L( z_i; E_seen )= -∑_i=1^nsim(E_seen(G(z_i)), v_seen) where E_seen = E_θ_seen v_seen = v_θ_seen. The given loss function is minimized by a gradient-based update using the standard optimizer such as Adam<cit.> or SGD. Fig. <ref> validates that updating with multiple latents significantly improves the minimization of the loss. Moreover, we find that this simple multiple latent optimization can more effectively escape from poor local minima than iterating with complicated learning rate scheduler (Tab. <ref>). 
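As a concrete illustration, the parallel search over n latents can be sketched as follows, where G denotes the frozen StyleGAN2 generator acting on 𝒲^+ latents, E_seen the frozen seen encoder, and v_seen the leaked template feature; the tensor shapes, learning-rate schedule, and defaults are placeholders of this sketch rather than the released implementation.

```python
import torch
import torch.nn.functional as F

def optimize_latents(G, E_seen, v_seen, n=100, steps=100, lr=0.1,
                     latent_shape=(14, 512), device="cuda"):
    """Optimize n W+ latents in parallel to maximize similarity to the seen feature."""
    z = torch.randn(n, *latent_shape, device=device, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    sched = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=[steps // 2], gamma=0.1)
    trajectory = []                      # iterates kept for latent averaging later
    for _ in range(steps):
        feats = E_seen(G(z))             # (n, d) embeddings of the generated faces
        loss = -F.cosine_similarity(feats, v_seen.expand_as(feats), dim=-1).sum()
        opt.zero_grad()
        loss.backward()                  # gradients flow only into z; G and E_seen stay frozen
        opt.step()
        sched.step()
        trajectory.append(z.detach().clone())
    return z.detach(), trajectory
```

Retaining the trajectory of iterates is what enables the latent averaging and the unsupervised validation described next.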
§.§.§ Latent Averaging Avoiding the poor underminimization issue alone is not sufficient for effective generalization of the attack under FRTA since the acquired solution may overfit to seen encoders as shown in Fig. <ref>c. To effectively improve the attack rate on the unseen encoders, we borrow the idea from OOD generalization<cit.> and apply averaging the solution latent vectors over the optimization trajectory: z_i = 1/T_0∑_t=T-T_0^T z^(t)_i where z^(t)_i are the latent vectors acquired at step t of the optimization of Eq. (<ref>), T is the total number of optimization steps, and T_0 is the size of steps to average the latent vectors. Due to the equivalence between FRTA and OOD generalization (Thm. <ref>), latent averaging can improve the generalization of the similarity maximization under unseen encoder networks and corresponding unseen targets. Fig. <ref> evidences our hypothesis, indicating that averaging the latents improves the attack rate on the unseen encoders. Moreover, Fig. <ref> shows that latent averaging improves the transfer attack by smoothening the loss surface of our objective, validating the equivalence between OOD generalization and FRTA. §.§.§ Unsupervised Validation with Pseudo Target Despite its effectiveness, multiple latent optimizations with averaging can still suffer overfitting issues as no explicit information of unseen encoders is exposed to the optimization. Therefore, we acquire more explicit information of unseen encoders by utilizing a surrogate validation encoder E_val = E_θ_val. Particularly, we propose to validate our reconstruction criterion Eq. (<ref>) over the surrogate validation encoder. One remaining issue however is the absence of validation target vector v_val. To resolve it, we construct the pseudo target: v_val= 1/k_top∑_i=1^k_top E_θ_val(G(z_(i))), by averaging the reconstructed features from the top k latent vectors of the attack objective in Eq. (<ref>). Namely, z_(i) are ordered with respect to the similarity to the seen feature L(z_(1)) ≤…≤ L(z_(n)). The pseudo target may not be fully precise approximation of validation target v_val from the real ground truth image. However, we show that it improves the attack under FRTA by mitigating the overfitting issue of the latent optimization (Fig. <ref>c). Moreover, we find that the average of multiple top k reconstructed features better serves as an alternative of v_val than the single top 1 feature as the latter may have been overfitted to the seen target. §.§.§ Full Objective Overall, the ALSUV algorithm searches the solution latent z^* by unsupervised validation z^* = min_z_i d(E_val(G( z_i)), v_val), subject to z_i ∈ S within the search space S of multiple latent-averaged vectors defined in Eqs. (<ref>) and (<ref>) based on a pseudo target v_val defined in Eq. (<ref>). The reconstruction target feature of ALSUV avoids under-minimization of latent optimization, thereby effectively attacking the seen encoder. On the other hand, it achieves robust FRTA based on latent averaging and validation against pseudo target with the surrogate validation encoder. The full algorithm is given in Supp. <ref>. 
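A compact sketch of this selection procedure is given below, assuming the trajectory returned by the multi-latent search above, the frozen generator G, the seen encoder E_seen with target v_seen, and a frozen surrogate validation encoder E_val; the defaults mirror the hyper-parameters reported in the experiments, and cosine distance is used throughout.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def alsuv_select(trajectory, G, E_seen, E_val, v_seen, t=70, k_top=10):
    # 1) Latent averaging over the last t optimization steps (flatter minima).
    z_avg = torch.stack(trajectory[-t:], dim=0).mean(dim=0)            # (n, ...)
    imgs = G(z_avg)

    # 2) Rank the averaged candidates by similarity to the seen target.
    feats_seen = E_seen(imgs)
    sims_seen = F.cosine_similarity(feats_seen, v_seen.expand_as(feats_seen), dim=-1)
    top_idx = sims_seen.topk(k_top).indices

    # 3) Pseudo target: mean validation-space feature of the top-k candidates.
    feats_val = E_val(imgs)
    v_val = feats_val[top_idx].mean(dim=0, keepdim=True)

    # 4) Unsupervised validation: return the candidate closest to the pseudo target.
    dists = 1.0 - F.cosine_similarity(feats_val, v_val.expand_as(feats_val), dim=-1)
    best = int(dists.argmin())
    return z_avg[best], imgs[best:best + 1]
```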
§ EXPERIMENTS The experiments section includes: 1) performance evaluation against existing methods; 2) comprehensive component ablation and hyperparameter variation to show effectiveness; 3) analysis of components by varying setups, comparing parallel latent optimization to serial optimization, visualizing loss surface effects with and without latent averaging, using different validation encoders for unsupervised validation, and assessing image quality visually and quantitatively. §.§ Configuration We use StyleGAN2<cit.> trained with FFHQ-256 for the generative model, denoted as G(·), and latents are optimized in the 𝒲^+ space. Both G(·) and target encoders E_seen(·) are frozen while optimizing latents. We adopt Adam<cit.> optimizer with 100 steps where the learning rate starts from 0.1 and is divided by 10 at iteration 50. Our method involves three hyperparameters: n=100, the number of latents; t=70, length of trajectory for latent averaging; and k_top=10, the number of samples used for unsupervised validation. We use pytorch<cit.> for all experiments on a single Nvidia RTX 2080ti GPU. §.§ Datasets and Networks We use the LFW, CFP-FP, and AgeDB-30 datasets, three widely used verification datasets with distinct characteristics. For LFW and AgeDB-30, we compare reconstructed samples with every other positive sample except the one used for reconstruction, resulting in 3,166 and 3,307 pairs of comparisons, respectively. CFP-FP consists of frontal and profile images; we reconstruct only the frontal images and follow the challenging frontal-profile verification protocol, resulting in 130 pairs in total (not all sampled identities were used in the given protocol). For identification, we set up generated samples as probes and every image as the gallery, resulting in 13,233, 2,000, and 5,298 samples in the gallery, respectively, consisting of 5,749, 500, and 388 identities for LFW, CFP-FP, and AgeDB-30, respectively. For CFP-FP, we use generated frontal images for probes and profile images for the gallery, which is very challenging. We randomly sample 200 non-overlapping identities each from 3 datasets. We use encoders based on various backbones which are equipped with different classification heads and distinct datasets. Specific configurations are shown in Supp. <ref> Tab. <ref>. For the validation encoder, we use Swin-T as the default. §.§ Evaluation Metrics and Details <cit.> introduces Type I and Type II SAR where Type I compares the generated face with the ground truth target, while Type II compares with different images from the same identity. SAR measures the ratio of generated samples passing the positive verification test where thresholds are specific to type of datasets and face encoders. Since Type I is relatively easy, we only report Type II performance which is more challenging. We also evaluate identification rate, where we retrieve the top 1 sample from gallery composed of real samples and probes consisting of generated samples. We include target samples in the gallery which is still challenging. In ablation, we also report the SAR@FAR which is the success attack rate at thresholds of FAR(=1e-4, 1e-3, 1-e2). §.§ Comparison with Previous Works We compare our method with state-of-the-art feature-based face reconstruction methods including NBNet <cit.>, LatentMap <cit.>, Genetic <cit.>, GaussBlob <cit.>, Eigenface <cit.>, FaceTI <cit.>, and QEZOGE <cit.>. For FaceTI <cit.>, we used StyleGAN2 and our face encoders for reproduction. As shown in Tab. <ref> and Tab. 
<ref>, overall previous methods are effective on seen encoders, but the performance drastically drops on unseen encoders. EigenFace <cit.>, FaceTI <cit.>, and QEZOGE <cit.> show better performance compared with other works, however, tend to show lower performance compared with our method and the results highly fluctuates depending on the type of seen encoder. In contrast, our method outperforms for both seen and unseen cases. Our SAR and identification rate results are close to real face images on seen encoders while outperforming previous works with a large margin on unseen encoders for every dataset while depending less on the type of seen encoder. Additionally, results tested on unseen encoders shown in Tab. <ref> and Tab. <ref> present that our method achieves OOD generalization on unseen encoders successfully. §.§ Analysis §.§.§ Ablation of Components We analyze the effect of each component of our method on the overall performance. In Tab. <ref>, we present results for n=1, 20, 50, 100 with and without latent averaging and unsupervised validation. We use k_top=10 and t=70 for default when applied. In addition to SAR, we report SAR@FAR(=1e-4, 1e-3, 1-e2) scores which are similar to the concept of TAR@FAR widely used in T/F evaluations which facilitates a more thorough analysis of our work. Results in Tab. <ref> reveal that using more initial samples and applying ALSUV yields the best performance. Compared to optimizing a single latent without ALSUV, our method increases SAR by 46.14% for LFW, 12.23% for CFP-FP, and 22.62% for AgeDB-30 on average. §.§.§ Analysis of Hyperparameters In this section, we investigate the effects of each hyperparameter related to our method. Our method has 3 parameters: number of latents n, size of latent average t, and number of top k samples for unsupervised validation k_top. We evaluate results by controlling these hyperparameters tested on unseen encoders on LFW dataset. First of all, we consider 1 ≤ n ≤ 100 with (blue) and without (red) ALSUV. As shown in Fig. <ref>a, the n plays a crucial role as SAR increases from 48.66% to 92.06% (red line). SAR increases remarkably, especially within the range of 1≤ n≤ 10, and starts plateauing from 20≤ n. In addition, we can observe the evident influence of latent averaging and unsupervised validation from the gap between the two lines. Finally, improvement starts plateauing from n ≥ 20 as shown in Fig. <ref>a and Tab. <ref>. Therefore, we suggest n=20 as a cost-efficient trade-off point between high performance and computation. Secondly, we compare the effect of the size of latent average t for t∈ {1, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100} where we fix n=100 and test with and without unsupervised validation. As shown in Fig. <ref>b, the overall performance is highest in the interval 70≤ t≤90. Compared with t=1 which is without latent averaging, latent averaging improves the performance from 92.06% to 93.46% without unsupervised validation, and 94.24% to 95.13% with unsupervised validation. Interestingly, we found that any size of latent averaging benefits when unsupervised validation is not applied(red line). Finally, we investigate the effect of the number of top k samples k_top for unsupervised validation. With n=100, we vary k_top∈ {1, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100} with latent average t=70(blue line) and without latent average(red dashed line). Even without latent averaging, unsupervised validation increases performance from 92.06% to 94.24%. Applying latent averaging increases even more to 95.13%. 
As shown in Fig. <ref>c, the best range of k_top is 10 ≤ k_top≤ 30 where the optimal value is k_top=30 and practical value is k_top=10. Overall, we suggest k_top as 10% to 30% of the number of latents n. §.§.§ Number of Latents and Optimization Steps We compare optimizing 100 latents optimized 100 steps each (denoted as 100 × 100, without latent average and unsupervised validation) and a single latent optimized 10,000 steps with cyclical learning rate<cit.>. We train on LFW dataset where results are shown in Tab. <ref>. Our method outperforms with 95.13% in average SAR while serial optimization shows 60.28% resulting in a 34.85% performance gap. Despite the total steps of optimizations being identical, results highlight the importance of using multiple latents which prevents falling into poor local minima than interacting with complex learning rate scheduler as the difference of average cosine similarity shown in the last row of Tab. <ref> is significant. §.§.§ Latent Averaging and Loss Surface In this section, we thoroughly analyze the effect of latent averaging. First of all, we visualize the loss surface of a sample in the seen encoder by adding perturbation in two random axes and using the method suggested in <cit.>. Additionally, we quantitatively compare the flatness of the optima point by measuring the first eigenvalue and trace of the Hessian matrix for every sample in LFW and the generalization by loss value on unseen encoders. Fig. <ref>a shows the shape of the loss surface where the z-axis is the loss value 1-sim(·) where 1 is added to set the minimum loss value to 0. Applying latent average significantly improves the flatness visually as shown. In addition, the statistics of the first eigenvalue and trace of the Hessian matrix of latents with latent averaging have much lower values which indicate the curvature at the minima point is flatter while the loss value for unseen encoders is lower. §.§.§ Unsupervised Validation with Pseudo Target Unsupervised validation utilizes the feature space of a surrogate validation encoder to search for a better generalizing sample instead of only using the seen encoder. Hence, we compare the distance between the validation encoder's top 1 and seen encoder's top 1 against the ground truth feature in validation space (Fig. <ref>a) and examine whether this is relevant to generalizing to unseen encoder space (Fig. <ref>b). We also measure the distance between pseudo targets against ground truth in validation space to verify its efficacy as a target. We use LFW dataset and examine all 6 encoders as seen and unseen between each other as Tab. <ref> with Swin-T as validation encoder and acquire the statistics of cosine distance. As shown in Fig. <ref>a the pseudo target is closest to the ground truth feature from real images with average cosine distance 0.473 followed by our unsupervised validation's top 1 with 0.386 and seen encoder's top1 with 0.309 in the validation space. According to this result, the pseudo target which is the closest to ground truth might be the best option, but unfortunately, the corresponding latent is inaccessible which is why we use the sample closest to the pseudo target. This aspect is connected to the unseen encoder space in Fig. <ref>b where our method shows average cosine distance 0.29 higher than seen encoder's top 1 0.241. Furthermore, we examine results by changing the type of validation encoder. 
We use each encoder used in our work as a validation encoder and examine the performance improvement only for unseen cases where seen, unseen, and validation encoders do not overlap. Results shown in Fig. <ref>c presents that using any type of validation encoder improves performance and the improvement shows positive correlation with the general performance of validation encoder. More results are shown in Supp. <ref> Tab. <ref>. §.§.§ Image Quality We conduct qualitative and quantitative analyses of generated images. For quality evaluation, we compare a few samples from LFW and AgeDB in Fig. <ref> and CFP-FP in Supp. <ref> Fig. <ref>. For quantitative evaluation, we adopt face-specific quality metrics SER-FIQ <cit.> and CR-FIQA <cit.>. NBNet, GaussBlob, and EigenFace show poor image quality visually with artifacts and quantitatively low image quality metrics. Meanwhile, despite the decent quality, LatentMap, Genetic, and FaceTI present wrong identities compared with the target images. On the other hand, QEZOGE and our method show decent image quality visually, quantitatively, and content-wise as shown in Tab. <ref>. § CONCLUSION In this paper, we have presented a framework for face reconstruction transfer attacks. We devised our method inspired by out-of-distribution generalization to generalize our generated sample to unseen face encoders and propose ALSUV. ALSUV is instantiated by combining multiple latent optimization, latent averaging, and unsupervised validation with the pseudo target. We demonstrate that our approach surpasses previous methods in FRTA by showing high SAR and identification rate across various unseen face encoders. Our thorough analysis shows the effectiveness of our method inspired by OOD generalization. Furthermore, we hope our work alerts the security risk posed by FRTA, and emphasizes the awareness to mitigate potential threats. splncs04 § PROOF TO THEOREM Theorem. Define f_z by f_z(θ) = E_θ( G(z)), and let D^*_seen={ (θ_seen, v_θ_seen) }, 𝒟^* = {{ (θ, v_θ) } : θ∈Θ}, and l(f_z(θ), v_θ) = - sim(f_z(θ), v_θ). Then, f_z is an MLP, and the FRTA algorithm 𝒜 on Θ is an OOD generalization algorithm 𝒜^* on the domain 𝒟^* in the parameter space 𝒵. (MLP) Since MLP is a composition of MLPs, it suffices to prove that a layer of f_z is MLP. To this end, we show that the MLP σ ( 𝐖𝐱 + 𝐛) with input 𝐱 and parameters (𝐖, 𝐛) is an MLP with input (𝐖, 𝐛) and parameters 𝐱. Let 𝐰_i and b_i denote the i-th row of 𝐖 and 𝐛, respectively, for i=1,…, r where r is the row dimension of 𝐖. Observe, σ ( 𝐖𝐱 + 𝐛) = σ( diag(𝐱) 𝐰_i + b_i 1) where diag(𝐱) is the diagonal matrix whose diagonal elements are x_i, and 1 is a vector whose all entries are 1. Both f_1((𝐖, 𝐛); 𝐱) = diag(𝐱) 𝐰_i and f_2((𝐖, 𝐛); 𝐱) = b_i 1 are MLPs with input (𝐖, 𝐛) and parameters 𝐱, hence their sum and activattion are also MLPs with the same aspect, completing the proof. (Equivalence) We show the equivalence between FRTA and OOD generalization. To see this, first define L(z; D^*) := 1/|D^*|∑_(θ, v_θ) ∈ D^* l(f_z(θ), v_θ) Then, observe that 𝒜^* (D^*_seen) := min_z max_D^* ∈𝒟^* L(z; D^*) = min_z max_D^* ∈𝒟^* l(f_z(θ), v_θ) = min_z max_D^* ∈𝒟^* - sim (f_z(θ), v_θ) = max_z min_θ∈Θ sim (f_z(θ), v_θ) =: 𝒜(θ_seen) where the second and fourth equations hold due to D^* = { (θ, v_θ) }, completing the proof. § SUPPLEMENTARY TO METHOD §.§ Algorithm of the full method The full algorithm of our method is given in Algorithm. <ref>. 
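For reference, the pieces sketched in the main text can be assembled into a single batched search loop; `optimize_latents` and `alsuv_select` refer to the illustrative sketches given earlier, and all names and defaults remain placeholders rather than the released code.

```python
def alsuv_reconstruct(G, E_seen, E_val, v_seen,
                      n=100, steps=100, lr=0.1, t=70, k_top=10, device="cuda"):
    # All n latents are stacked into a single tensor, so one optimizer step
    # updates every candidate in parallel.
    _, trajectory = optimize_latents(G, E_seen, v_seen, n=n, steps=steps,
                                     lr=lr, device=device)
    z_best, x_best = alsuv_select(trajectory, G, E_seen, E_val, v_seen,
                                  t=t, k_top=k_top)
    return z_best, x_best
```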
In this algorithm, [z_i]_i=1^n is the vector concatenation of the vectors z_i, which is to parallelize the update of z_i's. § SUPPLEMENTARY RESULTS Fig. <ref> shows the result of reconstructed images from CFP-FP dataset of each baselines, our method and ground truth images. Tab. <ref> ablates pseudo target of unsupervised validation by using different types of target for searching the top 1 reconstructed sample. We compare SAR of 3 different cases: using the seen feature vector as target in the seen encoder space, using validation encoder and the pseudo target in the validation encoder space, and using the unseen encoder and the feature vector from real image in the unseen encoder space where the last works as a reference to upper bound performance. § SUPPLEMENTARY SETUP Tab. <ref> shows the configuration of each face encoders used in all experiments.
http://arxiv.org/abs/2407.02272v1
20240702140159
Aligning Human Motion Generation with Human Perceptions
[ "Haoru Wang", "Wentao Zhu", "Luyi Miao", "Yishu Xu", "Feng Gao", "Qi Tian", "Yizhou Wang" ]
cs.CV
[ "cs.CV", "cs.GR" ]
^†Lead Authors. Learning Paradigms and Modelling Methodologies for Digital Twins in Process Industry Michael Mayr1 Georgios C. Chasparis1 Josef Küng2 July 8, 2024 ==================================================================================== § ABSTRACT Human motion generation is a critical task with a wide range of applications. Achieving high realism in generated motions requires naturalness, smoothness, and plausibility. Despite rapid advancements in the field, current generation methods often fall short of these goals. Furthermore, existing evaluation metrics typically rely on ground-truth-based errors, simple heuristics, or distribution distances, which do not align well with human perceptions of motion quality. In this work, we propose a data-driven approach to bridge this gap by introducing a large-scale human perceptual evaluation dataset, , and a human motion critic model, , that capture human perceptual preferences. Our critic model offers a more accurate metric for assessing motion quality and could be readily integrated into the motion generation pipeline to enhance generation quality. Extensive experiments demonstrate the effectiveness of our approach in both evaluating and improving the quality of generated human motions by aligning with human perceptions. Code and data are publicly available at <https://motioncritic.github.io/>. § INTRODUCTION Human motion generation is an important emerging task <cit.> with wide-ranging applications, including augmented and virtual reality (AR/VR) <cit.>, human-robot interaction <cit.>, and digital humans <cit.>. Achieving high realism in generated human motions is crucial, necessitating naturalness, smoothness, and plausibility. However, current generation methods still fall short of these goals, often producing subpar results. Meanwhile, designing appropriate evaluation metrics that accurately reflect these qualities remains a significant challenge. This complexity stems from the highly non-linear and articulated nature of human motion, which must adhere to physical and bio-mechanical constraints while also avoiding visual artifacts. Effective metrics would not only facilitate the objective comparison of generated results but also have the potential to enhance generation models by addressing their shortcomings. Existing evaluation metrics typically rely on error with pairing ground truth (GT) motion, simple heuristics, or on distribution distance with real motion manifold. The error-based metrics cannot fully reflect the performance because GT is only one reasonable possibility. The heuristics fall short in comprehensively representing motion quality. For instance, foot-ground contact metrics <cit.> fail to penalize twisting arm motions that violate bio-mechanical constraints. It is also infeasible to manually define all the human motion rules in a handcrafted manner. Meanwhile, distribution distance metrics like Fréchet Inception Distance (FID) <cit.> do not operate on an instance level but rather assess overall distribution similarity. Consequently, they cannot identify implausible motions or provide direct supervision signals to guide the generation of higher-quality motions. Some studies <cit.> also indicate that FID correlates poorly with user studies due to the misalignment between its distance measurement and human perception of motion quality. Consequently, existing automatic evaluation metrics cannot effectively reflect or replace subjective user studies, hindering objective evaluation and comparison. 
In light of this, we advocate the need for automatic evaluation aligned with human perceptions. Firstly, humans are the primary audience and interaction partners for motion generation, making their perception crucial for evaluating motion quality. Secondly, the human brain possesses specialized neural mechanisms for processing biological motion <cit.> and is sensitive to even slightly unnatural motions <cit.>. Therefore, we explore the possibility of directly learning perceptual evaluations from humans using a data-driven approach. This method could bridge the gap between objective metrics and subjective human judgments, providing a more accurate assessment of motion quality. First, we carefully curate a human perceptual evaluation dataset named , which contains 52590 pairs of human preference annotations on generated motions. Next, we train a human motion critic model, , that learns motion quality ratings from the collected dataset. Our critic model significantly outperforms previous metrics in terms of alignment with human perceptions. Notably, it generalizes well across different data distributions. In addition to motion evaluation, we further propose to utilize the critic model as a direct supervision signal. We demonstrate that can be seamlessly integrated into the generation training pipeline, effectively improving motion generation quality by increasing alignment with human perceptions with few steps of finetuning. We summarize our contributions as follows: 1) We contribute , a large-scale motion perceptual evaluation dataset with manual annotations. 2) We develop which models human perceptions of motions through a data-driven approach. Extensive experiments demonstrate its superiority as an automatic human-aligned metric of motion quality. 3) We show that the proposed motion critic model could effectively serve as a supervision signal to enhance motion generation quality. Remarkably, it requires only a small number of fine-tuning steps and can be easily integrated into existing generator training pipeline in a plug-and-play manner. § RELATED WORK §.§ Human Motion Generation Human motion generation is a pivotal task in computer vision, computer graphics, and artificial intelligence, aiming to produce natural and realistic human pose sequences <cit.>. This field has seen substantial advancements with the rise of deep generative models <cit.>. Previous works have explored text-conditioned motion generation that transform narrative descriptions into coherent pose sequences <cit.>, audio-conditioned methods that synchronize movements with rhythmic cues <cit.>, and scene-conditioned generation that integrates environmental contexts to produce contextually appropriate motions <cit.>. Despite significant progress, current mainstream data-driven kinematic motion generation methods sometimes produce unnatural motions that are jittery, distorted, or violate physiological and physical constraints. These issues could be attributed to the inherent uncertainty of the task, limitations of supervision signals, and dataset noises. Furthermore, evaluating generated human motions presents additional challenges. Conventional metrics such as error and FID fall short in capturing the nuanced details essential for producing lifelike and visually appealing movements <cit.>. These measures can overlook critical aspects like the fluidity and biomechanical plausibility that are fundamental to human perceptual judgments. 
Given these challenges, it is imperative to develop metrics that are more closely aligned with human perception to more accurately evaluate and enhance the motion generation results. §.§ Human Perception Modeling Pioneer work <cit.> collect human perceptual similarity dataset and propose to utilize distance in deep features as perceptual metrics. Some works <cit.> in language models to explore aligning model performance with human intent by first training a reward model, then performing reinforcement learning with the reward model. Recent works  <cit.> also explore utilizing human feedback to improve visual generation results. For example, ImageReward <cit.> propose a reward feedback learning method (ReFL) to to align text-to-image generative models with human judgements. In human motion generation, however, few studies have explored modeling human feedbacks, even though the generated motion quality is highly relevant to human perceptions. One recent work, MoBERT <cit.>, constructs a dataset of human ratings for generated motions. Our work differs from MoBERT in that we collect real human data on a scale tens of times larger (52.6K vs 1.4K) and use comparisons instead of ratings, which is more robust. We design the critic model to learn ratings from these comparisons automatically. Additionally, our approach could not only evaluate motion quality but also effectively improve motion generation results. § : A LARGE-SCALE DATASET OF MOTION PERCEPTUAL EVALUATION We build to capture real-human perceptual evaluations with large-scale and diverse human motion sequences. Hence, we implement a rigorous and efficient pipeline for data collection and data annotation. We also design a concensus experiment in order to examine the perceptual consistency across various human subjects. §.§ Motion Data Collection We first collect generated human motion sequence pairs for subsequent perceptual evaluation. We utilize state-of-the-art diffusion-based motion generation method MDM <cit.> and FLAME <cit.> to generate human motion sequences parameterized by SMPL <cit.>. For MDM <cit.>, we utilize the action-to-motion model trained on HumanAct12 <cit.> and UESTC <cit.> respectively. For FLAME <cit.>, we utilize the text-to-motion model trained on HumanML3D <cit.>. For each group of 4 motion sequences to be annotated, we use the same condition (text prompt or action labels) while sampling different random noises. This makes the motions similar in content while still having distinguishable differences, thereby making it easier to annotate the choices. §.§ Human Perceptual Evaluation Human perceptual evaluation is the core component of , therefore we implement a rigorous pipeline to ensure annotation quality. We first introduce the question design of the perceptual evaluation, then describe the protocol for conducting the evaluation. Finally, we present a statistical analysis of the evaluation results. §.§.§ Question Design Our perceptual evaluation is designed in the form of multiple-choice questions as selection is generally easier and more robust than directly rating <cit.>. Given a group of four motion sequence options, we instruct the annotators to select the best candidate that is most natural, visually pleasing, and free of artifacts. Specifically, we summarize the typical failure modes of the generated motions (, jittering, foot skating, limb distortion, penetration, ) and explicitly require the annotators to exclude these options. 
We provide detailed guidance with task descriptions and representative video examples to better communicate the goal to the annotators. The full guidance is presented in the supplementary materials. While the optimal choice can be decided unambiguously in most cases, there are situations where the decision can be challenging. Therefore, we add two additional options, “all good” and “all bad”, so that the annotator is not required to pick one of the motions in these cases, thereby improving overall annotation quality. Results indicate that these cases account for a small portion of the total data. We exclude these cases from our subsequent experiments. In total, we set six options for each entry: four motion candidates plus “all good” and “all bad”. §.§.§ Protocols To ensure the quality of perceptual evaluation results, our annotation process consists of annotator training, annotation, and quality control. We recruit 10 annotators to perform the perceptual evaluation. Before the evaluation begins, we provide annotation guidelines to help the annotators understand the task and maintain consistent criteria. The annotators must pass a pilot test before starting the formal annotation to ensure they correctly understand the annotation requirements. Additionally, we conduct a perceptual consensus experiment to assess whether the annotation pipeline is suitable for our dataset, as discussed in <Ref>. Finally, we implement a quality control process where the annotated data is reviewed by an expert quality inspector. During the annotation process, we continuously monitor the quality of each batch of data. For each batch, we randomly sample 10% of the data for quality inspection. The consistency between the sampled data and the expert's annotations must exceed 90%; otherwise, the entire batch will be re-annotated. §.§ Analysis In total, we collect annotations for 18260 multiple-choice questions covering 73040 unique motions, significantly surpassing previous work <cit.> (1400 motions). We further investigate the following two questions: * Based on our experimental setup, can the subjects confidently select the suitable options from the choices provided? * Is there a significant difference in perceptual preferences among different subjects, or are they well-aligned? For the first question, we calculate the proportion of cases where a choice could not be made (including “all good” and “all bad”), and find a total of 418 such groups (2.29%). The result indicates that most of the time subjects can make a definite judgment, demonstrating the validity of our protocol design. For the second question, we conduct a perceptual consensus experiment where all 10 subjects perform perceptual evaluation independently on 312 groups of randomly selected data. We calculate their pairwise and overall consistency in choices. <Ref> show that for most questions (82.37%), all 10 subjects make the unanimous decision. <Ref> reveals that all 10 subjects exhibit high pairwise agreement (90%). These results indicate a high level of consistency in perceptual judgments of human motion among different human subjects. This not only validates the rationality of our perceptual evaluation pipeline but also inspires us to train machine learning models to emulate this consistent judgment capability. § : ADVANCING MOTION GENERATION WITH PERCEPTUAL ALIGNMENT Based on , we develop a human motion critic model, , to emulate the perceptual judgment capabilities of human subjects regarding human motion. 
We first present the problem formulation and training approach of the critic model, and then explain how to use the critic model for optimizing motion generation. §.§ Problem Formulation We formulate the problem as follows: given an input human motion sequence 𝐱, we assume there is an implicit human perception model ℋ that rates the motion quality ℋ(𝐱), where a higher rate indicates better quality. We aim to build a computational critic model 𝒞 that best aligns with ℋ. Since ℋ is not explicitly available, we take a data-driven approach. We obtain the human perceptual evaluation dataset 𝒟 containing multiple pairs of samples (𝐱^(i), 𝐱^(j)). Our training objective is to train the model 𝒞 using the dataset 𝒟 so that it approximates the human perception model ℋ as closely as possible. Specifically, we want the model prediction 𝒞(𝐱^(i)) > 𝒞(𝐱^(j)) if and only if ℋ(𝐱^(i)) > ℋ(𝐱^(j)). Based on the Bradley-Terry model <cit.>, the overall training objective could be written as maximizing the joint probabilities that the model 𝒞 makes judgments consistent with ℋ for each pair of samples in the dataset 𝒟: max_𝒞 𝔼_(𝐱^(i), 𝐱^(j)) ∼𝒟[ logσ((𝒞(𝐱^(i)) - 𝒞(𝐱^(j))) · (ℋ(𝐱^(i)) - ℋ(𝐱^(j)))) ], where σ is the sigmoid function. §.§ Human Motion Critic Model In practice, we represent human motion by 𝐱∈ℝ ^ L × J × D where L denotes the sequence length, J denotes the number of body joints, and D denotes parameter dimensions. We implement the critic model 𝒞 as a neural network that maps the high-dimensional motion parameters to a scalar s. We draw pairwise comparison annotations from the collected dataset, where 𝐱^(h) is the better instance and 𝐱^(l) is the worse. The perceptual alignment loss is thus given by: ℒ_Percept = - 𝔼_(𝐱^(h), 𝐱^(l)) ∼𝒟[ logσ( 𝒞(𝐱^(h)) - 𝒞(𝐱^(l)) ) ]. §.§ Motion Generation with Critic Model Supervision Additionally, we explore to utilize the learned human perceptual prior of 𝒞 not only for evaluating generated motions, but also improving them. We demonstrate that our motion critic model could be integrated into state-of-the-art diffusion-based motion generation approaches with ease by using MDM <cit.> as an example. The forward diffusion is modeled as a Markov noising process {𝐱_t}_t=0^T where 𝐱_0 is drawn from the data distribution, and q(𝐱_t | 𝐱_t-1) = 𝒩(√(α_t)𝐱_t-1,(1-α_t)I), where α_t∈ (0,1) are constant hyper-parameters. When α_t is small enough, it's reasonable to approximate 𝐱_T ∼𝒩(0,I), allow sampling 𝐱_T from random noise to begin our denoising process. r0pt [H]0.62 Given an MDM model ℳ with pre-trained parameters θ_0, we fine-tune to improve its alignment with a pre-trained critic model 𝒞. We develop a lightweight perceptual-aligned fine-tuning approach based on ReFL <cit.>. Notably, in order to utilize the critic model in a plug-and-play manner, we keep the MDM training step and objective ℒ_MDM unchanged. Instead, we simply add one optimization step with critic model supervision in each training iteration as shown in <Ref>. Specifically, we sample a Gaussian noise 𝐱_T and inference until 𝐱_t where t ∈ [T_1, T_2] is randomly selected in later denoising steps. Then, a single-step denoising is performed to predict 𝐱_0’ from 𝐱_t. Based on the predicted motion 𝐱_0’, we compute the critic score s = 𝒞(𝐱_0’), which is used to compute the motion critic loss: ℒ_Critic = 𝔼_y_i ∼𝒴[ϕ(𝒞(𝐱_0')], where ϕ(s) = -σ(τ - s)) is a critic-to-loss mapping function, τ is a constant for shifting the critic value, σ is the sigmoid function. 
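A single critic-supervised update can be sketched as follows. Here `diffusion.sample_to` and `diffusion.predict_x0` stand for assumed helper routines (placeholders, not the released MDM interface) that run the sampler down to step t and perform the single-step prediction of 𝐱_0'; the critic-to-loss map is read as ϕ(s) = σ(τ − s), so that minimizing the loss raises the critic score, and the standard ℒ_MDM step of each training iteration is kept unchanged and is not repeated here.

```python
import torch

def critic_finetune_step(diffusion, critic, cond, optimizer,
                         t_range=(700, 900), tau=12.0, lam=1e-3):
    """One extra optimization step with critic supervision (sketch)."""
    t = int(torch.randint(t_range[0], t_range[1] + 1, (1,)))
    with torch.no_grad():                         # roll the sampler from x_T down to x_t
        x_t = diffusion.sample_to(t, cond)
    x0_pred = diffusion.predict_x0(x_t, t, cond)  # single-step prediction, keeps gradients
    critic_loss = torch.sigmoid(tau - critic(x0_pred)).mean()
    loss = lam * critic_loss                      # the KL regularizer introduced below would be added here
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(critic_loss)
```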
We further introduce a Kullback-Leibler (KL) divergence regularization to prevent ℳ from moving substantially away from the conditional motion generation task: ℒ_KL = 𝔼_y_i ∼𝒴[ D_KL(p(𝐱_0') p(𝐱_0')) ]. The overall fine-tuning loss is given by ℒ_FT = ℒ_MDM + λℒ_Critic + μℒ_KL. where λ and μ are constants for loss balancing. The detailed algorithm workflow is shown in <Ref>. § EXPERIMENT §.§ Implementation Details Critic Model. We train our critic model using the MDM subset in . We convert each multiple-choice question into three ordered preference pairs, which results in 46761 pairs for training and 5829 pairs for testing. We parameterize motion sequences with SMPL <cit.>, including 24 axis-angle rotations, and global root translation. We implement the critic model with DSTformer <cit.> backbone with 3 layers and 8 attention heads. We apply temporal average pooling on encoded motion embeddings followed by an MLP with a hidden layer of 1024 channels to predict a single scalar score. We train the critic model for 150 epochs with a batch size of 64 and a learning rate starting at 2e-3, decreasing with a 0.995 exponential learning rate decay. Fine-tuning. We use MDM <cit.> model trained on HumanAct12 <cit.> as our baseline, which utilizes 1000 DDPM denoising steps. We load the checkpoint trained for 350000 iterations and fine-tune for 800 iterations, with a batch size of 64 and learning rate 1e-5. We fine-tune with critic clipping threshold τ=12.0, critic re-weight scale λ=1e-3, and KL loss re-weight scale μ=1.0. We set the step sampling range [T_1, T_2] = [700,900]. §.§ as Motion Quality Metric We first evaluate whether the proposed critic model could serve as an effective motion quality metric. Specifically, we are interested in the following research questions: * How does align with human perceptual evaluations? * Could generalize to different data distributions? To investigate the first question, we evaluate the performance of our critic model on a held-out test set and compare it with existing motion quality metrics as follows: * Error-based metrics, including Root Average Error (Root AVE), Root Absolute Error (Root AE), Joint Average Error (Joint AVE), and Joint Absolute Error (Joint AE). These metrics involve directly computing the distance between the generated motion and a pairing GT with the same condition. * Heuristic metrics, including acceleration <cit.>, Person-Ground Contact <cit.>, Foot-Floor Penetration <cit.>, and Physical Foot Contact (PFC) <cit.>. These metrics does not compare against GT; instead, they implement intuitive rule-based evluations. For example, PFC models the relationship between center of mass acceleration and foot-ground contact. * Learning-based metrics. Prior work MoBERT <cit.> proposes to evaluate motion quality with a motion feature extractor and SVR Regression. Note that distribution-based metrics (FID) could not compare quality of individual motion sequences, and the comparison can be found in subsequent experiments. For each metric, we calculate the percentage they align with GT annotations (accuracy) and also their probabilistic distribution distance with GT annotations (log loss). We use the softmax function to convert the scores to probabilities (taking the opposite before softmax for metrics where smaller is better). <Ref> demonstrates that our critic model significantly outperforms previous metrics. 
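For concreteness, this pairwise protocol can be sketched as follows, where `pairs` iterates over (preferred, non-preferred) motions from the test split and `score_fn` is any of the metrics above (or the critic itself); the softmax over a score pair is exactly the Bradley–Terry probability used to train the critic, and the data layout is an assumption of the sketch.

```python
import numpy as np

def pairwise_alignment(pairs, score_fn, higher_is_better=True):
    """Return (accuracy, mean log loss) of a scalar metric on annotated preference pairs."""
    correct, log_losses = 0, []
    for better, worse in pairs:
        s = np.array([score_fn(better), score_fn(worse)], dtype=np.float64)
        if not higher_is_better:                  # e.g. error- or penetration-based metrics
            s = -s
        p = np.exp(s - s.max())
        p = p / p.sum()                           # softmax over the pair
        correct += int(s[0] > s[1])
        log_losses.append(-np.log(max(p[0], 1e-12)))   # index 0 is the human-preferred motion
    return correct / len(log_losses), float(np.mean(log_losses))
```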
These results not only validate the effectiveness of learning from large-scale human perceptual evaluations but also prove that our critic model can serve as a more comprehensive and robust metric for assessing motion quality. Furthermore, to investigate the second question, we test the critic model on data outside of the training distributions. We collect a standalone test set with a different motion generation algorithm, FLAME <cit.>, and perform perceptual evaluation with a different human subject. Note that this model is trained on a different dataset <cit.> with the model used to generate critic model training data, which means the action categories have large variations. The results in <Ref> further shows that our critic model could well generalize to the new test set, indicating its efficacy in evaluating different generation algorithms and unseen motion contents. Additionally, we test the generalization of our critic model on the real GT motion distribution. <Ref> illustrates the critic score distribution of HumanAct12 <cit.> test set. We group the 1190 GT motions into 5 groups based on their critic scores, evenly distributed from highest to lowest. We compare the average critic score between the groups with distribution-based metric FID and user study. The user study is conducted by comparing motion pairs sampled from each groups and then computing Elo rating <cit.> for each group. <Ref> clearly indicates that the critic score aligns well with human preferences, while FID does not. Notably, we discover that the outliers with small critic values (group V) are indeed artifacts within the dataset. Please refer to the supplementary materials for video results. The results indicate that our critic model can also generalize to the GT motion manifold, even though the model has never been trained on it. It also highlights the potential of using our critic model as a tool for dataset diagnosis (, discover failure modes). §.§ as Training Supervision Furthermore, we investigate whether our critic model can also serve as an effective supervision signal. Specifically, we fine-tune a pre-trained motion generator <cit.> with the proposed framework, and evaluate on HumanAct12 <cit.> test set every 200 steps. Additionally, we conduct a standalone user study by comparing motion pairs generated at different fine-tuning steps and compute the Elo Rating <cit.>. <Ref> reveals that as fine-tuning progresses, the motion quality consistently improves according to the user study, in line with the training objective of increasing the critic score. We also present a visualization comparison in <Ref>. We discover that as fine-tuning progresses, unreasonable human motions such as jittering, twisting, and floating significantly decrease. Please refer to the supplementary materials for video comparisons. The results also demonstrate that our fine-tuning process requires only hundreds of iterations to take effect, significantly improving the perceptual quality of the model. Compared to the 350K pre-training steps, this accounts for only 0.23% of the training cost. This further demonstrates the advantages of our proposed framework in using a perceptually-aligned critic model to fine-tune the motion generation model, not only improving quality but also being lightweight and efficient. § CONCLUSION In conclusion, our work bridges the important gap in human motion generation between objective metrics and human perceptual evaluations by introducing a data-driven framework with and . 
This paradigm not only offers a more comprehensive metrics of motion quality but could also improve the generation results by aligning with human preferences. We hope this work could contribute to more objective evaluations of motion generation methods and results. One limitation of our approach is its primary focus on perceptual metrics without explicitly simulating biomechanical plausibility, which could be explored in future work. Future research could also investigate more fine-grained perceptual evaluation methods to obtain rich human feedback on motion quality like <cit.>. ieee_fullname PART: Appendix § DETAILS ON §.§ Prompt Selection We utilize the prompts from HumanAct12 <cit.>, UESTC <cit.> and HumanML3D <cit.> for generating the motion candidates. Specifically, we use the 12 action labels from HumanAct12 <cit.> (shown in  <Ref>) and the 40 categories of aerobic exercise description from UESTC <cit.> (shown in <Ref>) for the MDM <cit.> model. We randomly select texts from HumanML3D <cit.> test set as prompts for the FLAME <cit.> model. §.§ Annotation Management We recruit 10 annotators for this task, and data entries are randomly allocated to them. We provide detailed guidelines to annotators. We evaluate the annotation result by spot check. We randomly select 10% of all data to inspect the annotation results according to guidelines and calculate the proportion of unqualified data entries. If the unqualified proportion is less than 10%, the results are considered to be acceptable. All the unqualified data entries will be re-annotated. We will update the guidelines during annotation based on spot check feedback, and annotators will study the new guidelines. §.§ Annotation Design We generate four motions from the same prompt for each data entry, as shown in Fig <ref>. The prompts are hidden during the annotation process. Annotators are required to select either the best or the worst motion for data entries generated by MDM <cit.> and FLAME <cit.>. MDM <cit.> exhibits better motion diversity but lacks stability, so annotators are instructed to select the best motion. Conversely, FLAME <cit.> demonstrates better stability but lacks diversity, so annotators are instructed to select the worst motion for these entries. §.§ Annotation Guidance Documentation We provide a detailed annotation document to explain the annotation process. The annotation platform is shown in Fig <ref>. Introduction Each data entry to be annotated consists of four videos, as shown in Fig <ref>. Each video is approximately three seconds long, with all four videos playing simultaneously and concatenated into one video. RequirementsEach set of videos has six options: A, B, C, D, "all are good," and "all are bad." Annotators should select the most natural and reasonable video for each data entry. If one option stands out as the best, select that option. If all actions seem equally good or equally bad, choose "all are good" or "all are bad." Text prompts will be hidden during annotation. Video Examples We provide annotators with examples if what kinds of motions are unnatural and unaccepetable: * Body pose is unnatural, including hands, feet and so on. * Human motion violates physiological constraints. * Human motion is erratic or severely stutters. * Human body collides, such as hands fully embedded into leg. * Human body is severely tilted, to the point of losing balance. * Human body appears to be drifting instead of walking. Examples of these problems are shown in Fig <ref>. 
§ DATA DOCUMENTATION We follow the datasheet proposed in <cit.> for documenting our : * Motivation * For what purpose was the dataset created? This dataset was created to collect human perceptual data on whetner human motions seem natural, and ultimately advance our study of perceptual-aligned metric and finetuning human motion generation model. * Who created the dataset and on behalf of which entity? This dataset was created by Haoru Wang, Yishu Xu, Luyi Miao, Wentao Zhu, Feng Gao and Yizhou Wang with Peking University. * Who funded the creation of the dataset? The creation of this dataset was funded by Peking University. * Any other Comments? None. * Composition * What do the instances that comprise the dataset represent? Each instance contains 4 video of human motions generated from the same prompt by existing motion generation methods <cit.>. * How many instances are there in total? In total, we collect annotations for 18260 multiple-choice questions covering 73K unique motions. * Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set? No, this is a brand-new dataset. * What data does each instance consist of? See <ref> for details. * Is there a label or target associated with each instance? Yes. See <ref>. * Is any information missing from individual instances? No. * Are relationships between individual instances made explicit? Yes. * Are there recommended data splits? Yes, we have separated the whole dataset into MDM-A (motions generated by MDM <cit.> from prompts in HumanAct12 <cit.>), MDM-U (motions generated by MDM <cit.> from prompts in UESTC <cit.> and FLAME (motions generated by FLAME <cit.> from prompts in HumanML3D <cit.>). We provide the recommended data splits by combining MDM-A and MDM-U and randomly splitting them into a training set and a test set at a ratio of 8:1. Data generated by FLAME <cit.> is primarily used as test data for generalization. * Are there any errors, sources of noise, or redundancies in the dataset? No. * Is the dataset self-contained, or does it link to or otherwise rely on external resources (, websites, tweets, other datasets)? The dataset is self-contained. * Does the dataset contain data that might be considered confidential (, data that is protected by legal privilege or by doctor-patient confidentiality, data that includes the content of individuals' non-public communications)? No. * Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety? No. * Does the dataset relate to people? Yes. Our human motion data is generated as body model parameters <cit.>, not from real people, and therefore does not contain biometrics. These data are annotated by human annotators. * Does the dataset identify any subpopulations (, by age, gender)? No. Our human motion data are generated as body model parameters <cit.> with no explicit gender or age. * Is it possible to identify individuals (, one or more natural persons), either directly or indirectly (, in combination with other data) from the dataset? No. Our human motion data are generated by algorithms with commonly used body models. 
* Does the dataset contain data that might be considered sensitive in any way (, data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history)? No. * Any other comments? None. * Collection Process * How was the data associated with each instance acquired? See <ref> for details. * What mechanisms or procedures were used to collect the data (, hardware apparatus or sensor, manual human curation, software program, software API)? We use existing motion generation models to collect videos and require annotators to label them. See <ref> for details. * If the dataset is a sample from a larger set, what was the sampling strategy (, deterministic, probabilistic with specific sampling probabilities)? See <ref> and <ref> for details. * Who was involved in the data collection process (, students, crowdworkers, contractors) and how were they compensated (, how much were crowdworkers paid)? The video data was collected by the authors. The annotations were performed by the workers in DATATANG TECHNOLOGY INC., and the workers were offered a fair wage as per the prearranged contract. See <ref> and <ref> for details. * Over what timeframe was the data collected? The data were collected from 2023 to 2024, and labeled in 2024. * Were any ethical review processes conducted (, by an institutional review board)? No. The dataset raises no ethical concerns regarding the privacy information of human subjects. * Does the dataset relate to people? Yes. Our human motion data are generated as body model parameters <cit.>, not real people. The annotation is done by people. * Did you collect the data from the individuals in question directly, or obtain it via third parties or other sources (, websites)? We obtain raw data from motion generation model. Annotation data are collected by annotators. * Were the individuals in question notified about the data collection? Yes. * Did the individuals in question consent to the collection and use of their data? Yes. * If consent was obtained, were the consenting individuals provided with a mechanism to revoke their consent in the future or for certain uses? Yes. * Has an analysis of the potential impact of the dataset and its use on data subjects (, a data protection impact analysis) been conducted? Not applicable. * Any other comments? None. * Preprocessing, Cleaning and Labeling * Was any preprocessing/cleaning/labeling of the data done (, discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)? Yes, see <ref>. * Was the “raw” data saved in addition to the preprocessed/cleaned/labeled data (, to support unanticipated future uses)? Yes. We provide raw data entries and their annotations respectively. * Is the software used to preprocess/clean/label the instances available? No. The annotation software is the private labeling platform provided by DATATANG TECHNOLOGY INC. . * Any other comments? None. * Uses * Has the dataset been used for any tasks already? No, the dataset is newly proposed by us. * Is there a repository that links to any or all papers or systems that use the dataset? Yes, we provide the link to all related information on our https://motiioncritic.github.io/project page. * What (other) tasks could the dataset be used for? 
This dataset could be used for other research topics, including but not limited to human preference study, human motion study. * Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses? See <ref> for details. * Are there tasks for which the dataset should not be used? The usage of this dataset should be limited to the scope of human motion. * Any other comments? None. * Distribution * Will the dataset be distributed to third parties outside of the entity (, company, institution, organization) on behalf of which the dataset was created? Yes, the dataset will be made publicly available. * How will the dataset be distributed (, tarball on website, API, GitHub)? The dataset will be published on our https://github.com/ou524u/AlignHPcode website with its https://drive.google.com/file/d/1WnBI8UDCINnv1LHAtsNZJ6QY2tRehUdG/view?usp=drive_linkmetadata document. * Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)? We release our benchmark under CC BY-NC 4.0 [<https://paperswithcode.com/datasets/license>] license. * Have any third parties imposed IP-based or other restrictions on the data associated with the instances? No. * Do any export controls or other regulatory restrictions apply to the dataset or to individual instances? No. * Any other comments? None. * Maintenance * Who is supporting/hosting/maintaining the dataset? Haoru Wang is maintaining. * How can the owner/curator/manager of the dataset be contacted (, email address)? ou524u@stu.pku.edu.cn * Is there an erratum? Currently, no. As errors are encountered, future versions of the dataset may be released and updated on our website. * Will the dataset be updated (, to correct labeling errors, add new instances, delete instances')? Yes, if applicable. * If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (, were individuals in question told that their data would be retained for a fixed period of time and then deleted)? Our human motion dataset is generated as body model parameters <cit.>, not real people. No applicable limits on retention of the data and the annotators are aware of the use of data. * Will older versions of the dataset continue to be supported/hosted/maintained? Yes, older versions of the benchmark will be maintained on our website. * If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so? Yes, please get in touch with us by email. * Any other comments? None. § DETAILS ON : AS MOTION QUALITY METRIC Data Pre-processing. Each multiple-choice question is divided into three ordered preference pairs. Motion sequences are parameterized using SMPL <cit.>, which includes 24 axis-angle rotations and one global root translation. Training and Evaluation. We train the critic model from scratch using the DSTformer <cit.> backbone with 3 layers and 8 attention heads on . To ensure robustness, we train our model for multiple times and report the error bars, considering variations such as the random seed across multiple runs. Evaluation results, detailing action-label splits, are presented in the following two tables. Our gets the best results and can robustly score different types of human motions. § DETAILS ON : AS TRAINING SUPERVISION §.§ Fine-tuning Critic Score Clipping. Generally, a higher score indicates better motion quality. 
However, this relationship has an upper limit. During our fine-tuning process, we clip motions with reward scores exceeding a threshold τ when computing gradients before back-propagation. This threshold, determined through a series of comparative experiments, is set at τ = 12.0, approximately the upper bound of ground-truth critic scores. We found that this setting yields the best results. Fine-tuned motion generation models without reward clipping tend to artificially inflate reward scores on a few specific motions, which increases the average score but degrades overall performance. Thus, reward clipping is essential to maintain the integrity and quality of the fine-tuning process. Finetuning Details. Inspired by <cit.>, we observe how the critic score changes over denoising steps to identify the optimal time window for ReFL intercept. As shown in <ref>(A), we set the hyperparameter step sampling range to [T_1, T_2] = [700, 900], where the critic score witnesses a rapid increase. <ref>(B) illustrates the variation in the average critic score of a training batch over the course of fine-tuning steps. The fine-tuning process is stable and quick to take effect. §.§ Results Improved Critic Score. As shown in <ref>, the critic score increases after supervised fine-tuning. This scatter plot collects all data points from the test set, with the critic score of motions before fine-tuning on the x-axis and the critic score of the corresponding motions after fine-tuning on the y-axis. As demonstrated in <Ref>, we first compare results with and without critic model supervision. In the latter case, the original MDM loss is used for continued training without our -based plug-and-play module. The scatter plot clearly indicates that the results with critic model supervised fine-tuning achieve significantly higher scores. The second experiment in <Ref> examines different fine-tuning steps using 800 steps from the first set as a baseline. The results demonstrate that critic model supervised fine-tuning consistently improves the critic score throughout the fine-tuning process. Improved Motion Quality. We conduct an independent user study to compare motion pairs generated at various fine-tuning stages and calculate the Elo Rating <cit.>. <Ref> demonstrates that the quality of motions consistently enhance as fine-tuning advances, as indicated by the user study. This improvement aligns with the training objective of elevating the critic score. We further inspect the change of different metrics during the fine-tuning process in <ref>(B). PFC <cit.> and FID are expected to be negatively correlated with motion quality (the smaller, the better), and and multimodality are expected to be positively correlated (the greater, the better). The results indicate that existing motion quality metrics (FID, PFC) do not adequately reflect human preference, as they poorly correlate with Elo ratings from user studies. Meanwhile, improving the critic score does not necessarily conflict with the multimodality metric, which models the diversity of generated motions. § DETAILS ON USER STUDIES AnnotationWe conduct user studies on GT subsets grouped from HumanAct12 <cit.> and motions generated during finetune steps as discussed in the main text. Our user study platform is shown in Fig <ref>. In user study, one motion pair of two motions are played simultaneously, with their critic scores and text prompts being hidden. Annotators should choose the better motion or choose "Almost the Same" if they can't make a decision. 
We perform the user study on 5 different fine-tuning steps and 5 GT batches grouped from HumanAct12 <cit.>. Win-rates. After annotation, we calculate the win-rates of subset pairs. In the user study, each subset contains the same number of motions. Given a subset pair (A, B), the win-rate is the percentage of motion pairs in which the motion from subset A wins over the motion from subset B in naturalness. We then plot heatmaps of the win-rates over all subsets. Since a match may end in a tie, the sum of the win-rates of the two subsets in a pair (i.e., the values at symmetric positions of the heatmap) may be less than 1. Elo Rating <cit.>. After annotation, we calculate the Elo rating of each subset as follows. Suppose R_A and R_B are the current ratings of the two compared subsets A and B. The expected win rates of subsets A and B, denoted E_A and E_B, are calculated as: E_A = 1/(1 + 10^((R_B - R_A)/400)), E_B = 1/(1 + 10^((R_A - R_B)/400)). The updated ratings of subsets A and B are: R_A' = R_A + K(S_A - E_A), R_B' = R_B + K(S_B - E_B), where K is the rating coefficient (we choose K = 32) and S is the actual score, which is 1 for a win, 0 for a loss, and 0.5 for a tie. We set the initial rating of each subset to 1500.
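A compact implementation of this Elo update, using the same K = 32 and 1500 initialization, is sketched below; the subset names and the example comparison outcomes are hypothetical.

def elo_update(r_a, r_b, s_a, k=32.0):
    # s_a is the realized score of A: 1 for a win, 0 for a loss, 0.5 for a tie
    e_a = 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))
    e_b = 1.0 / (1.0 + 10.0 ** ((r_a - r_b) / 400.0))
    s_b = 0.5 if s_a == 0.5 else 1.0 - s_a
    return r_a + k * (s_a - e_a), r_b + k * (s_b - e_b)

# every subset starts at 1500; ratings are updated once per annotated motion pair
ratings = {name: 1500.0 for name in ["step_0", "step_200", "step_400", "step_600", "step_800"]}
for a, b, outcome in [("step_800", "step_0", 1.0), ("step_200", "step_400", 0.5)]:  # hypothetical results
    ratings[a], ratings[b] = elo_update(ratings[a], ratings[b], outcome)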
http://arxiv.org/abs/2407.01818v1
20240701213540
Predicting public market behavior from private equity deals
[ "Paolo Barucca", "Flaviano Morone" ]
q-fin.CP
[ "q-fin.CP" ]
http://arxiv.org/abs/2407.03217v1
20240703154548
MHNet: Multi-view High-order Network for Diagnosing Neurodevelopmental Disorders Using Resting-state fMRI
[ "Yueyang Li", "Weiming Zeng", "Wenhao Dong", "Luhui Cai", "Lei Wang", "Hongyu Chen", "Hongjie Yan", "Lingbin Bian", "Nizhuan Wang" ]
cs.CV
[ "cs.CV" ]
Impact of planar defects on the reversal time of single magnetic domain nanoparticles Peter M. Derlet July 8, 2024 ===================================================================================== § ABSTRACT Background: Deep learning models have shown promise in diagnosing neurodevelopmental disorders (NDD) like ASD and ADHD. However, many models either use graph neural networks (GNN) to construct single-level brain functional networks (BFNs) or employ spatial convolution filtering for local information extraction from rs-fMRI data, often neglecting high-order features crucial for NDD classification. Methods: We introduce a Multi-view High-order Network (MHNet) to capture hierarchical and high-order features from multi-view BFNs derived from rs-fMRI data for NDD prediction. MHNet has two branches: the Euclidean Space Features Extraction (ESFE) module and the Non-Euclidean Space Features Extraction (Non-ESFE) module, followed by a Feature Fusion-based Classification (FFC) module for NDD identification. ESFE includes a Functional Connectivity Generation (FCG) module and a High-order Convolutional Neural Network (HCNN) module to extract local and high-order features from BFNs in Euclidean space. Non-ESFE comprises a Generic Internet-like Brain Hierarchical Network Generation (G-IBHN-G) module and a High-order Graph Neural Network (HGNN) module to capture topological and high-order features in non-Euclidean space. Results: Experiments on three public datasets show that MHNet outperforms state-of-the-art methods using both AAL1 and Brainnetome Atlas templates. Extensive ablation studies confirm the superiority of MHNet and the effectiveness of using multi-view fMRI information and high-order features. Our study also offers atlas options for constructing more sophisticated hierarchical networks and explains the association between key brain regions and NDD. Conclusion: MHNet leverages multi-view feature learning from both Euclidean and non-Euclidean spaces, incorporating high-order information from BFNs to enhance NDD classification performance. § INTRODUCTION Autism spectrum disorder (ASD) and attention deficit hyperactivity disorder (ADHD) are two typical neurodevelopmental disorders (NDD) facing specific or sometimes overlapping challenges. Individuals with ASD often exhibit early-onset difficulty in communication and reciprocal social interactions alongside the repetitive and restricted sensory-motor behaviors <cit.>. On the other hand, individuals with ADHD often show persistent patterns of inattention, hyperactivity, and impulsivity that interfere with daily functioning or development <cit.>. Currently, the majority of clinical diagnoses of NDD mainly rely on the subjective assessment of abnormal behavior by clinical experts, which may lead to limited accessibility and inconsistent or delayed diagnosis. This highlights the need for more objective computer aided imaging diagnostic methods <cit.>. As a non-invasive imaging modality, functional magnetic resonance imaging (fMRI) has been adopted for the diagnosis of NDD in previous research studies <cit.>. Decoding fMRI data using machine or deep learning can provide valuable insights into the abnormality related to NDD including altered neural activity and brain connectivity that are associated with cognitive and executive function.<cit.>, by which the disease-specific changes within the brain can be revealed <cit.>. 
Abnormalities or dysfunctions in large-scale brain functional networks (BFNs) can be reflected by altered functional connectivity (FC) derived from resting-state fMRI (rs-fMRI) data. <cit.>. Numerous studies have discovered that brain functional impairment of NDD patients is related to abnormal FC between resting-state networks (RSNs) <cit.>. For instance, some studies have found significant differences in the connectivity between the default mode network (DMN) and other brain regions in patients with NDD, which may be associated with the deficits in social behavior and cognitive functions <cit.>. However, most existing studies only focus on whole brain functional network (BFN) analysis and do not consider the hierarchical structure of the spatial topology of BFNs. This whole BFN analysis does not fully capture the complex changes in brain function of NDD patients. BFNs exhibit significant abnormalities not only in whole BFN but also in local sub-networks, which require multi-scale analytical approaches <cit.> for constructing networks with hierarchical structure. Encoding both the Euclidean and non-Euclidean space features based on deep learning can reveal the complementary complex information of the BFNs. The convolutional neural networks (CNNs) can automatically extract the Euclidean space features of BFNs <cit.>, which requires the involvement of local receptive fields encoding the values of connectivity weights of FC. The graph neural networks (GNNs) <cit.> has emerged as an attractive framework for modeling non-Euclidean space features of BFNs due to their powerful topological graph embedding capabilities <cit.>. The integration of CNNs and GNNs allows for comprehensive multi-scale feature extraction, enhancing the ability of deep learning model to generalize across different individuals and improve the NDD prediction performance. However, simply applying CNNs or GNNs only utilizes the first-order features and cannot capture the high-order information in BFNs <cit.>, which limits the ability of generalization and the accuracy of NDD prediction of the deep learning models. In contrast, high-order features are crucial for revealing the complex relationships and indirect connections of the BFNs <cit.>. Capturing the high-order information of BFNs can potentially improve the generalization ability of the model. In this paper, we proposed a novel multi-view high-order network (MHNet), consisting of the Euclidean space features extraction (ESFE) module and non-Euclidean space features extraction (Non-ESFE) module, followed by the feature fusion-based classification (FFC) module. (i) The ESFE module is designed to extract connectivity weights information of FC from rs-fMRI data in the Euclidean space, which contains two sub-modules, namely FC generation (FCG) module and HCNN (high-order CNN) module. FCG constructs the FC matrices using different brain atlases and maps the FC to Euclidean space. HCNN captures the Euclidean space features in FC through 1D-CNN. Specifically, the ESFE utilizes the CNN to learn the regular FC information and uses the high-order pooling (HOP) operator <cit.> to formulate the discriminative and representative local features of BFNs. (ii) The non-ESFE module contains two sub-modules, namely generic internet-like brain hierarchical network generation (G-IBHN-G) module and HGNN (high-order GNN). To construct a comprehensive hierarchical structure of BFNs, we propose the G-IBHN-G module, consisting of Brain-WAN, Brain-MAN, and Brain-LAN components. 
The G-IBHN-G module encompasses all multi-scale hierarchical BFNs. This module combines automatic anatomical labeling 1 (AAL1), Brainnetome atlas, and Yeo's 7-network parcellation <cit.>. This strategy generates multi-scale hierarchical BFNs from rs-fMRI data <cit.>, which is inspired by the internet classified brain hierarchical network (IBHN). Then, the non-ESFE uses residual Chebyshev Networks (ChebNet) model to capture the first-order graph features within the brain hierarchical network and uses the graph high-order pooling (GHOP) operator <cit.> to capture high-order topological features. (iii) Finally, the high-order features from ESFE and non-ESFE modules are further fused in feature fusion-based classification (FFC) module to learn complementary multi-view information for NDD classification. § RELATED WORK This section systematically reviews the relevant works on NDD diagnosis based on fMRI data, examining methods from traditional machine learning to deep learning. §.§ Traditional Machine Learning-based NDD Diagnosis Traditional machine learning methods have been extensively applied in the diagnosis of NDD using fMRI data. These methods typically involve manual feature extraction, feature selection, and classification. Commonly used algorithms include support vector machine (SVM), random forests (RFs), and k-Nearest neighbors (kNN) <cit.>. SVM has been widely used due to its effectiveness in binary classification tasks such as distinguishing the brain patterns of NDD and health control <cit.>. RFs and kNN offer robustness and flexibility in handling noisy and complex data. They are employed to classify ADHD patients using rs-fMRI data, showing promising accuracy and interpretability <cit.>. Traditional machine learning methods heavily depend on the quality of feature extraction and selection, require domain expertise, and is relatively time consuming. Advances like recursive feature elimination and principal component analysis have improved model performance, but the complexity and heterogeneity of NDD still pose challenges, necessitating more advanced techniques <cit.>. §.§ GNN-based NDD Diagnosis GNNs are powerful tools for modeling the topology of BFNs by capturing the complex interactions between brain regions <cit.>. For instance, park et al. <cit.> proposed a deep learning model that utilizes a residual graph convolution network (GCN) with spatio-temporal features extracted from 4D fMRI to improve the classification accuracy of ASD. This study focuses on the dynamic FC between superior temporal sulcus and visual cortex. Integrating multiple layers of GCNs can learn hierarchical representations of brain networks characterizing the multi-scale brain dynamics, by which local and global connectivity patterns can be effectively differentiated between ASD and healthy control <cit.>. Jiang et al. <cit.> introduced a hierarchical GCN for learning graph embeddings from brain networks to predict brain disorders. This study uses rs-fMRI data to construct brain networks as graphs and enhances the prediction accuracy through hierarchical representation learning. A significant advantage of GNNs is their ability to provide interpretable models. By analyzing the learned node features and edge weights, researchers can gain insights into the encoded information of specific brain regions and connections. Li et al. <cit.> proposed the BrainGNN which is an interpretable GNN framework to analyze fMRI and discover neurological biomarkers. 
This method incorporates novel ROI-aware graph convolutional layers and ROI-selection pooling layers to improve the prediction accuracy and interpretability for neurological disorder diagnosis. In addition, GNNs are capable of encoding nuanced alterations of brain connectivity related to NDD according to the study from <cit.>. Yang et al. <cit.> proposed a new method named Pearson's correlation-based spatial constraints representation to estimate the BFNs. In this method, the BFNs were fed into a graph attention network for ASD diagnosis. The development of advanced GNN architectures continues to enhance their applicability and performance, making them a promising tool in the field of NDD diagnosis. §.§ CNN-based NDD Diagnosis CNNs are particularly effective in capturing fine-grained spatial patterns in Euclidean space with local receptive fields <cit.>. Kawaharaa et al. <cit.> proposed the BrainNetCNN to predict NDD from structural brain networks of infants, and introduced a special structure with edge-to-edge, edge-to-node, and node-to-graph convolutional layers which can exploit the topological locality of brain networks. Silva et al. <cit.> developed a CNN-based feature extraction method combining seed correlation, local consistency, and low-frequency amplitude scores for ADHD diagnosis. The 3D CNNs have enhanced capability of capturing volumetric information from fMRI data for NDD diagnosis <cit.>. Mao et al. <cit.> designed a variety of spatiotemporal granularity computing and fusion models, including feature pooling, LSTM, and spatiotemporal convolution to capture the spatiotemporal correlation of rs-fMRI. Although challenges remain, CNNs continue to be a promising tool for advancing the field of neuroimaging based NDD diagnosis <cit.>. § METHOD In this section, we elucidate the architecture of the proposed MHNet for NDD Diagnosis. Figure <ref> shows the MHNet framework which consists of three large modules, and each modules containing several smaller modules: (i) the ESFE module including the components of FCG module and HCNN module; the non-ESFE module including generic G-IBHN-G and three parallel HGNN modules; and FFC module fusing the Euclidean and non-Euclidean features for NDD prediction. §.§ Overview of MHNet As shown in Figure <ref>, the preprocessed rs-fMRI data are transformed into multi-view data through FC generation module and G-IBHN-G module. The transformed data comprise four different views: FC matrix, brain wide area network (Brain-WAN), brain metropolitan area network (Brain-MAN), and brain local area network (Brain-LAN). The Brain-WAN view primarily focuses on the functional connectivity and node features of the entire brain. The Brain-MAN view emphasizes the connectivity and nodel features of several sub-networks, and the Brain-LAN view delves into the local FC within a sub-network. This multi-view data representation characterizes the complex information of BFN with perspectives from whole brain to a single region. The ESFE module is dedicated to extracting high-order features of the brain in Euclidean space. The FC matrix is the input of HCNN, and the first-order embeddings, high-order embeddings, and syncretic high-order features are the output. The Non-ESFE module is dedicated to extracting high-order graph features of the brain in non-Euclidean space. 
The graph structures constructed by Brain-WAN, Brain-MAN and Brain-LAN are used in parallel through three HGNNs to obtain the corresponding first-order graph embedding, high-order graph embedding, and syncretic high-order graph features. Finally, the syncretic high-order features and syncretic high-order graph features are fused through the FFC module to obtain the integrated high-order features which contain the multi-view complementary information of BFNs for NDD diagnosis. §.§ Non-ESFE §.§.§ G-IBHN-G To encode the hierarchical structure of the BFNs, we introduce the G-IBHN-G module which encompasses the components of Brain-WAN, Brain-MAN, and Brain-LAN. At the highest level, Brain-WAN encodes the whole BFN consisting the sub-networks of visual network (VN), somatomotor network (SMN), dorsal attention network (DAN), ventral attention network (VAN), limbic network (LN), frontoparietal network (FPN), and DMN. These sub-networks are derived from the Yeo's 7-network parcellation <cit.>. At the intermediate level, Brain-MAN encodes each sub-network. Taking the DMN as an example, it can be divided into finer sub-networks, such as the frontal lobe (FL) and temporal lobe (TL). At the lowest level, Brain-LAN encodes the regions (nodes) within a finer sub-network. For instance, considering the TL, it can be divided into regions including SFG_L_7_2, SFG_L_7_3, MFG_L_7_5 based on the Brainnetome Atlas <cit.>, or Temporal_Sup_L, Temporal_Sup_R, Temporal_Mid_L, Temporal_Mid_L, Temporal_Inf_L based on AAL1. At each level of the analysis in G-IBHN-G module, the strength of FC between sub-networks (or regions) at the same level is estimated by calculating a RV coefficient <cit.> defined as follows: RV(A,B)=Tr(AA^'BB^')/√(T r[(AA^')^2]Tr[(BB^')^2]), where A and B are n × p and n × q matrices representing two brain regions, n is the number of samples in the rs-fMRI time series, p and q are the numbers of voxels in the regions of A and B respectively. A^' and B^' are the transpose of matrix A and B respectively, Tr(A) is the trace of the matrix A. §.§.§ HGNN Shallow GNNs have limitation in representing complex network and long-range dependencies, as they typically only capture local node features and neighborhood information <cit.>. This limitation hinders performance on tasks with diverse FC information for brain disease classification. Although deeper GNNs better capture intricate network structures, they often suffer from over-smoothing, resulting in indistinguishable node features and degrading model performance <cit.>. The combination of Chebyshev filters and residual connections in ChebNet helps mitigate the over-smoothing problem, which preserves part of original input information at each layer and maintains the diversity of node features across layers <cit.>. This strategy ensures that the model retains unique characteristic of each node while still benefits from the depth of network. Our HGNN is able to hierarchically learn the multi-view high-order topological features in non-Euclidean space. It comprises ChebConv blocks, adaptive feature maps (AFM), GHOP layer, and multilayer perceptron (MLP). Within each ChebConv block, input features pass sequentially through the ChebConv layer, followed by a batch normalization layer, ReLU activation layer, and dropout layer. Each layer within the ChebConv block accepts node features from the preceding layer and produces updated node features for the subsequent layer. 
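As a rough illustration of one such block, the sketch below chains PyTorch Geometric's ChebConv with batch normalization, ReLU, and dropout, and adds a residual skip connection; the feature sizes, Chebyshev order K, dropout rate, and the linear projection on the skip path are assumptions for illustration, not the exact MHNet configuration.

import torch
import torch.nn as nn
from torch_geometric.nn import ChebConv

class ResChebBlock(nn.Module):
    """ChebConv -> BatchNorm -> ReLU -> Dropout, with a residual skip."""
    def __init__(self, in_dim, out_dim, K=3, p_drop=0.3):
        super().__init__()
        self.conv = ChebConv(in_dim, out_dim, K=K)
        self.bn = nn.BatchNorm1d(out_dim)
        self.act = nn.ReLU()
        self.drop = nn.Dropout(p_drop)
        # project the skip path when the feature dimensions differ
        self.skip = nn.Linear(in_dim, out_dim) if in_dim != out_dim else nn.Identity()

    def forward(self, x, edge_index, edge_weight=None):
        h = self.conv(x, edge_index, edge_weight)
        h = self.drop(self.act(self.bn(h)))
        return h + self.skip(x)   # residual connection helps mitigate over-smoothing

# x: node features of one brain graph (num_nodes, feat_dim); edge_index stands in for
# the thresholded RV-coefficient adjacency in COO format
x = torch.randn(246, 64)
edge_index = torch.randint(0, 246, (2, 1024))
block = ResChebBlock(64, 64, K=3)
out = block(x, edge_index)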
AFM effectively harnesses the features from each ChebConv block to derive multi-scale node feature embeddings. Spectral-based graph convolution combines the overall graph structure with its individual components using the Chebyshev spectral graph convolution operator <cit.>. This method defines the convolution of a signal h∈ℝ^m (where each node has a scalar value) with a filter g_θ=diag(θ), parameterized by θ∈ℝ^m: g_θ* h=Ug_θ(Λ)U^⊤ h, where * is the convolution operator on the graph. The matrix U consists of the eigenvectors of the normalized Laplacian L = I-D^-1/2AD^-1/2, which is diagonalized as L = UΛ U^⊤, where I denotes the identity matrix, Λ represents the diagonal matrix containing the eigenvalues of L, and D denotes the degree matrix derived from the adjacency matrix A of the graph. To alleviate the computational cost of evaluating Ug_θ(Λ)U^⊤, we approximate g_θ(Λ) with K-order Chebyshev polynomials, expressed as: g_θ* h≈∑_k=0^K-1θ_kT_k(L̃)h. We define the hierarchically structured brain graphs as G_WAN = (V_w,A_w), G_MAN = (V_m,A_m), and G_LAN = (V_l,A_l), where each node represents a sub-graph (or a region at the lowest level of analysis in the G-IBHN-G module) of the corresponding network, i.e., V_w = (v_1,…,v_n). The node features in G_WAN = (V_w,A_w) can be expressed as the matrix H_w = [x_1, x_2, …, x_n], where x_n is the feature vector associated with v_n. This technique uses the graph's Laplacian eigenbasis to perform convolutions in the frequency domain, capturing both local and global information. In this work, we define the output features of the l-th ChebConv as: H_w^(l+1) = ∑_k=0^K-1θ_k^(l) T_k(L̃) H_w^(l), where L̃ is the rescaled graph Laplacian, θ_k^(l) are the trainable parameters, and the Chebyshev polynomials follow the recurrence T_k(L̃)=2L̃T_k-1(L̃)-T_k-2(L̃) with T_0(L̃)=I and T_1(L̃)=L̃. The final multi-scale graph embedding Z_w is obtained by aggregating the graph embeddings of all ChebConv blocks using AFM, expressed as: Z_w = ∑_ls^(l)⊙H_w^(l), where s^(l) (l ∈{0,1,2}) is a trainable weight following a softmax distribution, defined as: s^(l) = Softmax(r^(l)) = exp(r^(l))/∑_lexp(r^(l)), where r^(l) are learnable weights with random initialization, implemented through learnable parameter matrices. AFM introduces an adaptive weighting mechanism that dynamically adjusts the weight of each layer's features through a learnable parameter matrix, so that the contribution of each node's features at different scales can be adaptively adjusted, which effectively alleviates the over-smoothing of features. In the non-Euclidean space of the BFN, GHOP captures high-order statistical information, which enables more complex and nuanced feature representations than traditional first-order pooling methods. The high-order features encapsulate both direct and indirect interactions within the BFN, thereby enriching the representation with high-order interactions. Taking the Brain-LAN in G-IBHN-G as an example, the GHOP scheme is used to extract high-order graph features. The high-order graph features Z_GHOP can be expressed as: Z_GHOP=Z_l^TZ_l, where Z_l represents the first-order features of Brain-LAN and Z_l^T is its transpose. The expression of the high-order features is: Z̃_l = Concat(Z_l, f_MLP(Z_GHOP)). Similarly, the high-order features Z̃_w and Z̃_m of the brain graphs G_WAN = (V_w, A_w) and G_MAN = (V_m, A_m) can also be obtained through the above steps.
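A minimal sketch of the GHOP readout is shown below: the Gram matrix of the first-order node embeddings is flattened, mapped through an MLP, and concatenated with a first-order summary. How Z_l is summarized before concatenation, as well as the MLP width and output size, are assumptions made here for illustration (we use mean pooling over nodes).

import torch
import torch.nn as nn

class GHOP(nn.Module):
    """Graph high-order pooling: second-order statistics Z^T Z of node embeddings,
    embedded by an MLP and concatenated with a first-order summary."""
    def __init__(self, feat_dim, hidden=256, out_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim * feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, out_dim)
        )

    def forward(self, z):                   # z: (num_nodes, feat_dim) first-order embeddings
        gram = z.t() @ z                    # (feat_dim, feat_dim) high-order statistics
        high = self.mlp(gram.flatten())     # embed the flattened Gram matrix
        first = z.mean(dim=0)               # simple first-order readout (assumption)
        return torch.cat([first, high], dim=-1)

z_l = torch.randn(116, 64)                  # e.g., Brain-LAN node embeddings
z_tilde_l = GHOP(64)(z_l)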
Finally, the high-order graph features obtained through each layer of the brain network are fused to obtain the high-order graph fused features Z̃_GHOP in non-Euclidean space: Z̃_GHOP = Concat(Z̃_w, Z̃_m, Z̃_l), where Z̃_w, Z̃_m, and Z̃_l represent the high-order features obtained from Brain-WAN, Brain-MAN, and Brain-LAN respectively. §.§ ESFE The aim of our ESFE is to extract discriminative features from FC matrix apart from the features encoded from the non-ESFE. ESFE module contains the components of FCG module and HCNN module. HCNN comprises a dimensionality reduction (DR) layer, 1D-CNN layer, high-order pooling (HOP) Layer, and MLP. In ESFE module, the FC matrix is denoted as C∈ℝ^N× N. The DR layer extracts the upper triangular of the FC matrix and concatenates the element values of each row in row order to form a one-dimensional feature. Subsequently, this one-dimensional feature is input into two layers of 1D-CNN and MLP, resulting in the first-order features Z_fc. 1D-CNN can effectively extract local features and capture local correlations between brain regions. Weight sharing reduces the number of parameters and enhances the robustness of the model. Through multi-layer convolution, 1D-CNN can capture the combined features, by which the complex interactive relationship between different brain regions can be better represented. For the upper triangular matrix of C, we denote a one-dimensional feature vector x∈ℝ^k, where k=N(N-1)/2. The convolution operation is performed by a one-dimensional convolution kernel to extract local features. Assume the convolution kernel size is n, the output feature map can be expressed as: y_i = ∑_j=0^n-1w_jx_i+j+b, where w_j is the weight of the convolution kernel and b is the bias term. The MLP outputs the features Z_fc, where the transformation of each layer can be expressed as: z^(l+1) = σ(W^(l)z^(l)+b^(l)), where W^(l) and b^(l) are the weights and biases of the lth layer, and σ is the activation function. For calculating high-order representation, we define the HOP operator as follow: Z_HOP = Z_fc^TZ_fc, where Z_HOP is a real symmetric matrix. The expression of the final high-order features is: Z̃_HOP = Concat(Z_fc, f_MLP(Z_HOP)), where Z̃_HOP represents high-order fusion features. §.§ FFC Module In this module, the final discriminative features for NDD diagnosis are obtained through the fusion of high-order features Z̃, which can be expressed as: Z̃ = Concat(Z̃_GHOP, Z̃_HOP), The predictor comprises a MLP followed by a softmax function to generate a probability vector indicating the presence or absence of the disease for each subject. The prediction equation is as follows: ŷ=Softmax(MLP(Z̃)), where MLP denotes a multi-layer perceptron that processes the concatenated features Z̃. The softmax function then converts the MLP output into a probability vector indicating the likelihood of the presence or absence of the disease. Cross-entropy loss is employed to train the entire model. ℒ=-1/N∑_i=1^N[y_ilog(ŷ_i)+(1-y_i)log(1-ŷ_i)], where N is the number of subjects, y_i is the true label for the i-th subject, and ŷ_i is the predicted probability for the i-th subject. § EXPERIMENTS §.§ Datasets and Preprocessing The Autism Brain Imaging Data Exchange I (ABIDE-I) was released in August 2012, and it is the first initiative of the ABIDE project, involving 17 international research sites sharing rs-fMRI, anatomical, and phenotypic data. ABIDE-I has been extensively used in research. 
It includes 1112 subjects, with 539 subjects of ASD and 573 of typical controls. The ages of subjects are within 7-64 years old. For more information about collection parameters and site distribution, see the web page <https://fcon_1000.projects.nitrc.org/indi/abide/abide_I.html>. The Autism Brain Imaging Data Exchange II (ABIDE-II), supported by the National Institute of Mental Health, was established to build on ABIDE-I's success in aggregating MRI data across 19 sites. To meet the requirement of larger data samples, ABIDE-II has collected the data from 1114 subjects, with 521 of ASD and 593 health controls, aged 5-64 years. ABIDE-II characterizes both the complexity of the connectome and the heterogeneity of ASD with enhanced phenotypic details and associated symptoms. Besides, it also includes longitudinal data from 38 individuals at two time points. For more information about collection parameters and site distribution, see the web page <https://fcon_1000.projects.nitrc.org/indi/abide/abide_II.html>. The ADHD-200 is a publicly available multi-site neuroimaging dataset designed to facilitate the study of ADHD. The ADHD-200 is a collaboration of 8 international imaging sites that have aggregated neuroimaging data from 362 children and adolescents with ADHD and 585 typically developing controls. These 947 datasets are composed of T1 and rs-fMRI data along with phenotypic information. The ADHD-200 dataset records the details of the ADHD diagnostic types of each subject, including ADHD-I (inattentive), ADHD-C (combined) and ADHD-HI (hyperactive/impulsive). For more information about collection parameters and site distribution, see the web page <http://fcon_1000.projects.nitrc.org/indi/adhd200/>. The number of subjects included in this study and their demographics are given in Table <ref>. The DPARSF is used to preprocess the rs-fMRI data from ABIDE-I and ABIDE-II datasets <cit.> and Athena2 <cit.> pipeline is used to preprocess the rs-fMRI data from ADHD-200 dataset. The preprocessing procedure includes skull stripping, slice timing correction, and motion correction to minimize artifacts. Additionally, nuisance covariates, including signals from white matter, cerebrospinal fluid, and head motion, were regressed out. Next, fMRI images were normalized to Montreal Neurological Institute (MNI) space and underwent spatial smoothing using a Gaussian kernel with a full-width at half-maximum (FWHM) of 6 × 6 × 6 mm3. BOLD signals underwent further processing through band-pass filtering (0.01 ≤ f ≤ 0.1 Hz) to eliminate high-frequency noise unrelated to neural activity and low-frequency drift in MRI scans. AAL1 and Brainnetome Altase templates are two important and widely used brain atlases for neuroimaging analysis. AAL1 partitions the brain into 116 regions, providing a standardized framework for studying brain activity, while the Brainnetome Atlas offers a finer parcellation into 246 regions based on functional and structural connectivity. These templates are used to extract BOLD time series from specific brain regions in fMRI studies. §.§ Experimental Setting The experiments were conducted by using a single server with NVIDIA RTX 3090 GPU. We developed our model using PyTorch, and every algorithm mentioned in this study is capable of running on a single GPU. Adaptive moment estimation (Adam) was employed for network optimization. For ABIDE-I (NC vs. 
ASD), the dropout rate is set to 0.3, the learning rate is set to 0.0001 and the maximum number of epochs is set to 240, the cutoff threshold is set to 19.03%. For ABIDE-II (NC vs. ASD), the dropout rate is set to 0.25, the learning rate is set to 0.0001 and the maximum number of epochs is set to 200, the cutoff threshold is set to 17.14%. For ADHD-200 (NC vs. ADHD), the dropout rate is set to 0.3, the learning rate is set to 0.0001 and the maximum number of epochs is set to 300, the cutoff threshold is set to 10.23%. The results of classification are averaged over 10 times of cross validated test. Five metrics of AUC, ACC, SEN, SPEC, and AVG are used to evaluate the classification performance. §.§ Competing Methods We conduct a comparative analysis of the proposed MHNet framework against twelve distinct traditional machine learning and deep learning methods. One-dimensional feature of RV coefficient matrix or FC coefficient matrix obtained from different views of BFN are input to the traditional machine learning methods. In order to obtain the best classification performance, we experiment with varying the number of nodes in different hidden layers of the MLP and adjusting the penalty coefficients in the linear SVM. Additionally, we explore different k values (i.e., the number of neighbors) and select appropriate distance metrics (such as Euclidean or Manhattan distance) in KNN. In addition to the traditional machine learning method, we list the comparative deep learning methods below: * BrainGNN<cit.>: BrainGNN is an end-to-end graph neural network designed for fMRI analysis, capable of accurately identifying significant brain regions to decode task states and detect biomarkers. BrainGNN has been released at <https://github.com/xxlya/BrainGNN_Pytorch>. * AL-NEGAT<cit.>: AL-NEGAT (adversarial learning-based node-edge graph attention network) is a model designed to classify brain disease using both structural and functional MRI data. AL-NEGAT leverages both node and edge features through an attention-based mechanism and adversarial training methods to improve robustness and interpretability. AL-NEGAT has been released at <https://github.com/XiJiangLabUESTC/Node-Edge-Graph-Attention-Networks>. * MVS-GCN<cit.>: MVS-GCN is a graph neural network guided by prior brain structure learning, which integrates graph structure learning with multi-task graph embedding and combines brain networks of varying sparsity levels as adjacency matrices for comprehensive feature representation. MVS-GCN has been released at <https://github.com/GuangqiWen/MVS-GCN>. * Hi-GCN<cit.>: Hi-GCN is an advanced neural network framework designed for learning graph embeddings from brain networks. It is specifically tailored to enhance accuracy of brain disorder prediction by considering both individual brain networks and the relationships between subjects in a population networks. Hi-GCN has been released at <https://github.com/haojiang1/hi-GCN>. * BrainNetCNN<cit.>: BrainNetCNN is a convolutional neural network framework designed to predict clinical NDD based on brain networks. This method leverages novel convolutional filters that utilize the topological locality of brain networks. BrainNetCNN has been released at <https://github.com/nicofarr/brainnetcnnVis_pytorch>. * BrainNetTF<cit.>: BrainNetTF models brain networks as graphs with nodes of fixed size and order. This method proposes an readout operation that results in distinctive cluster-aware node embeddings and informative graph embeddings. 
BrainNetTF has been released at <https://github.com/Wayfear/BrainNetworkTransformer>. * MAHGCN<cit.>: MAHGCN is a framework for brain disorder diagnosis that utilizes multiscale brain atlases to construct hierarchical FCNs. It combines GCNs with atlas-guided pooling to extract and integrate multiscale topological features, significantly enhancing the accuracy of brain disorder predictions. MAHGCN has been released at <https://github.com/MianxinLiu/MAHGCN-code>. * Com-BrainTF<cit.>: Com-BrainTF is a transformer architecture for brain network analysis that incorporates community-specific information to enhance the accuracy and interpretability of fMRI data analysis. It features a hierarchical local-global transformer design that efficiently learns intra- and inter-community node embeddings. Com-BrainTF has been released at <https://github.com/ubc-tea/Com-BrainTF>. §.§ Evaluation Metrics We evaluated the effectiveness of the MHNet framework by measuring five metrics: accuracy (ACC), sensitivity (SEN), specificity (SPEC), the area under the curve (AUC), and the average score (AVG) for each metric accordingly. In order to mitigate bias from a singular dataset split, we employed a 10-fold cross-validation approach during the evaluation phase. § RESULTS §.§ Classification Performance The classification results of using AAL1 atlas and Brainnetome atlas are shown in Table <ref> and Table <ref> respectively. Our proposed MHNet achieves the highest mean accuracy of 76.29%, 75.16%, 70.33% on the datasets of ABIDE-I, ABIDE-II, and ADHD-200 respectively using the Brainnetome atlas, which is approximately 4%–8% higher than other state-of-the-art methods. The two ML methods show the worst performance. The original GCN exhibits the worst performance in the category of deep learning methods for ASD or ADHD classification. §.§ Ablation Study §.§.§ Influence of Cutoff Threshold For HGNN, we use the obtained RV coefficient matrix to construct the adjacency matrix of the brain hierarchical network of each data set, but the RV coefficient matrix is not a sparse binary matrix. In order to achieve the best classification effect, threshold processing is required for the matrix. Specifically, if the RV coefficient between brain area i and brain area j is less than the cutoff threshold T, the value at (i, j) of the RV coefficient matrix is set to 0, otherwise it is set to 1. As shown in Figure <ref>, in order to select a suitable cutoff threshold, we plotted the relationship curve between the percentage of retained edges and the cutoff threshold, and selected the inflection point of the curve as the cutoff point. The cutoff threshold at this point is used to process the RV coefficient matrix into a sparse binary adjacency matrix. Figure <ref> shows the classification effect that MHNet can achieve when taking different cutoff thresholds. We can see that the best classification effect is achieved when the inflection point of the relationship curve is selected. §.§.§ Influence of Brain Atlas Selection In order to study the robustness of MHNet with respect to selecting different brain atlas, we use the atlases of AAL1 and Brainnetome to construct graphs with different numbers of nodes. The classification results are shown in Figure <ref>. Our proposed MHNet model is built based on the hierarchical structure of BFN. The finer the hierarchical structure is, the richer the brain FC and spatial information that the model can utilize. 
As we can see in Figure <ref>, the Brainnetome atlas shows superior classification performance compared to AAL1. This is because higher resolution and more detailed brain parcellation are conducive to building the brain hierarchical structure with more sophisticated information. In addition, the Brainnetome atlas, derived from multimodal data, takes FC information into account, so that the parcellation is in line with the actual BFN. §.§.§ Influence of GNN Encoder To study the impact of different GNN encoders on the classification performance of MHNet, we replaced the Res-ChebNet blocks with GCN and ChebNet, and used the Brainnetome atlas to evaluate the performance on three datasets. As shown in Figure <ref>, Res-ChebNet outperforms GCN and ChebNet on all three datasets. This means that Res-ChebNet has higher efficiency and accuracy in capturing and processing complex BFNs. §.§.§ Influence of Multi-view and High-order Feature Extractors In order to verify the effectiveness of our proposed MHNet based on multi-view and high-order feature representation, we conducted ablation experiments on the multi-view and high-order feature representations constructed by the brain hierarchical network. As shown in Table <ref>, without encoding high-order features and multi-view information, GNN (Only Brain-LAN) demonstrates basic performance, and all evaluation indicators are relatively low. Compared with GNN (Only Brain-LAN), the performance of GNN is improved, which means that the hierarchical structure of the BFN can capture more useful features. Based on GNN, HGNN is proposed to encode high-order features, which significantly improves the performance of the model, especially in capturing the complex FC of the brain. Apart from HGNN, we also add a CNN which encodes the FC information of the brain in Euclidean space, resulting in an HGNN + CNN model. Although this model introduces the Euclidean information in FC, the performance improvement is not as significant as that of HGNN. The high-order features may play a more important role in capturing complex activity patterns of the brain. HGNN + HCNN shows the best performance with respect to all evaluation indicators, indicating the effectiveness of introducing the complementary multi-view high-order features. § DISCUSSION §.§ Interpretability Analysis We first discuss the important brain regions associated with NDD when using MHNet. By integrating HGNN and HCNN, the brain regions most relevant to NDD can be effectively identified. The HGNN module encodes both node features and connectivity, by which the most representative brain regions and connections related to the NDD in the classification tasks can be identified. In the HCNN module, the local brain region features with the strongest response to NDD classification are identified by analyzing the convolutional layer activation maps. The top 10 identified brain regions which are most relevant to the NDD are visualized in Figure <ref> for experimental verification on the ABIDE-I, ABIDE-II, and ADHD-200 datasets. ASD involves abnormalities in multiple brain regions, including the lingual gyrus, inferior temporal gyrus, hippocampus, amygdala, calcarine fissure, and occipital middle gyrus <cit.>. The lingual gyrus plays an important role in visual processing, especially in facial recognition and emotional understanding. Abnormality of the lingual gyrus may lead to difficulties in visual information integration and social interaction for patients with ASD <cit.>. 
The inferior temporal gyrus and the middle temporal gyrus play an important role in language processing and social cognition, and their functional abnormalities may explain the language comprehension and episodic memory impairments of patients with ASD <cit.>. Abnormalities in the hippocampus and parahippocampal gyrus may lead to memory and spatial cognition problems, while abnormalities in the amygdala are associated with problems in emotional processing and social behavior regulation <cit.>. Abnormalities in the calcarine fissure and the occipital middle gyrus may affect primary and higher-order processing of visual information, which explains the challenges that ASD patients face in processing visual social cues <cit.>. In addition, regions such as the insula, precuneus, fusiform gyrus, parietal superior lobule, and caudate nucleus also show significant abnormalities in ASD patients. The insula plays an important role in emotion processing and interoception, and its functional changes may lead to difficulties in emotion understanding and self-awareness for ASD patients <cit.>. The precuneus is related to self-reflection and social cognition, and its abnormalities may affect self-awareness and understanding of others in social interactions. The fusiform gyrus plays an important role in facial recognition, and its abnormalities may lead to difficulties for ASD patients in recognizing faces and interpreting facial expressions. Abnormalities in the superior parietal lobule may affect spatial cognition and visual-spatial processing, which explains the challenges of ASD patients in spatial navigation and task switching. The caudate nucleus plays an important role in motor control, learning, and executive function, and its functional abnormalities may be related to repetitive behaviors, narrow interests, and executive dysfunction in ASD patients <cit.>. The combined changes in these brain regions lead to a wide range of challenges in cognitive, emotional, and social functions for ASD patients. ADHD involves abnormalities in multiple brain regions, including the left hippocampus and left parahippocampal gyrus, which play an important role in memory formation and emotion regulation; their abnormalities may lead to deficits in working memory and episodic memory <cit.>. The right amygdala plays an important role in emotion processing and response control, and its functional abnormalities may lead to emotional overreaction and impulsive behavior <cit.>. Abnormalities in the left calcarine fissure may affect visual attention and information processing. The right caudate nucleus and bilateral thalamus play an important role in motor control and attention regulation, and their abnormalities may lead to excessive movement, impulsive behavior, and difficulty in attention regulation <cit.>. Abnormalities in the left middle temporal gyrus may affect language comprehension and social cognition <cit.>. The combined abnormal changes in these areas explain the wide range of cognitive, emotional, and behavioral disorders of ADHD patients. §.§ Brain Feature Analysis The combination of multi-view features of nodes and connectivity can provide more comprehensive information about the BFN in the MHNet framework. The FC matrix is estimated by calculating the similarity of neural activity between different brain regions. A CNN was used to encode Euclidean-space features of the FC matrix in this work. The hierarchical structure of the BFN is modeled in non-Euclidean space. 
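Since the FC matrix discussed above is estimated from the similarity of regional neural activity, a small illustrative sketch follows. Pearson correlation between regional BOLD time series is used here as a common choice; the specific similarity measure, array shapes, and function name are assumptions for illustration rather than the authors' implementation.

```python
import numpy as np

def functional_connectivity(roi_timeseries):
    """Estimate an FC matrix from regional rs-fMRI signals.
    roi_timeseries : (T, R) array with T time points for R brain regions (ROIs)."""
    fc = np.corrcoef(roi_timeseries.T)  # (R, R) pairwise Pearson correlations
    np.fill_diagonal(fc, 0.0)           # drop trivial self-correlations (assumption)
    return fc
```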
By encoding the graph node features and edges, a GNN can more naturally capture the complex topological characteristics of the BFN <cit.>. The combination of CNN and GNN can not only capture the node and connectivity information of the global BFN, but also characterize the local hierarchical structure of the BFN. Multi-view high-order features delineate the intricate and abstract patterns of the BFN by integrating complementary features from both Euclidean and non-Euclidean spaces, thereby enhancing the robustness of MHNet and improving the classification accuracy. §.§ Non-imaging Data In this work, our MHNet model achieves significant diagnostic results. However, our proposed MHNet model does not take non-imaging data into account. Phenotypic data, such as clinical symptoms and behavioral assessments, play an important role in NDD diagnosis. Fusing non-imaging data into our current framework is an interesting direction that will be explored to improve its diagnostic performance in future studies. By integrating phenotypic information <cit.>, models can capture more pathological features and individual differences, resulting in more accurate diagnostic performance. § CONCLUSION MHNet offers a significant advancement in diagnosing NDD, such as ASD and ADHD, using rs-fMRI data. By integrating both GNN and CNN methodologies, MHNet effectively captures hierarchical and high-order feature representations. The inclusion of residual ChebNet in the HGNN module improves gradient flow, enhances feature propagation, and increases model flexibility. The multi-view feature integration and extraction of the hierarchical structure of BFNs characterize both the global and local topological information of the BFN in non-Euclidean and Euclidean space. Our study addresses the limitations of current deep learning models used for diagnosing NDD with rs-fMRI in non-uniform data adaptability and capturing high-level features. By encoding the hierarchical structure of the BFN and integrating high-order non-Euclidean and Euclidean features, MHNet has shown superior performance to SOTA methods on three NDD datasets, demonstrating its powerful feature extraction and classification capabilities. For future research directions, we aim to generalize our framework to diagnose additional brain disorders and explore the integration of MHNet with non-image information for brain disorder diagnosis. ref-journal7 Yahata, N.; Morimoto, J.; Hashimoto, R.; et al. A small number of abnormal brain connections predicts adult autism spectrum disorder. Nature Communications 2016, 7(1), 11254. ref-journal8 Leekam, S.R.; Prior, M.R.; Uljarevic, M. Restricted and repetitive behaviors in autism spectrum disorders: a review of research in the last decade. Psychological Bulletin 2011, 137(4), 562. ref-journal9 Kessler, R.C.; Adler, L.A.; Barkley, R.; et al. Patterns and predictors of attention-deficit/hyperactivity disorder persistence into adulthood: results from the national comorbidity survey replication. Biological Psychiatry 2005, 57(11), 1442–1451. ref-journal10 Parens, E.; Johnston, J. Facts, values, and attention-deficit hyperactivity disorder (ADHD): an update on the controversies. Child and Adolescent Psychiatry and Mental Health 2009, 3, 1–17. ref-journal1 Elakkiya, M.K. Novel deep learning models with novel integrated activation functions for autism screening: AutiNet and MinAutiNet. Expert Systems with Applications 2024, 238, 122102. ref-journal2 Mao, Z.; Su, Y.; Xu, G.; et al. Spatio-temporal deep learning method for adhd fmri classification. 
Information Sciences 2019, 499, 1–11. ref-journal3 Hull, J.V.; Dokovna, L.B.; Jacokes, Z.J.; et al. Resting-state functional connectivity in autism spectrum disorders: a review. Frontiers in Psychiatry 2017, 7, 205. ref-journal4 Kim, S.G.; Ogawa, S. Biophysical and physiological origins of blood oxygenation level-dependent fMRI signals. Journal of Cerebral Blood Flow & Metabolism 2012, 32(7), 1188–1206. ref-journal6 Mueller, S.; Keeser, D.; Reiser, M.F.; et al. Functional and structural MR imaging in neuropsychiatric disorders, part 2: application in schizophrenia and autism. American Journal of Neuroradiology 2012, 33(11), 2033–2037. ref-journal11 Wang, N.; Yao, D.; Ma, L.; et al. Multi-site clustering and nested feature extraction for identifying autism spectrum disorder with resting-state fMRI. Medical Image Analysis 2022, 75, 102279. ref-journal12 Shephard, E.; Tye, C.; Ashwood, K.L.; et al. Oscillatory neural networks underlying resting-state, attentional control and social cognition task conditions in children with ASD, ADHD and ASD+ ADHD. Cortex 2019, 117, 96–110. ref-journal13 Kaboodvand, N.; Iravani, B.; Fransson, P. Dynamic synergetic configurations of resting-state networks in ADHD. Neuroimage 2020, 207, 116347. ref-journal14 Kitzbichler, M.G.; Khan, S.; Ganesan, S.; et al. Altered development and multifaceted band-specific abnormalities of resting state networks in autism. Biological Psychiatry 2015, 77(9), 794–804. ref-journal15 Cerliani, L.; Mennes, M.; Thomas, R.M.; et al. Increased functional connectivity between subcortical and cortical resting-state networks in autism spectrum disorder. JAMA Psychiatry 2015, 72(8), 767–777. ref-journal16 Hoekzema, E.; Carmona, S.; Ramos-Quiroga, J.A.; et al. An independent components and functional connectivity analysis of resting state fMRI data points to neural network dysregulation in adult ADHD. Human Brain Mapping 2014, 35(4), 1261–1272. ref-journal17 Smith, S.M.; Vidaurre, D.; Beckmann, C.F.; et al. Functional connectomics from resting-state fMRI. Trends in Cognitive Sciences 2013, 17(12), 666–682. ref-journal18 Sherkatghanad, Z.; Akhondzadeh, M.; Salari, S.; et al. Automated detection of autism spectrum disorder using a convolutional neural network. Frontiers in Neuroscience 2020, 13, 482737. ref-journal19 Kawahara, J.; Brown, C.J.; Miller, S.P.; et al. BrainNetCNN: Convolutional neural networks for brain networks; towards predicting neurodevelopment. NeuroImage 2017, 146, 1038–1049. ref-journal20 Yin, W.; Li, L.; Wu, F.X. Deep learning for brain disorder diagnosis based on fMRI images. Neurocomputing 2022, 469, 332–345. ref-journal21 Kipf, T.N.; Welling, M. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 2016. ref-journal66 He, S.; Lu, X.; Gu, J.; et al. RSI-Net: Two-stream deep neural network for remote sensing images-based semantic segmentation. IEEE Access 2022, 10, 34858–34871. ref-journal22 Yu, W.; Lei, B.; Ng, M.K.; et al. Tensorizing GAN with high-order pooling for Alzheimer’s disease assessment. IEEE Transactions on Neural Networks and Learning Systems 2021, 33(9), 4945–4959. ref-journal23 He, S.; Tang, H.; Lu, X.; et al. MSHCNet: Multi-Stream Hybridized Convolutional Networks with Mixed Statistics in Euclidean/Non-Euclidean Spaces and Its Application to Hyperspectral Image Classification. arXiv preprint arXiv:2110.03346 2021. ref-journal68 Chen, X.; Zhang, H.; Gao, Y.; et al. High‐order resting‐state functional connectivity network for MCI classification. 
Human Brain Mapping 2016, 37(9), 3282–3296. HOP Gao, Z.; Xie, J.; Wang, Q.; et al. Global second-order pooling convolutional networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2019, 3024–3033. ref-journal24 Fan, L.; Li, H.; Zhuo, J.; et al. The human brainnetome atlas: a new brain atlas based on connectional architecture. Cerebral Cortex 2016, 26(8), 3508–3526. ref-journal25 Yeo, B.T.T.; Krienen, F.M.; Sepulcre, J.; et al. The organization of the human cerebral cortex estimated by intrinsic functional connectivity. Journal of Neurophysiology 2011. ref-journal26 Huang, S.; Zeng, W.; Shi, Y. Internet-like brain hierarchical network model: Alzheimer's disease study as an example. Computer Methods and Programs in Biomedicine 2021, 211, 106393. ref-journal27 Wang, Z.; Ji, S. Second-order pooling for graph neural networks. IEEE Transactions on Pattern Analysis and Machine Intelligence 2020, 45(6), 6870–6880. ref-journal29 Santana, C.P.; de Carvalho, E.A.; Rodrigues, I.D.; et al. rs-fMRI and machine learning for ASD diagnosis: A systematic review and meta-analysis. Scientific Reports 2022, 12(1), 6030. ref-conference30 Eslami, T.; Saeed, F. Auto-ASD-network: a technique based on deep learning and support vector machines for diagnosing autism spectrum disorder using fMRI data. In Proceedings of the 10th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics 2019, 646–651. ref-journal31 Abraham, A.; Milham, M.P.; Di Martino, A.; et al. Deriving reproducible biomarkers from multi-site resting-state data: An Autism-based example. NeuroImage 2017, 147, 736–745. ref-journal32 Feczko, E.; Balba, N.M.; Miranda-Dominguez, O.; et al. Subtyping cognitive profiles in autism spectrum disorder using a functional random forest algorithm. NeuroImage 2018, 172, 674–688. knn Kang, L.; Chen, J.; Huang, J.; et al. Autism spectrum disorder recognition based on multi-view ensemble learning with multi-site fMRI. Cognitive Neurodynamics 2023, 17(2), 345–355. ref-journal33 Thapar, A.; Cooper, M.; Rutter, M. Neurodevelopmental disorders. The Lancet Psychiatry 2017, 4(4), 339–346. ref-journal34 Park, K.W.; Cho, S.B. A residual graph convolutional network with spatio-temporal features for autism classification from fMRI brain images. Applied Soft Computing 2023, 142, 110363. ref-journal35 Zhang, H.; Song, R.; Wang, L.; et al. Classification of brain disorders in rs-fMRI via local-to-global graph neural networks. IEEE Transactions on Medical Imaging 2022, 42(2), 444–455. ref-journal36 Jiang, H.; Cao, P.; Xu, M.Y.; et al. Hi-GCN: A hierarchical graph convolution network for graph embedding learning of brain network and brain disorders prediction. Computers in Biology and Medicine 2020, 127, 104096. ref-journal37 Li, X.; Zhou, Y.; Dvornek, N.; et al. Braingnn: Interpretable brain graph neural network for fmri analysis. Medical Image Analysis 2021, 74, 102233. ref-journal38 Wen, G.; Cao, P.; Bao, H.; et al. MVS-GCN: A prior brain structure learning-guided multi-view graph convolution network for autism spectrum disorder diagnosis. Computers in Biology and Medicine 2022, 142, 105239. ref-journal39 Yang, C.; Wang, P.; Tan, J.; et al. Autism spectrum disorder diagnosis using graph attention network based on spatial-constrained sparse functional brain networks. Computers in Biology and Medicine 2021, 139, 104963. ref-journal61 Haweel, R.; Shalaby, A.; Mahmoud, A.; et al. 
A robust DWT–CNN‐based CAD system for early diagnosis of autism using task‐based fMRI. Medical Physics 2021, 48(5), 2315–2326. ref-journal62 De Silva, S.; Dayarathna, S.U.; Ariyarathne, G.; et al. fMRI feature extraction model for ADHD classification using convolutional neural network. International Journal of E-Health and Medical Communications (IJEHMC) 2021, 12(1), 81–105. ref-journal63 Zou, L.; Zheng, J.; Miao, C.; et al. 3D CNN based automatic diagnosis of attention deficit hyperactivity disorder using functional and structural MRI. IEEE Access 2017, 5, 23626–23636. ref-conference64 Eslami, T.; Saeed, F. Auto-ASD-network: a technique based on deep learning and support vector machines for diagnosing autism spectrum disorder using fMRI data. In Proceedings of the 10th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics 2019, 646–651. rv Robert, P.; Escoufier, Y. A unifying tool for linear multivariate statistical methods: the RV-coefficient. Journal of the Royal Statistical Society Series C: Applied Statistics 1976, 25(3), 257–265. ref-journal40 Rusch, T.K.; Bronstein, M.M.; Mishra, S. A survey on oversmoothing in graph neural networks. arXiv preprint arXiv:2303.10993 2023. ref-book41 Hamilton, W.L. Graph Representation Learning; Morgan & Claypool Publishers: 2020. ref-journal67 Tang, H.; He, S.; Yang, M.; Lu, X.; Yu, Q.; Liu, K.; Wang, N. CSC-Unet: a novel convolutional sparse coding strategy based neural network for semantic segmentation. IEEE Access 2024. ref-journal42 Wang, Y.; Wang, H.; Jin, H.; et al. Exploring graph capsule network for graph classification. Information Sciences 2021, 581, 932–950. ref-journal43 Levie, R.; Monti, F.; Bresson, X.; et al. Cayleynets: Graph convolutional neural networks with complex rational spectral filters. IEEE Transactions on Signal Processing 2018, 67(1), 97–109. ref-journal44 Yan, C.G.; Wang, X.D.; Zuo, X.N.; et al. DPABI: data processing & analysis for (resting-state) brain imaging. Neuroinformatics 2016, 14, 339–351. athena Bellec, P.; Chu, C.; Chouinard-Decorte, F.; et al. The neuro bureau ADHD-200 preprocessed repository. NeuroImage 2017, 144, 275–286. ref-journal45 Chen, Y.; Yan, J.; Jiang, M.; et al. Adversarial learning based node-edge graph attention networks for autism spectrum disorder identification. IEEE Transactions on Neural Networks and Learning Systems 2024. ref-journal46 Kan, X.; Dai, W.; Cui, H.; et al. Brain network transformer. Advances in Neural Information Processing Systems 2022, 35, 25586–25599. ref-journal48 Liu, M.; Zhang, H.; Shi, F.; et al. Hierarchical graph convolutional network built by multiscale atlases for brain disorder diagnosis using functional connectivity. IEEE Transactions on Neural Networks and Learning Systems 2023. ref-conference47 Bannadabhavi, A.; Lee, S.; Deng, W.; et al. Community-Aware Transformer for Autism Prediction in fMRI Connectome. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention; Cham: Springer Nature Switzerland, 2023, 287–297. ref-journal49 Banker, S.M.; Gu, X.; Schiller, D.; et al. Hippocampal contributions to social and cognitive deficits in autism spectrum disorder. Trends in Neurosciences 2021, 44(10), 793–807. ref-journal50 Donovan, A.P.A.; Basson, M.A. The neuroanatomy of autism–a developmental perspective. Journal of Anatomy 2017, 230(1), 4–15. ref-journal51 Mundy, P. A review of joint attention and social‐cognitive brain systems in typical development and autism spectrum disorder. 
European Journal of Neuroscience 2018, 47(6), 497–514. ref-journal52 Monk, C.S.; Peltier, S.J.; Wiggins, J.L.; et al. Abnormalities of intrinsic functional connectivity in autism spectrum disorders. NeuroImage 2009, 47(2), 764–772. ref-journal53 Bachevalier, J.; Loveland, K.A. The orbitofrontal–amygdala circuit and self-regulation of social–emotional behavior in autism. Neuroscience & Biobehavioral Reviews 2006, 30(1), 97–117. ref-journal54 Löffler, A.; Foell, J.; Bekrater-Bodmann, R. Interoception and its interaction with self, other, and emotion processing: implications for the understanding of psychosocial deficits in borderline personality disorder. Current Psychiatry Reports 2018, 20, 1–9. ref-journal55 Turner, K.C.; Frost, L.; Linsenbardt, D.; et al. Atypically diffuse functional connectivity between caudate nuclei and cerebral cortex in autism. Behavioral and Brain Functions 2006, 2, 1–12. ref-journal56 Peterson, D.J.; Ryan, M.; Rimrodt, S.L.; et al. Increased regional fractional anisotropy in highly screened attention-deficit hyperactivity disorder (ADHD). Journal of Child Neurology 2011, 26(10), 1296–1302. ref-journal57 Frodl, T.; Stauber, J.; Schaaff, N.; et al. Amygdala reduction in patients with ADHD compared with major depression and healthy volunteers. Acta Psychiatrica Scandinavica 2010, 121(2), 111–118. ref-journal58 Ivanov, I.; Bansal, R.; Hao, X.; et al. Morphological abnormalities of the thalamus in youths with attention deficit hyperactivity disorder. American Journal of Psychiatry 2010, 167(4), 397–408. ref-journal59 Kobel, M.; Bechtel, N.; Specht, K.; et al. Structural and functional imaging approaches in attention deficit/hyperactivity disorder: does the temporal lobe play a key role? Psychiatry Research: Neuroimaging 2010, 183(3), 230–236. network Bassett, D.S.; Sporns, O. Network neuroscience. Nature Neuroscience 2017, 20(3), 353–364. ref-journal60 Cai, L.; Zeng, W.; et al. MM-GTUNets: Unified Multi-Modal Graph Deep Learning for Brain Disorders Prediction. arXiv preprint arXiv:2406.14455v1 2024.
http://arxiv.org/abs/2407.02724v1
20240703003332
Building a Better B-Dot: Fast Detumbling with Non-Monotonic Lyapunov Functions
[ "Jacob B. Willis", "Paulo R. M. Fisch", "Aleksei Seletskiy", "Zachary Manchester" ]
eess.SY
[ "eess.SY", "cs.SY" ]
Building a Better B-Dot: Fast Detumbling with Non-Monotonic Lyapunov Functions Jacob B. Willis Robotics Institute Carnegie Mellon University 5000 Forbes Ave. Pittsburgh, PA 15213 jbwillis@cmu.edu Paulo R.M. Fisch Robotics Institute Carnegie Mellon University 5000 Forbes Ave. Pittsburgh, PA 15213 pfisch@andrew.cmu.edu Aleksei Seletskiy Robotics Institute Carnegie Mellon University 5000 Forbes Ave. Pittsburgh, PA 15213 aseletsk@andrew.cmu.edu Zachary Manchester Robotics Institute Carnegie Mellon University 5000 Forbes Ave. Pittsburgh, PA 15213 zacm@cmu.edu July 8, 2024 ===================================================================================================================================================================================== § ABSTRACT Spacecraft detumbling with magnetic torque coils is an inherently underactuated control problem. Contemporary and classical magnetorquer detumbling methods do not adequately consider this underactuation, and suffer from poor performance as a result. These controllers can get stuck on an uncontrollable manifold, resulting in long detumbling times and high power consumption. This work presents a novel detumble controller based on a non-monotonic Lyapunov function that predicts the future magnetic field along the satellite's orbit and avoids uncontrollable configurations. In comparison to other controllers in the literature, our controller detumbles a satellite in significantly less time while also converging to lower overall angular momentum. We provide a derivation and proof of convergence for our controller as well as Monte-Carlo simulation results demonstrating its performance in representative use cases. § INTRODUCTION After a spacecraft is deployed on orbit, a common first phase of operation is detumbling. 
In this phase, the angular velocity of the satellite is reduced from tens of degrees per second to rates that are tolerated by the satellite mission or managed by other onboard control systems. To perform detumbling, the spacecraft must reduce its total angular momentum by one to two orders of magnitude. This is only accomplished by generating external torques, either through expending propellant, or, in low-Earth orbit, with magnetic torque coils (magnetorquers) that exchange momentum with the Earth’s magnetic field. Magnetorquers are appealing because they do not require expending propellant. However, magnetorquers are underactuated: at any instant in time, they only generate torque in a two-dimensional subspace perpendicular to the Earth’s local magnetic-field vector. To prove convergence, most common magnetorquer detumbling controllers, including the classic B-dot and B-cross controllers <cit.>, rely on the motion of the satellite through the Earth’s magnetic field, which makes the magnetic field vector time varying in the orbit frame. While instantaneously underactuated, over the spacecraft's full orbit, full controllability is achieved. In this work, we demonstrate that these classic controllers, and their modern variants, can take many hours to detumble a spacecraft, despite it being possible to detumble much faster and with much less total control effort. For this reason, these controllers are inefficient, wasting precious energy and time during the critical early stages of satellite operation. To mitigate the inefficiencies of the classic magnetic detumbling controllers, we present a novel controller that uses a prediction of the future magnetic field vector, dramatically improving convergence time. The magnetic field prediction is done using only gyroscope and magnetometer sensor measurements; no inertial attitude or position reference is required. The controller is based on a discrete-time non-monotonic Lyapunov function <cit.>, which is able to temporarily increase the angular momentum of the spacecraft, allowing the system to move away from control singularities. We compare our controller against controllers in the literature using a Monte-Carlo simulation of 100 randomly sampled initial conditions. Our controller detumbles a satellite in less time than other controllers; it also converges to lower overall angular rates. Our contributions include: * A unified treatment of the numerous magnetorquer detumbling controllers that exist in the literature * A derivation and analytic proof of convergence of our predictive detumbling controller based on a discrete-time non-monotonic Lyapunov function * Monte-Carlo simulation experiments showing the performance of our predictive controller in comparison to five other controllers in the literature The paper proceeds as follows: In <ref> we discuss prior work on magnetorquer detumbling. In <ref> we present the attitude dynamics of a spacecraft and provide a unified derivation of the five detumbling controllers that we compare ours to. In <Ref> we provide a brief introduction to non-monotonic Lyapunov functions and derive our discrete-time non-monotonic detumbling controller. We present our Monte-Carlo simulation results in <ref>, and we summarize our conclusions and suggest directions for future work in <ref>. § RELATED WORK Magnetorquer detumbling has a long history dating back to the earliest days of space exploration <cit.>. In general, magnetorquer detumbling controllers come in two categories with many variants: B-dot and B-cross. 
B-dot controllers assume only magnetometer measurements are available. B-cross <cit.> controllers assume both magnetometer and gyroscope measurements are available onboard the spacecraft. As we will show in <ref>, these two categories are related by a simple approximation, and the many variations in the literature reduce to a variety of gains and saturation methods for handling control limits <cit.>. In addition to the magnetic detumbling methods discussed here, there has been significant work on full magnetic attitude control, including the work by Wisniewski <cit.> which models the magnetic field as a periodic system, and more recent work that utilizes numerical optimal control to perform three-axis magnetorquer attitude control <cit.>. Ovchinnikov presents a recent survey of both magnetorquer detumbling and attitude control <cit.>. § BACKGROUND §.§ Attitude Dynamics Let h ∈ℝ^3 be the angular momentum of a spacecraft, B ∈ℝ^3 be the Earth's local geomagnetic field vector at the spacecraft's location, and μ∈ℝ^3 be the dipole moment produced by the spacecraft's magnetorquers. With a magnetic dipole moment as input, a spacecraft's angular momentum dynamics expressed in an inertial reference frame are ḣ = τ = -B ×μ = -B̂μ where τ∈ℝ^3 is the torque on the spacecraft and B̂ is the skew-symmetric cross product matrix, B̂ = [ 0 -B_z B_y; B_z 0 -B_x; -B_y B_x 0 ]. With inertia matrix J ∈ℝ^3 × 3, the inertial angular momentum relates to the inertial angular velocity ω as h = J ω. We make use of the time derivative of the geomagnetic field vector with respect to the spacecraft's body frame, Ḃ^ℬ, and with respect to the inertial frame, Ḃ^𝒩. Both are expressed in body-fixed coordinates. The relationship between these quantities is Ḃ^𝒩 = ω̂ B + Ḃ^ℬ. §.§ Detumbling Control Many of the detumbling control laws found in the literature are variations of a single control law that is derived from the Lyapunov function V = 1/2 h^Th . Taking the time derivative, V̇ = h^T ḣ = -h^TB̂μ. We desire to find μ that minimizes V̇ at every instant in time. To do so, we formulate this as an optimization problem with bound constraints that limit the maximum dipole moment the satellite can produce: <b> μV̇ = -h^TB̂μ -μ_max≤μ̅ ≤ μ_max. opt:control_lyapunov_continuous This optimization problem is a linear program with a closed-form solution in the form of a bang-bang control law: μ = μ_max(ĥ B) , where the function is interpreted element-wise. Bang-bang controllers like <ref> are prone to chattering in the presence of noise, so we replace the hard saturation with a soft saturation, μ = μ_maxtanh(k ĥ B), where the tanh function is, again, interpreted element-wise and k is a tuning parameter. We refer to this control law as the Lyapunov momentum control law. The control law in <ref> is closely related to the classical B-dot and B-cross control laws <cit.>. The B-cross law replaces h with ω and relaxes the bang-bang saturation to a linear feedback law with gain k, μ = k ω̂ B. Avanzini and Giulietti <cit.> propose selecting the B-cross controller gain, k = 2 1/√(a^3 / GM) (1 + sin(ξ_m)) λ_min, where a is the orbit semi-major axis, GM is the Earth's gravitational parameter, ξ_m is the orbit's geomagnetic inclination and λ_min is the minimum eigenvalue of the spacecraft's inertia matrix J. The B-dot law <cit.> modifies <ref> by making the assumption that Ḃ^𝒩 = 0 so ω̂B ≈ -Ḃ^ℬ, resulting in μ = -k Ḃ^ℬ. 
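The three closed-form laws above translate directly into code. The sketch below is a minimal NumPy illustration of the Lyapunov momentum law (with tanh soft saturation), the B-cross law, and the B-dot law; the gain k, dipole limit mu_max, and the finite-differenced body-frame field derivative are inputs the caller must supply, and the function names are our own, not the authors' released implementation.

```python
import numpy as np

def lyapunov_momentum_law(h, B, k, mu_max):
    """mu = mu_max * tanh(k * (h x B)), with element-wise tanh soft saturation."""
    return mu_max * np.tanh(k * np.cross(h, B))

def b_cross_law(omega, B, k):
    """mu = k * (omega x B)."""
    return k * np.cross(omega, B)

def b_dot_law(B_dot_body, k):
    """mu = -k * dB/dt in the body frame (e.g., finite-differenced magnetometer data)."""
    return -k * B_dot_body
```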
The B-dot law has the advantage that Ḃ^ℬ is readily estimated from a magnetometer only, so no gyroscope measurements are required for its implementation. However, as ω→ 0, the approximation in <ref> becomes less accurate and the B-dot law tends to converge to larger final momentum. Desouky <cit.> presented two control laws: Their “time-optimal” control law is equivalent to <ref> and their “B-dot Variant” control law inverts <ref> with a regularizing term to solve for ω and substitutes the result into <ref> to obtain the control law, μ = -k B̂ (ϵ I + B̂)^-1Ḃ^ℬ, where 0 < ϵ≪ 1, and ϵ = 1× 10^-6 in practice. Invernizzi and Lovera <cit.> use a projection-based method to compute a time-varying gain for an unsaturated version of <ref>, μ = - k/B^2B̂ h, k = k_1exp(-k_2 | B^T hB (h + ϵ)|). All of the previously discussed methods are closely related and suffer from the same fundamental limitation: when the controlled variable (h or ω) and B are aligned their cross product is zero and the commanded control input goes to zero. The controllers are convergent on long time scales because B is time varying in the orbital frame, so eventually the cross product will no longer be zero. However, they are prone to tracking this uncontrollable subspace, effectively becoming stuck and taking significantly longer to detumble. Consider the B-cross controller in <ref>. If we decompose ω = ω^∥ + ω^⊥, where ω^∥ and ω^⊥ are the components of ω parallel and perpendicular to B, μ = k(ω^∥ + ω^⊥) × B = k ω̂^⊥ B , and ḣ = -B̂μ = -k B̂ (ω̂^⊥ B) = - λω^⊥ , for some λ > 0. Therefore, h is only reduced in the ω^⊥ direction with no change in the ω^∥ direction. <Ref> shows this effect in two simulation runs of the B-cross controller with the same initial conditions and two different gains. The smaller gain was chosen based on <ref> and the larger gain is a factor of 100 larger. At the beginning, the smaller-gain controller decreases h at a slower rate, but ultimately converges sooner because the larger gain causes the controller to get stuck on the uncontrollable subspace where ω and B are parallel. The gain sweep results in <ref> show a similar phenomena occurring with the other controllers. While this phenomenon can be partially mitigated with appropriate tuning, there are no guarantees that the controller will converge without becoming stuck. The result is a longer-than-necessary convergence time and higher-than-necessary energy expenditure. To address the shortcomings of existing controllers, we relax one of their basic constraints: we derive a controller that does not decrease the angular momentum of the spacecraft monotonically, but still maintains a Lyapunov convergence guarantee on average. Intuitively, this allows the controller to trade off increasing the angular momentum instantaneously in exchange for avoiding the uncontrollable subspace, making the angular momentum more controllable in the future. § NON-MONOTONIC CONTROLLER DERIVATION We begin by introducing discrete-time monotonic Lyapunov analysis, then extend it to non-monotonic Lyapunov analysis. The discrete-time dynamical system x_k+1 = f(x_k) with x ∈ℝ^n has a globally asymptotically stable (GAS) equilibrium at x = 0 if there exists a Lyapunov function V(x): ℝ^n →ℝ such that V(x) > 0 ∀ x ≠ 0 V(0) = 0 V_k+1 < V_k ∀ k where we use the notation V_k = V(x_k). It is well known that there is no general method of finding a Lyapunov function that satisfies <ref>, even if the system is GAS. 
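Before continuing with the derivation, the short numerical sketch below illustrates the stalling argument made above for the B-cross law: the resulting torque has no component along B, so the momentum parallel to the field is left untouched. The variable names and example numbers are arbitrary illustrations, not values from the paper.

```python
import numpy as np

def b_cross_momentum_change(omega, B, k):
    """With mu = k * (omega x B), the torque tau = mu x B = -k*|B|^2 * omega_perp,
    i.e., only the component of omega perpendicular to B is damped."""
    b = B / np.linalg.norm(B)
    omega_par = np.dot(omega, b) * b   # uncontrollable component along B
    omega_perp = omega - omega_par
    mu = k * np.cross(omega, B)
    h_dot = np.cross(mu, B)            # applied torque
    return h_dot, omega_par, omega_perp

# quick check with arbitrary numbers: no torque is produced along B
omega = np.array([0.05, -0.02, 0.08]); B = np.array([1.0e-5, 2.0e-5, -3.0e-5]); k = 1.0e4
h_dot, w_par, w_perp = b_cross_momentum_change(omega, B, k)
assert abs(np.dot(h_dot, B)) < 1e-12
```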
Ahmadi and Parrilo suggest that the monotonic decrease condition in <ref> may be too restrictive, and present several alternative stability theorems that only require V to decrease on average <cit.>. We rely on Theorem 2.1 from their work. It modifies the conditions in <ref> so that <ref> is GAS at x = 0 if there exists a scalar α≥ 0 and a Lyapunov function V:ℝ^n →ℝ such that V(x) > 0 ∀ x ≠ 0 V(0) = 0 α (V_k+2 - V_k) + (V_k+1 - V_k) < 0 ∀ k. The condition in <ref> relaxes <ref> to allow V_k to decrease on average between two timesteps. §.§ Non-Monotonic Detumbling To derive the non-monotonic detumbling controller, we begin with the discrete-time Lyapunov function: V_k = 1/2 h_k^T h_k. This trivially satisfies <ref>, so it remains to design the control input μ such that the non-monotonic Lyapunov condition Δ V = α (V_k+2 - V_k) + (V_k+1 - V_k) < 0 from <ref> is satisfied for α≥ 0. After discretizing the attitude dynamics in time with Euler integration and expanding as shown in the [sec:DV_expansion]Appendix, we find that Δ V = 1/2μ̅^T (Q_1 + α Q_2) μ̅ - (q_1 + α q_2)^Tμ̅ where Q_1, Q_2 ∈ℝ^6× 6 are symmetric positive semi-definite matrices, μ̅∈ℝ^6 = [μ_k^T, μ_k+1^T]^T is the vector of control inputs at k and k+1, and q_1, q_2 ∈ℝ^6. This means that Δ V is convex, and, as we will see in the following, its minimum is less than zero. So, it is possible to find μ̅ such that Δ V < 0, and the non-monotonic Lyapunov conditions of <ref> are satisfied. We wish to find a control law for μ̅ such that Δ V is minimized: <b> μ̅ ΔV = 1/2μ̅^T (Q_1 + αQ_2) μ̅ - (q_1 + αq_2)^Tμ̅ -μ_max≤μ̅ ≤ μ_max opt:minDV. Since Q_1, and Q_2 are positive semi-definite, Δ V is not strictly convex and <ref> has multiple minima. We add 1/2βμ̅^T μ̅ with 1 ≫β > 0 as a regularizing term to make the objective strictly convex. The result is a convex quadratic program that is reliably and quickly solved with a numerical solver. Alternatively, we can make the same simplification as in <ref> and solve the unconstrained minimization problem, enforcing a soft saturation constraint on the result. The optimization is then <b> μ̅ F = 1/2 βμ̅^T μ̅ + 1/2μ̅^T (Q_1 + αQ_2) μ̅ - (q_1 + αq_2)^Tμ̅ opt:minF. We find the analytic solution by taking the gradient of F with respect to μ̅ and setting it to zero. The gradient of F is ∇ F = βμ̅ + (Q_1 + α Q_2) μ̅ - (q_1 + α q_2). Setting to zero and solving for μ̅ gives our control law, μ̅^* = (β I + Q_1 + α Q_2)^-1 (q_1 + α q_2). We now show that this control law satisfies the nonmonotonic Lyapunov decrease condition. Plugging μ̅^* into F and recalling that (I+Q_1 + Q_2) is symmetric, we have F^* = -1/2 (q_1 + α q_2)^T (β I + Q_1 + α Q_2)^-1 (q_1 + α q_2). Since β I + Q_1 + α Q_2 is positive definite, (β I + Q_1 + α Q_2)^-1 is also positive definite, and F^* < 0 for all α, β > 0. The regularizing term in F is always positive in μ̅, so we can conclude that Δ V < 0, which satisfies the nonmonotonic Lyapunov decrease condition in <ref>. §.§ Causal Implementation Examining <ref>, we see that the Q_1 and Q_2 matrices rely on knowledge of B_k+1. This is not causal. However, B_k+1 can be predicted using knowledge of the satellite's orbit and a model of the geomagnetic field. Detumbling is often executed during early operations of a satellite, so orbit knowledge and a computationally expensive geomagnetic field model may not be available. An alternative is to approximate B_k+1 as B_k+1≈ B_k + Δ t Ḃ_k^𝒩. We cannot directly measure Ḃ^𝒩. 
However, using <ref>, it can be estimated from multiple magnetometer and gyro measurements. §.§ Complete Controller Bringing together the development from the last two sections, the discrete non-monotonic controller is given in <ref>. The input B_k+1 is computed using the approximation in <ref>. On <ref>, we normalize the values of B_* to avoid numerical issues and ensure consistency of performance across the wide range of geomagnetic field magnitudes a spacecraft will experience. alg:bar_balg:q_2 set up the problem components and follow from the derivation in the [sec:DV_expansion]Appendix. We compute μ̅ on <ref> by solving a linear system. Finally, on <ref> we perform a soft saturation of the computed control output and rescale it to satisfy the satellite's control limits. § SIMULATION EXPERIMENTS All simulations are performed in a 12-degree-of-freedom orbital-and-attitude-dynamics simulation. All code is available on GitHub[ https://github.com/RoboticExplorationLab/non-monotonic-detumbling/github.com/RoboticExplorationLab/non-monotonic-detumbling]. The simulation environment relies on the open-source SatelliteDynamics.jl[https://sisl.github.io/SatelliteDynamics.jl/latest/sisl.github.io/SatelliteDynamics.jl] orbital dynamics package and includes perturbations due to J2 and atmospheric drag. The attitude dynamics include orbit-coupled drag torques. To accurately model the geomagnetic field, we use the International Geomagnetic Reference Field (IGRF) <cit.>. The spacecraft properties used for the simulations are given in <ref>; they reflect the properties for a 1.5U CubeSat with printed circuit board magnetorquers embedded in the solar panels. Noisy sensor measurements and a randomly initialized constant gyro bias are also included in the simulation; the noise parameters are representative of low-cost micro-electromechanical (MEMS) gyro and magnetometer hardware. §.§ Gain Sweep Study Each of the controllers is sensitive to its tuning parameter k. To provide meaningful comparisons between controllers, each controller needs to be tuned to perform in the best possible manner. To do so, we simulate the performance impact of each controller's gain, sweeping it over several orders of magnitude. The results of this study are shown in <ref>. The solid green line in <ref> is the gain that was used for the Monte-Carlo simulation experiment; this gain was chosen as a tradeoff between fast convergence and avoiding high gains that lead to the controller getting stuck in their uncontrollable subspace. <Ref> shows the final momentum of each of the trajectories from <ref>, plotted against their corresponding gains. This allows us to more clearly see the performance of the controllers when under- and over-tuned. The Lyapunov momentum, B-dot, projection-based, and B-cross controllers fail to converge when the gain is too low or too high. The convergence failure with high gains is another example of the controllers getting stuck on their uncontrollable manifold as discussed in <ref>. This suggests that tuning these controllers to converge consistently could be challenging. In contrast, the B-dot variant shows consistent convergence once the gain is at or above 4.00× 10^-1 but it converges to the highest final momentum of all of the controllers when well tuned. Our discrete non-monotonic controller remains well under the 1% threshold for all investigated gains, suggesting it is more robust to poor tuning than the other controllers. 
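For concreteness, the sketch below assembles one control step of the discrete non-monotonic controller from the quantities defined in the main text and the Appendix: the stacked matrix B̄, the selector Z (taken here as the 6×6 block selector for the first input), Q_1, Q_2, q_1, q_2, the regularized linear solve for μ̄, and the field prediction B_{k+1} ≈ B_k + Δt(ω × B_k + Ḃ^ℬ). We follow the main-text convention in which α multiplies Q_2 and q_2 once, and we assume a tanh form for the final soft saturation; the default parameter values, function names, and saturation scaling are our assumptions, not the authors' released implementation.

```python
import numpy as np

def skew(v):
    """Cross-product (hat) matrix such that skew(a) @ b == np.cross(a, b)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def predict_next_field(B_body, B_dot_body, omega, dt):
    """B_{k+1} ~ B_k + dt * (omega x B_k + dB/dt|_body), the causal approximation."""
    return B_body + dt * (np.cross(omega, B_body) + B_dot_body)

def nonmonotonic_dipole(h0, B0, B1, dt, alpha=1.0, beta=1e-4, mu_max=0.1):
    """One control step of the discrete non-monotonic detumbler (illustrative sketch)."""
    b0 = B0 / np.linalg.norm(B0)                 # normalized field vectors
    b1 = B1 / np.linalg.norm(B1)
    Bbar = np.vstack([skew(b0).T, skew(b1).T])   # 6x3 stacked cross-product matrices
    Z = np.zeros((6, 6)); Z[:3, :3] = np.eye(3)  # selects the first control input
    BBt = Bbar @ Bbar.T
    Q1 = dt**2 * Z @ BBt @ Z
    Q2 = dt**2 * BBt
    q1 = dt * Z @ Bbar @ h0
    q2 = dt * Bbar @ h0
    mu_bar = np.linalg.solve(beta * np.eye(6) + Q1 + alpha * Q2, q1 + alpha * q2)
    return mu_max * np.tanh(mu_bar[:3] / mu_max)  # soft-saturated dipole for step k
```

In use, B1 would come from predict_next_field applied to the latest magnetometer and gyro estimates, so no inertial attitude or position reference is required.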
§.§ Monte-Carlo Simulation Experiments For each controller, the Monte-Carlo simulation starts from each of 100 randomly sampled initial states. Initializing each controller with the same random initial states allows for fair comparison. The ranges and values of the Monte-Carlo initial conditions are sampled from a uniform distribution with minimum and maximum values given by the ranges in <ref>. They represent random circular orbits at a fixed altitude and a random vehicle axis of rotation with a fixed initial angular velocity magnitude. To avoid the lack of controllability all magnetorquer control systems experience at near-equatorial inclinations <cit.>, we restricted the orbital inclinations to [20, 160) degrees. Since most satellites in low-Earth orbit operate at high inclinations, we believe these results translate well to real orbital configurations. The controller parameters and corresponding equation references are shown in <ref>. The simulation results are shown in <ref>. <Ref> shows a histogram of the time it takes for the spacecraft momentum to be reduced to 1% of its initial value. Many of the simulation runs of the common control methods do not converge to this 1% threshold within two hours. However, our discrete non-monotonic controller converges within the two-hour simulation period for all initial conditions tested, with the majority of the initial conditions converging in one hour or less. The reason for this can be seen in <ref>; it shows the time history of the momentum magnitude for the 100 Monte-Carlo simulation runs, as well as an average time history of these runs. The discrete non-monotonic controller exhibits significantly different behavior from the five other controllers, increasing in momentum one or more times before finally converging to zero. <Ref> shows the final angular momentum magnitudes. The discrete non-monotonic controller has the smallest maximum final angular momentum and the smallest median final angular momentum. Since the controllers were all stopped at two hours regardless of convergence, the median provides a better point of comparison than the maximum. The discrete non-monotonic, Lyapunov momentum, and projection-based controllers have median momentum less than 0.1 μNms, while the B-dot, B-dot variant, and B-cross controllers all result in a median final angular momentum that is more than 10 times larger. The B-dot and B-dot variant controllers also have a minimum final angular momentum that is more than 10 times that of the other controllers. This suggests that angular velocity information from a gyroscope is useful in achieving a smaller final angular momentum. § CONCLUSIONS The many variants of B-dot and B-cross controllers in the literature differ primarily in how the controller gains and saturation are selected. Their performance is similar, with each having the potential to get stuck in the uncontrollable subspace where the controlled state (angular momentum or angular velocity) aligns with the magnetic field vector. Recent magnetorquer detumbling controllers, such as the projection-based controller <cit.>, improve on this failure mode but still exhibit similar worst-case convergence. The novel non-monotonic Lyapunov magnetorquer detumbling control law we have presented is a more significant departure from the classical B-dot and B-cross control laws: our control law implicitly predicts the future controllability of the system and avoids putting the satellite in an uncontrollable state. 
In our Monte-Carlo simulations, it achieves detumbling times that are more than twice as fast as the other controllers while operating with realistic sensor noise and gyro bias. In addition, our control law is straightforward to tune and less sensitive to tuning than other control laws. To put our novel control law into practical use, a high-quality estimate of the time-derivative of the geomagnetic field is needed. Future work will focus on generating this estimate and analyzing the full closed-loop performance of the magnetic field estimator and control law in combination. From <ref>, we have Δ V = α (V_k+2 - V_k) + (V_k+1 - V_k) < 0 , and V_k = 1/2 h_k^T h_k. Through the rest of this section we drop the subscript k and use [·]_0 = [·]_k, [·]_1 = [·]_k+1, [·]_2 = [·]_k+2 for clarity. We approximate the discrete time dynamics in <ref> using Euler integration, so that h_1 ≈ h_0 + Δ t ḣ_0 = h_0 + Δ t τ_0 = h_0 + Δ t (μ_0 × B_0) and h_2 ≈ h_1 + Δ t ḣ_1 = h_0 + Δ t τ_0 + Δ t τ_1 = h_0 + Δ t (μ_0 × B_0) + Δ t (μ_1× B_1). Substituting, V_1 = 1/2 h_1^T h_1 = 1/2( h_0^T h_0 + 2Δ t h_0^Tτ_0 + Δ t^2 τ_0^T τ_0 ) and V_2 = 1/2 h_2^T h_2 = 1/2( h_0^T h_0 + 2Δ t h_0^Tτ_0 + 2Δ t h_0^Tτ_1 + Δ t^2 τ_0^T τ_0 + 2Δ t^2 τ_0^T τ_1 + Δ t^2 τ_1^T τ_1) so V_1 - V_0 = 1/2( h_0^T h_0 + 2Δ t h_0^Tτ_0 + Δ t^2 τ_0^T τ_0 ) - 1/2 h_0^T h_0 = Δ t h_0^Tτ_0 + 1/2Δ t^2 τ_0^T τ_0 = -Δ t h_0^TB̂_0 μ_0 + 1/2Δ t^2 μ_0^T B̂_0^T B̂_0 μ_0 and V_2 - V_0 = 1/2( h_0^T h_0 + 2Δ t h_0^Tτ_0 + 2Δ t h_0^Tτ_1 + Δ t^2 τ_0^T τ_0 + 2Δ t^2 τ_0^T τ_1 + Δ t^2 τ_1^T τ_1) - 1/2 h_0^T h_0 = Δ t h_0^Tτ_0 + Δ t h_0^Tτ_1 + 1/2Δ t^2 τ_0^T τ_0 + Δ t^2 τ_0^T τ_1 + 1/2Δ t^2 τ_1^T τ_1 = -Δ t h_0^T B̂_0 μ_0 - Δ t h_0^T B̂_1μ_1 + 1/2Δ t^2 μ_0^T B̂_0^T B̂_0 μ_0 + Δ t^2 μ_0^T B̂_0^T B̂_1μ_1 + 1/2Δ t^2 μ_1^T B̂_1^T B̂_1μ_1 where we used the identities τ = μ× B = - B ×μ = - B̂μ τ^T τ = (μ× B)^T (μ× B) = (- B ×μ)^T (-B ×μ) = (- B̂μ)^T (- B̂μ) = μ^T B̂^T B̂μ. Bringing these terms together, we have Δ V ≜α (V_2 - V_0) + (V_1 - V_0) =α(-Δ t h_0^T B̂_0 μ_0 - Δ t h_0^T B̂_1μ_1 + 1/2Δ t^2 μ_0^T B̂_0^T B̂_0 μ_0 + Δ t^2 μ_0^T B̂_0^T B̂_1μ_1 + 1/2Δ t^2 μ_1^T B̂_1^T B̂_1μ_1) -Δ t h_0^TB̂_0 μ_0 + 1/2Δ t^2 μ_0^T B̂_0^T B̂_0 μ_0 = α1/2Δ t^2 μ̅^T B̅B̅^T μ̅ - αΔ t h_0^T B̅^T μ̅ + 1/2Δ t^2 μ̅^T Z B̅B̅^T Z μ̅ - Δ t h_0^T B̅^T Z μ̅ = 1/2μ̅^T Q_1 μ̅ + 1/2αμ̅^T Q_2 μ̅ - q_1^Tμ̅ - α q_2^Tμ̅ = 1/2μ̅^T (Q_1 + α Q_2) μ̅ - (q_1 + α q_2)^Tμ̅ where we defined μ̅ = [ μ_0; μ_1 ]∈ℝ^6, B̅ = [ B̂_0^T; B̂_1^T ]∈ℝ^6×3 Z = [ I 0; 0 0 ]∈ℝ^3 × 3 Q_1 = Δ t^2 Z B̅B̅^T Z Q_2 = αΔ t^2 B̅B̅^T q_1 = Δ t (h_0^T B̅^T Z)^T q_2 = αΔ t (h_0^T B̅^T)^T. Since B̅B̅^T is symmetric and rank(B̅B̅^T) = rank(B̅) ≤ 3 < 6, Q_1 and Q_2 are symmetric and positive semi-definite. This work was supported by the Department of Defense National Defense Science and Engineering Graduate Fellowship (NDSEG) and by NASA under agreement 80NSSC21K0446. The authors would also like to thank Davide Invernizzi for his correspondence and feedback. It significantly improved this paper. IEEEtran Jacob Willisfigs/headshots/jacob_headshot.jpg is a PhD candidate in the Robotics Institute at Carnegie Mellon University. He is an NDSEG Fellow and received a BS and MS in Electrical and Computer Engineering from Brigham Young University in 2019 and 2021. His research interests include applications of numerical optimal control for autonomous aerospace systems, with recent work on attitude and formation control of small satellites. Paulo Fischfigs/headshots/paulo_headshot.jpg is a PhD candidate in the Robotics Institute at Carnegie Mellon University. 
He has previous experience working at the German Aerospace Center (DLR) and received his Mechanical Engineering degree from the University of São Paulo in 2020. His interests include optimal state estimation for space systems and optimal control, with recent work on satellite orbit determination. Aleksei Seletskiy is a junior in Computer Science at Carnegie Mellon University. His research interests include flight software, state estimation, and optimal control for satellite systems. Zachary Manchester is an assistant professor in the Robotics Institute at Carnegie Mellon University and founder of the Robotic Exploration Lab. He received a PhD in aerospace engineering in 2015 and a BS in applied physics in 2009, both from Cornell University. His research interests include control and optimization with application to aerospace and robotic systems with challenging nonlinear dynamics.
http://arxiv.org/abs/2407.01931v1
20240702035620
Probabilistic 3D Correspondence Prediction from Sparse Unsegmented Images
[ "Krithika Iyer", "Shireen Y. Elhabian" ]
cs.CV
[ "cs.CV" ]
Iyer and Elhabian Scientific Computing and Imaging Institute, University of Utah, UT, USA Kahlert School of Computing, University of Utah, UT, USA krithika.iyer@utah.edu shireen@sci.utah.edu Probabilistic 3D Correspondence Prediction from Sparse Unsegmented Images Krithika Iyer1,2 Shireen Y. Elhabian1,2 July 8, 2024 ========================================================================= § ABSTRACT The study of physiology demonstrates that the form (shape) of anatomical structures dictates their functions, and analyzing the form of anatomies plays a crucial role in clinical research. Statistical shape modeling (SSM) is a widely used tool for quantitative analysis of forms of anatomies, aiding in characterizing and identifying differences within a population of subjects. Despite its utility, the conventional SSM construction pipeline is often complex and time-consuming. Additionally, reliance on linearity assumptions further limits the model from capturing clinically relevant variations. Recent advancements in deep learning solutions enable the direct inference of SSM from unsegmented medical images, streamlining the process and improving accessibility. However, the new methods of SSM from images do not adequately account for situations where the imaging data quality is poor or where only sparse information is available. Moreover, quantifying aleatoric uncertainty, which represents inherent data variability, is crucial in deploying deep learning for clinical tasks to ensure reliable model predictions and robust decision-making, especially in challenging imaging conditions. Therefore, we propose , a unified model that predicts 3D correspondences from sparse imaging data. It leverages a teacher network to regularize feature learning and quantifies data-dependent aleatoric uncertainty by adapting the network to predict intrinsic input variances. Experiments on the LGE MRI left atrium dataset and Abdomen CT-1K liver datasets demonstrate that our technique enhances the accuracy and robustness of sparse image-driven SSM. § INTRODUCTION Understanding morphological variations influenced by pathology, gender, and age is crucial for personalized treatment strategies in precision medicine, facilitating fast diagnosis and treatment <cit.>. Statistical Shape Modeling (SSM) is pivotal in medical image analysis, enabling the identification of morphological variations and quantitative assessment of geometric variability across populations. SSM applications include lesion screening, surgical planning, implant design <cit.>, and studying disease progression <cit.>. SSM parameterizes shapes into numerical vectors for statistical analysis. Methods for shape parameterization include implicit representations (e.g., deformation fields <cit.>, level set methods <cit.>) and explicit representations such as an ordered set of landmarks or correspondence points (aka point distribution models, PDMs), which describe anatomically equivalent points across samples. PDMs are favored for their ease of interpretation, computational efficiency, and noise tolerance. Correspondences in SSM can be established manually or automatically by minimizing objective functions <cit.>. Traditional methods are often complex, computationally demanding, and require anatomical expertise, making them inadequate for large datasets. Deep learning models <cit.> simplify the process by training directly from unsegmented images, but they still depend on computationally derived PDMs for supervision, which can bias and limit the models. 
High-quality medical images are crucial for accurate shape models, but capturing dense, high-resolution images is challenging and costly. Sparse imaging, with limited data points or slices, arises from acquisition time constraints, patient comfort, radiation dose considerations <cit.>, or technical limitations <cit.>. Enhancing image resolution through post-acquisition resampling can reduce diagnostic accuracy, making it essential to develop models that extract meaningful information from sparse imaging content. Additionally, processing dense, high-resolution imaging data requires significant computational resources and time. Sparse imaging reduces data size and complexity, facilitating efficient shape modeling <cit.> and benefiting real-time applications like intraoperative guidance or rapid diagnostic assessments. Uncertainty quantification is crucial in clinical applications to avoid overconfident and unreliable estimates. Aleatoric uncertainty (data dependent), inherent in sparse imaging due to noise and variability, must be accounted for in SSM methods to produce robust and reliable shape models. Quantifying uncertainty informs clinicians about the potential risks and limitations of the model's outcomes. To address these challenges, we propose the Sparse Image-base probabilistic Correspondence Network () for inferring 3D correspondences from sparse, unsegmented medical images with incorporated aleatoric uncertainty estimates. Our model leverages the student-teacher framework from SCorP <cit.> to learn a shape prior, which regularizes correspondence prediction and ensures anatomical accuracy in reconstructed shapes. Notably, our approach does not require ground PDMs for supervision, effectively managing variability and noise in sparse imaging data. § RELATED WORK Various methods are used in shape analysis to establish correspondences. Non-optimized methods involve manual annotation and warping landmarks using registration techniques <cit.>, producing inconsistent results for larger populations. Parametric methods like SPHARM-PDM <cit.> use fixed geometrical bases for pairwise correspondences but struggle with complex shapes. Group-wise non-parametric approaches, such as particle-based shape modeling (PSM) <cit.> and minimum description length (MDL) <cit.>, consider cohort variability and optimize data-driven objectives. Deep learning models simplify conventional SSM pipelines by performing supervised correspondence prediction directly from unsegmented images (TL-DeepSSM and DeepSSM <cit.>). In clinical applications, uncertainty quantification is essential for evaluating tool reliability. Recent advancements include aleatoric (data-dependent) and epistemic (model-dependent) uncertainty estimation. Aleatoric uncertainty is modeled by a probability distribution over the outputs, while epistemic uncertainty is captured using Bayesian neural networks <cit.> or ensemble methods. Uncertain DeepSSM <cit.> includes both types of uncertainty estimation but, it relies on a shape prior in the form of a supervised latent encoding pre-computed using principal component analysis (PCA). Similarly, other models were proposed for probabilistic 2D surface reconstruction using PCA scores as a prior <cit.>, which was also extended to probabilistic 3D surface reconstruction from sparse 2D images <cit.>. Although these approaches provide shape segmentation with aleatoric uncertainty measures, they do not offer a shape representation readily usable for population-level statistical analysis. 
VIB-DeepSSM <cit.> relaxes the PCA assumption using a variational information bottleneck (VIB) <cit.> for latent encoding learning, improving aleatoric uncertainty estimation and generalization, but cannot fully measure epistemic uncertainty. The fully Bayesian BVIB-DeepSSM <cit.> addresses this by quantifying both uncertainties and predicting probabilistic shapes from images. However, BVIB-DeepSSM and VIB-DeepSSM continue to rely on an established PDM for supervision. Recent models like FlowSSM <cit.>, Point2SSM <cit.>, Mesh2SSM <cit.>, and SCorP <cit.> predict correspondences from various data modalities without requiring PDMs for supervision. In particular, SCorP <cit.> incorporates a shape prior learned from surface meshes in a student-teacher framework for regularizing feature learning from images without a supervised PDM loss, but these approaches lack uncertainty estimation. Existing methods for probabilistic correspondence prediction face limitations such as imposing a linear relationship between latent and output spaces and reliance on predefined PDMs for training. Additionally, sparse data is not utilized for building shape models. The proposed framework aims to directly predict correspondences and estimate uncertainty from sparse, unsegmented images without predefined PDMs during training. § BACKGROUND Our work builds upon the model SCorP <cit.>. This section provides an overview of SCorP, setting the stage for our proposed method in Section <ref>. SCorP Overview Consider a training dataset S = {S_1, S_2, …, S_N} comprising N aligned surface meshes, and their corresponding aligned volumetric images, I = {I_1, I_2, …, I_N}. Each surface mesh S_j = (V_j, E_j) consists of vertices V_j and edge connectivity E_j. The primary objective of SCorP is to establish a shape prior (teacher) by predicting a set of M correspondence points C_j^S = {𝐜_j(1), 𝐜_j(2), …, 𝐜_j(M)} with 𝐜_j(m)∈R^3, which accurately represent the anatomy described by surface mesh S_j. This shape prior is then used to guide the image encoder (student) in learning an image representation z^I_j conducive to predicting a corresponding set of points C_j^I = {𝐜_j(1), 𝐜_j(2), …, 𝐜_j(M)} directly from the associated image I_j. SCorP's model architecture is shown in Fig <ref>.A. § PROPOSED MODEL Our goal is to predict probabilistic 3D correspondences from sparse imaging. Specifically, we consider an input set of images I = {I_1, I_2, …, I_N}, where each image I_j consists of axial, coronal, and sagittal orthogonal slices (I_j = {I_j^AX, I_j^SG, I_j^CR}). To accommodate sparse imaging, we modify the student branch image encoder using three separate 2D CNNs to extract features from the three orthogonal slices. These features are concatenated and passed through an image feature aggregator network (fully connected layers) to predict a single latent vector z_j^I representing the entire sample. This modification ensures that we can adopt the student-teacher framework and use the same training strategy proposed by SCorP <cit.>. The modified student branch for is shown in Fig <ref>.B. Similar to SCorP, comprises a teacher network and a student network. The teacher network includes a surface autoencoder and an implicit field decoder.
The surface autoencoder learns a low-dimensional z_j^S, permutation invariant representation of each surface mesh using dynamic graph convolution with EdgeConv blocks <cit.>, while the IM-NET decoder <cit.> uses this latent representation z_j^S to predict a set of correspondence points C_j^S, ensuring consistency across the dataset by transforming a template point cloud to match each sample. The student network consists of the modified image encoder branch that learns a compact representation z_j^I for sparse image slices, capable of predicting a set of correspondence points C_j^I guided by the shape prior from the teacher network. Training occurs in three phases similar to SCorP: (a) surface branch training to develop the shape prior where surface autoencoder and implicit decoder are jointly trained to minimize loss ℒ_S = ∑_j=1^N [ℒ_CD(V_j,C_j^S) + αℒ_MSE(V_j,V̂_̂ĵ) ] where V̂_̂ĵ are the reconstructed vertex locations and α is the weighting parameter, (b) image branch embedding alignment to align image encoder features with those of the surface encoder with loss function ℒ_EA = 1/N∑_j=1^N[ | q_ϕ(z^S_j|S_j) - f_γ (z^I_j|I_j)|^2 ] and (c) image branch prediction refinement to improve correspondence prediction accuracy with loss combination of ℒ_PR + ℒ_EA where ℒ_PR =∑_j=1^N ℒ_L_2 CD(V_j,C_j^I) Uncertainty Estimation: To incorporate probabilistic correspondence prediction for aleatoric uncertainty estimation, we propose making the student branch image encoder probabilistic. The encoder, f_γ, (comprising 3D convolutional and densely connected layers for full images Fig <ref>.C and separate image slice encoder and image feature aggregator for sparse images Fig <ref>.B) maps the input image I_j to a Gaussian latent distribution: 𝒩(z^I|μ_z^I,logσ_z^I). Posterior samples z_j^I are acquired from this predicted latent distribution using the reparameterization trick to enable gradient calculation. This modification captures aleatoric uncertainty as the variance of the p(C_j^I|z^I) distribution, computed by sampling multiple latent encodings from 𝒩(z^I|μ_z^I,logσ_z^I) and passing them through the implicit decoder to get a sampled distribution of predictions. A Gaussian distribution is estimated from these samples: 𝒩(C_j^I|μ, logσ). The estimated σ captures the aleatoric uncertainty. § DATASET AND EVALUATION §.§ Datasets We selected the left atrium and liver datasets for our experiments due to their highly variable shapes. The left atrium (LA) dataset consists of 923 anonymized Late Gadolinium Enhancement (LGE) MRIs from distinct patients, manually segmented by cardiovascular medicine experts. Post-segmentation, the images were cropped around the region of interest. The AbdomenCT-1K liver dataset <cit.> includes 1132 3D CT scans and their corresponding liver segmentations. After visually assessing the quality of the images and segmentations, we selected 833 samples. These images were aligned and cropped around the region of interest. We randomly split both datasets into training, validation, and test sets as 80%/10%/10%. More details about the datasets, hyperparameters of the models, and training details are provided in the supplementary material. §.§ Metrics Chamfer Distance (CD): Measures the average bidirectional distance between points in two sets (V_j and C_j^I), assessing dissimilarity between them. 
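A minimal sketch of this metric, assuming the predicted correspondences and the mesh vertices are available as NumPy arrays (the function name and the unsquared-distance convention are illustrative choices, not taken from the paper's implementation):

import numpy as np

def chamfer_distance(p, q):
    # p: (N, 3) and q: (M, 3) point sets
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)  # (N, M) pairwise distances
    # average nearest-neighbor distance in both directions
    return d.min(axis=1).mean() + d.min(axis=0).mean()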
Point-to-Mesh Distance (P2M): Calculates the sum of point-to-mesh face distance and face-to-point distance for the predicted correspondences (C_j^I) and the mesh faces defined by vertices and edges (V_j, E_j). Surface-to-Surface (S2S) Distance: Measured between the original surface mesh and the generated mesh from predicted correspondences. To obtain the reconstructed mesh, correspondences are mapped to the mean shape, and the warp between the points is applied to its mesh. SSM Metrics used to evaluate correspondence <cit.>: Compactness: Represents the training data distribution with minimal parameters, measured by the number of PCA modes needed to capture 95% of the variation in correspondence points. Generalization: Evaluates how well the SSM extrapolates from training to unseen examples, gauged by the reconstruction error (L2) between held-out and training SSM-reconstructed correspondence points. Specificity: Measures the SSM's ability to generate valid instances of the trained shape class, quantified by the average distance between sampled SSM correspondences and the nearest existing training correspondences. Aleatoric Uncertainty: Reflects inherent data noise and variability, expected to correlate with P2M error (high Pearson r). Aids in out-of-distribution detection, indicating model reliability. § RESULTS We compare five variants using different input types: full volume (like SCorP), sparse images (orthogonal slices: axial, coronal, sagittal), and individual slices (axial, sagittal, coronal). This comparison identifies the most effective approach for probabilistic correspondence prediction. As shown in Fig <ref>, the full volume model outperforms others across CD, P2M, and S2S metrics. However, the proposed orthogonal slices model demonstrates competitive performance and is the second-best for both datasets. Notably, the axial slice model performs similarly to the three-slice model for the LA dataset, likely due to its effective capture of essential LA shape features such as length and appendage. Additionally, training the orthogonal slices model is 1.5x faster than the full volume model which highlights the utility of using sparse imaging for SSM applications. All models exhibit similar performance in SSM metrics (generalization, specificity, and compactness) as shown in Fig <ref>, indicating they capture significant shape variability while maintaining high fidelity to the original shapes. The compactness plots suggest an efficient representation of population variance with fewer PCA modes. Specificity and generalization metrics confirm that generates valid instances and effectively extrapolates to unseen data, regardless of input type. The full-volume model shows the best SSM metrics for the liver dataset, likely due to higher image quality and greater variation within the dataset. We experimented with the orthogonal slice dataset obtained from volumes of varying thickness levels to demonstrate the utility of using sparse images. As shown in Fig <ref>.B, the performance metrics indicate that the model performs similarly across these versions, providing consistent aleatoric estimates, as evidenced by the r-scores in Fig <ref>.C. Fig <ref> illustrates the point-wise correlation between predicted uncertainty values and P2S distance error across the test set. Higher uncertainty is expected for points further from the true shape surface. 
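As a concrete illustration of how such a point-wise comparison can be assembled, the sketch below samples latent codes from the predicted Gaussian, decodes each sample into a correspondence set, and correlates the resulting per-point spread with a per-point surface-distance error; implicit_decoder and the error array are placeholders standing in for the trained decoder and the mesh-distance utility, not interfaces from the paper's code.

import numpy as np
from scipy.stats import pearsonr

def pointwise_uncertainty(mu_z, log_sigma_z, implicit_decoder, n_samples=30):
    # Sample z ~ N(mu_z, sigma_z^2) via the reparameterization trick,
    # decode each sample into an (M, 3) correspondence set, and return
    # the mean prediction and a per-point scalar uncertainty.
    sigma_z = np.exp(log_sigma_z)
    samples = [implicit_decoder(mu_z + sigma_z * np.random.randn(*mu_z.shape))
               for _ in range(n_samples)]
    preds = np.stack(samples)                      # (n_samples, M, 3)
    return preds.mean(axis=0), preds.std(axis=0).mean(axis=-1)

# e.g., r, _ = pearsonr(per_point_uncertainty, per_point_p2s_error)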
The Pearson R correlation coefficients show that using orthogonal images does not degrade uncertainty estimation, as indicated by the similar average uncertainty heatmaps. However, individual slices reduce uncertainty calibration due to information loss. The spatial correlation between P2S error and uncertainty heatmaps highlights the value of probabilistic frameworks in assessing prediction reliability. For the LA dataset, the correlation between P2S error and aleatoric uncertainty using the axial image is comparable to that of orthogonal and full-volume images, consistent with the SSM metrics in Fig <ref>. Using the method described in BVIB-DeepSSM <cit.>, we selected outlier cases for the test set based on an outlier degree computed from images and meshes. This resulted in a test set with 40 shape outliers, 78 image outliers, and 92 randomly selected inliers. Fig <ref>.A shows that predicted uncertainty is higher for outlier test sets, particularly extreme shape outliers, as illustrated by the examples of outliers which display high variability and differ from the inliers significantly. For the liver dataset, we examined the correlation between sample-wise aleatoric estimates and sample-wise P2M distance. Fig <ref>.D highlights three samples with high P2M distance and low uncertainty, indicating confident but incorrect predictions. These errors are attributed to poor contrast and lack of clear organ definition in the image slices of these outliers, as observed in the example outlier image slices. § CONCLUSION demonstrates substantial potential by providing a straightforward approach for directly inferring probabilistic correspondences from raw images without needing pre-optimized shape models. Leveraging shape priors from various representations and integrating aleatoric uncertainty quantification methods, effectively accommodates sparse images, significantly enhancing its reliability and applicability in clinical settings. The current model relies on precise image alignment for optimal performance; future work on developing robust alignment algorithms or alignment-free methods holds promise for increasing its versatility across diverse datasets and clinical scenarios. This streamlined approach to shape model generation marks a significant step forward in personalized medicine and clinical decision support, promising substantial progress and broader applicability. § ACKNOWLEDGEMENTS This work was supported by the National Institutes of Health under grant numbers NIBIB-U24EB029011, NIAMS-R01AR076120, and NHLBI-R01HL135568. We thank the University of Utah Division of Cardiovascular Medicine for providing left atrium MRI scans and segmentations from the Atrial Fibrillation projects and the ShapeWorks team. splncs04 § APPENDIX §.§ Dataset Details * Left Atrium (LA) * 923 anonymized Late Gadolinium Enhancement (LGE) MRIs from distinct patients. * Manually segmented by cardiovascular medicine experts at the (anonymous) Cardiovascular Medicine. * The endocardial wall was used to cut off pulmonary veins. * Spatial resolution: 0.65 × 0.65 × 2.5 mm^3. * Images were cropped around the region of interest and downsampled by a factor of 0.8. * Resulting input image size: 166 × 120 × 125. * Liver * Dataset includes CT scans and segmentations of liver, kidney, spleen, and pancreas. * 1132 3D CT scans from various public datasets with segmentation verified and refined by experienced radiologists. * Used CT scans and corresponding liver segmentations for experiments. 
* CT scans have resolutions of 512 × 512 pixels with varying pixel sizes and slice thicknesses between 1.25-5 mm. * Utilized 833 samples after visual quality assessment of images and segmentations. * Images were cropped around the region of interest using segmentations and downsampled by a factor of 3.5. * Downsampled volume size: 144 × 156 × 115 with isotropic voxel spacing of 2 mm. §.§ Hyperparameters All models were trained on an NVIDIA GeForce RTX 2080 Ti GPU. §.§ Architecture * Orthogonal Encoder: The Orthogonal Encoder processes three orthogonal 2D slices (axial, sagittal, and coronal) from a 3D medical image volume. * Slice Encoders: Separate 2D convolutional backbones are used for each of the three slices: axial, sagittal, and coronal. Each backbone processes its respective slice using Conv2d layers with 5× 5 filters and the following numbers of filters: [12, 24, 48, 96, 192]. Batch normalization and ReLU activation functions are applied after each Conv2d layer, with max-pooling layers incorporated to reduce spatial dimensions. * Fully Connected Layer: The combined features are passed through a fully connected (FC) layer stack. This stack includes two linear layers: [256 × 3 → 256] and [256 →], with a Parametric ReLU (PReLU) activation function in between. * Output: * If the encoder is deterministic, the output is directly the features from the FC layer. * If the encoder is non-deterministic, the output is split into mean and log variance for Gaussian sampling, producing the required number of samples. * 3D Image encoder: The encoder architecture utilizes Conv2d layers with 5× 5 filters and the following numbers of filters: [12, 24, 48, 96, 192]. After each Conv2d layer, batch normalization and ReLU activation functions are applied. Max pooling layers are incorporated to reduce spatial dimensions. The feature maps are then flattened and passed to the fully connected layers. The fully connected (FC) layer stack consists of linear layers with different input and output feature dimensions: [193536 → 384], [384 → 96], [96 → 256]. Each linear layer is followed by a Parametric ReLU (PReLU) activation function. * 2D Orthogonal Slice Image encoder: * Image Feature Aggregator: * Surface Autoencoder: We use the DGCNN_semseg_s3dis model from the original DGCNN GitHub repository (https://github.com/antao97/dgcnn.pytorch). * IM-Net: We use the original implementation of IM-Net from the GitHub repository (https://github.com/czq142857/IM-NET-pytorch). §.§ SSM Metrics * Compactness: We quantify compactness as the number of PCA modes that are required to capture 95% of the total variation in the output training cohort correspondence points. * Specificity: We quantify specificity by randomly generating J samples from the shape space using the eigenvectors and eigenvalues that capture 95% variability of the training cohort. Specificity is computed as the average squared Euclidean distance between these generated samples and their closest training sample. S = 1/J∑_C∈C_generated min_C'∈C_train ||C - C'||^2 * Generalization: We quantify generalization by assessing the average approximation errors across a set of unseen instances. Generalization is defined as the mean approximation error between the original unseen shape instances and their reconstructions constructed using the training cohort PCA eigenvalues and eigenvectors that preserve 95% variability. G = 1/U∑_j=1^U||C_j - Ĉ_j||_2^2 for U unseen shapes.
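To make these three metrics concrete, the following is a minimal NumPy sketch of how compactness, generalization, and specificity could be computed from flattened correspondence matrices; the 95% threshold follows the definitions above, while the sampling size and other details are illustrative assumptions rather than the exact protocol used in the paper.

import numpy as np
from scipy.spatial.distance import cdist

def ssm_metrics(train, test, n_samples=1000, var_threshold=0.95):
    # train, test: (n_shapes, 3M) flattened correspondence points
    mean = train.mean(axis=0)
    _, s, vt = np.linalg.svd(train - mean, full_matrices=False)
    var = s**2 / (train.shape[0] - 1)
    cum = np.cumsum(var) / var.sum()
    k = int(np.searchsorted(cum, var_threshold) + 1)   # compactness: modes for 95% variance
    basis = vt[:k]                                     # (k, 3M)

    # generalization: reconstruction error of held-out shapes in the k-mode subspace
    recon = mean + (test - mean) @ basis.T @ basis
    generalization = np.linalg.norm(test - recon, axis=1).mean()

    # specificity: distance of randomly generated shapes to the closest training shape
    z = np.random.randn(n_samples, k) * np.sqrt(var[:k])
    sampled = mean + z @ basis
    specificity = cdist(sampled, train).min(axis=1).mean()
    return k, generalization, specificity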
http://arxiv.org/abs/2407.02177v1
20240702113243
Minsum Problem for Discrete and Weighted Set Flow on Dynamic Path Network
[ "Bubai Manna", "Bodhayan Roy", "Vorapong Suppakitpaisarn" ]
cs.DS
[ "cs.DS", "cs.DM" ]
IIT Kharagpur, Kharagpur, India The University of Tokyo, Tokyo, Japan Minsum Problem for Discrete and Weighted Set Flow on Dynamic Path Network. This research was partly conducted during Bubai Manna's and Bodhayan Roy's visit to The University of Tokyo. The visit was hosted by Prof. Reiji Suda and was supported by the JST Sakura Science Program. Vorapong Suppakitpaisarn was partially supported by KAKENHI Grant 23H04377. The authors would like to thank the reviewers for their comments, which significantly improved this paper. Bubai Manna 1 Bodhayan Roy 1 Vorapong Suppakitpaisarn 2 Received 2024; accepted 2024 ====================================================================== § ABSTRACT In this research, we examine the minsum flow problem in dynamic path networks where flows are represented as discrete and weighted sets. The minsum flow problem has been widely studied for its relevance in finding evacuation routes during emergencies such as earthquakes. However, previous approaches often assume that individuals are separable and identical, which does not adequately account for the fact that some groups of people, such as families, need to move together and that some groups may be more important than others. To address these limitations, we modify the minsum flow problem to support flows represented as discrete and weighted sets. We also propose a 2-approximation pseudo-polynomial time algorithm to solve this modified problem for path networks with uniform capacity. § INTRODUCTION Flow problems on dynamic graphs <cit.> are considered by many researchers (e.g. <cit.>) for many reasons. One of them is their relevance in finding evacuation routes during emergencies such as earthquakes or fires <cit.>. In those applications, we aim to move people in such a way that they arrive at aiding facilities as soon as possible. A common objective function for those problems is minmax, which aims to minimize the time until all persons arrive at facilities. In this work, however, we consider another common objective function called minsum, which aims to minimize the summation of the time that each individual needs for their trip. In Figure <ref>a, there are 4 people at node 1 and 6 people at node 2. These 10 people need to be transported to the aid facility at node 3. Both edges have capacity constraints: a maximum of 3 people can be moved on the edge between nodes 1 and 2 in one unit of time, and a maximum of 4 people can be moved on the edge between nodes 2 and 3 in one unit of time. It takes 1 unit of time to travel from node 1 to node 2 and 2 units of time to travel from node 2 to node 3. At time 1, we can move 3 people from node 1 to node 2 and 4 people from node 2 to node 3. This leaves 1 person at node 1, 5 people at node 2, and 4 people in the middle of the edge between nodes 2 and 3. At time 2, the remaining person at node 1 is moved to node 2, and 4 people at node 2 are moved to node 3.
This results in 4 people from node 2 arriving at the facility within 2 units of time, 4 people arriving within 3 units of time, and 2 people arriving within 4 units of time. The maximum time was 4 units of time. The summation of times was 4 × 2 + 4 × 3 + 2 × 4 = 28. The move which we discussed here minimized both the maximum time and the summation. It has been shown that both objective functions of flow problems can be solved using time-expanded networks <cit.>. However, these temporal graphs can be exponentially large in relation to the input size, making the algorithm pseudo-polynomial. For minmax problems, polynomial-time algorithms have been developed for paths <cit.> and trees <cit.>. There are also FPTAS for general graphs when the number of facilities is constant <cit.>. In contrast, minsum problems have only been shown to have polynomial-time algorithms for path graphs <cit.>. All known algorithms assume that individuals are distinct and identical, meaning that we can move any number of people over a particular edge as long as the total number does not exceed the edge's capacity. However, this may not always be possible in practice. For example, some groups of people, such as families, must be moved together, and some groups may require emergency aid and should therefore be given higher priority. These considerations must be taken into account when determining how to move people from one location to another. §.§ Our Contributions In short, we modify the minsum flow problem to support flows represented as discrete and weighted sets. We also propose a 2-approximation pseudo-polynomial time algorithm to solve this modified problem for path networks with uniform capacity. We illustrate the ideas of the modified problem in the following example. In Figure 1b, there are two groups of people at node 1, each with 2 people. The weight of the first group is 5, while the weight of the second group is 3. There are also two groups of 3 people at node 2, with weights of 5 and 3, respectively. We refer to the group with size S_ij and weight w_ij as G_ij. At time 1, we move group G_11 from node 1 to node 2 and group G_21 from node 2 to node 3. Before time 2, group G_12 is at node 1, groups G_11 and G_22 are at node 2, and group G_21 is in the middle of the edge between nodes 2 and 3. At time 2, we move group G_12 to node 2 and group G_11 to node 3. At time 3, we move group G_12 from node 2 to node 3, and at time 4, we move group G_22. As a result, group G_21 arrives at time 2, group G_11 arrives at time 3, group G_12 arrives at time 4, and group G_22 arrives at time 5. The weighted summation of arrival time is then 2w_21 + 3w_11 + 4w_12 + 5w_22 = 52. It is clear that the modified problem is harder than the original version. Indeed, we can show that it is NP-hard by a reduction to the partition problem. The formal definition of this problem with its NP-hardness proof can be found in Section 2. We discuss in Section 3 that when we have two nodes, our problem is equivalent to the weighted minsum bin packing problem <cit.>. To support the case that we have more than two nodes, we need to derive a bin-packing algorithm that can support items with different arrival times. Suppose that t_i is the arrival time of item i. The item cannot be packed in the first (t_i - 1)-th bag. We show that the algorithm is a 2-approximation. As there is PTAS proposed in <cit.> for the minsum bin packing problem, one may think that we can extend that PTAS to support items with arrival times. 
Unfortunately, by the requirement that we cannot insert particular items in some bags, we strongly believe that the extension is not straightforward. We are aiming to give that extension as our future work. In Section 4, we extend the bin packing algorithm presented in Section 3 to address our main problem. We demonstrate that the extended algorithm is a 2-approximation when all capacities are uniform, and there is only one facility. It is worth noting that several works in dynamic network flows also make this assumption of uniform capacities <cit.> and a single facility <cit.>. § PROBLEM DEFINITIONS In this section, we define our problem called minsum problem for discrete and weighted set flow on a dynamic path network (MS-DWSF). Consider a path graph with n nodes, denoted by P_n = (V = {1,…,n}, E = {{i,i+1}: 1 ≤ i ≤ n - 1 }). Each node i has m_i sets of persons to evacuate. Those sets of persons are denoted by G_i,1, …, G_i,m_i. For group G, the size of G is denoted by S(G) ∈ℤ_+ and the weight of G is denoted by w(G) ∈ℤ_+. The capacity of all edges is C ∈ℤ_+. Each edge e has distance d(e) ∈ℤ_+, which is the time that persons need to move between two terminals of the edge. Suppose that the single aiding facility is located at a ∈ V. People originally at node i < a must move in a direction that increases the node number they are at, while people originally at node i > a must move in a direction that decreases the node number they are at in any optimal solution. Let us denote the collection of groups that are at node i at time t by 𝒮_i^(t). We select from 𝒮_i^(t) which groups to be sent along the edge {i, i + 1} for i < a and along the edge {i - 1, i} for i > a. We denote the collection of groups that we choose to send by D_i^(t). The summation of group sizes in D_i^(t) must not be larger than C, i.e. ∑_G ∈ D_i^(t) S(G) ≤ C. For t = 0, we have 𝒮_i^(0) = {G_i,1, …, G_i,m_i} for all i. Let denote A_i^(t) be a collection of groups arriving at i from node i - 1 at time t and denote B_i^(t) be a collection of groups arriving at i from node i + 1 at time t. We have A_i^(t) = D_i-1^(t - d({i - 1, i})) when 1 < i ≤ a and t ≥ d({i-1,i}) and A_i^(t) = ∅ otherwise. Similarly, B_i^(t) = D_i+1^(t - d({i, i + 1})) when a ≤ i < n and t ≥ d({i,i + 1}) and B_i^(t) = ∅ otherwise. Then, 𝒮_i^(t) = 𝒮_i^(t - 1)\ D_i^(t)∪ A_i^(t)∪ B_i^(t). The arrival time of G, denoted by α(G) is the earliest time that the group is at a, i.e. min{t: G ∈𝒮_a^(t)}. In the MS-DWSF, we aim to minimize ∑_G w(G)α(G). We show that the problem is NP-hard in Appendix. § MINSUM BIN PACKING PROBLEM FOR WEIGHTED ITEMS WITH DIFFERENT READY TIMES To address the MS-DWSF problem, we first introduce a related problem called the minsum bin packing problem for weight items with different ready times (MS-BPWRT). In this section, we present a 2-approximation pseudo-polynomial time algorithm for the MS-BPWRT problem. We will then use the solution obtained from this algorithm to develop a 2-approximation pseudo-polynomial time algorithm for the MS-DWSF in the following section. §.§ Definition of MS-BPWRT The MS-BPWRT problem can be defined in the following definition: Given a collection of groups 𝒢 = {G_1, …, G_m}. Each group G_i has size S(G_i), weight w(G_i), and ready time τ(G_i). 
We find a way to pack those groups into a set of bins B_1, …, B_T ⊆𝒢 with capacity C with the following constraints: 1) ⋃_1 ≤ j ≤ m B_j = {G_1, …, G_m}, 2) B_j ∩ B_j' = ∅ for j ≠ j', 3) For all 1 ≤ j ≤ T, ∑_G ∈ B_j S(G) ≤ C, and 4) Denote t(G) = j when G ∈ B_j, we must have t(G) ≥τ(G). We aim to minimize ∑_G w(G) t(G). When w(G) = 1 for all G and we do not have the fourth constraint, the MS-BPWRT is equivalent to the minsum bin packing problem <cit.>. We use some ideas from the minsum bin packing problem to provide an algorithm and prove the approximation ratio for the MS-BPWRT. We have included the weight of group G, denoted as w(G), in the problem formulation because we recognize that different groups may have varying levels of importance. The ready time, τ(G), signifies that group G cannot be placed in any bin with an index less than τ(G). In other words, group G is not ready to be inserted until time τ(G). §.§ Approximation Algorithm for MS-BPWRT The approximation algorithm for MS-BPWRT is described in Algorithm <ref>. The collection 𝒢 contains groups that have not been placed in any bin, while the collection 𝒢_j' is a candidate set for bin B_j. If any remaining group can be considered a candidate for B_j by replacing the code in Line 3 with 𝒢_j' ←𝒢', then Algorithm <ref> becomes the next fit decreasing algorithm <cit.>, based on the ratio w(G)/S(G). It is worth noting that the minsum bin packing algorithm in <cit.> uses the next fit increasing algorithm based on s(G) (or the next fit decreasing algorithm based on 1/s(G)). The criteria for the next fit algorithm is how we apply the weights w(G) to the minsum bin packing problem. We consider the ready times at Line 3 of the algorithm. The collection 𝒢_j' is the collection of groups that have not been added to any bin of which the ready time τ(G) satisfies ⌈ (τ(G) - 1) / 2 ⌉× 2 + 1 ≤ j. We know that ⌈ (τ(G) - 1) / 2 ⌉× 2 + 1 = τ(G) when τ(G) is odd and ⌈ (τ(G) - 1) / 2 ⌉× 2 + 1 = τ(G) + 1 when τ(G) is even. Recall that 𝒢_j' is the candidate to be added to B_j. For G such that τ(G) is odd, we add G to the candidate set of B_j for any j ≥τ(G) that matches with the ready time constraint. On the other hand, for G such that τ(G) is even, we do not add G to the candidate set of B_j for j = τ(G), but add only when j ≥τ(G) + 1. Informally, we delay the addition of G by one bin here. §.§ Proof for Approximation Ratio We prove that the algorithm in the previous subsection is a two-approximation algorithm for the MS-BPWRT problem. First, we define the relaxed version of MS-BPWRT in the following definition: Suppose we have a collection of groups 𝒢 = G_1, …, G_m, where each group G_i has a size of S(G_i), a weight of w(G_i), and a ready time of τ(G_i). We are given a capacity C. For each 1 ≤ i ≤ m and 1 ≤ j ≤ T, we find x_ij∈ [0,1] such that 1) ∑_j x_ij = 1 for all i, 2) ∑_i S(G_i) x_ij≤ C for all j, and, 3) for each x_ij > 0, we must have j ≥τ(G_i). Our goal is to minimize ∑_i,j j · w(G_i) · x_ij. Informally speaking, in the MS-BPWRT-REAL problem, each group can be partially assigned to each bin. The variable x_ij represents the proportion of group G_i assigned to bin B_j. Let OPT(τ), OPT_R(τ) be the optimal value of the MS-BPWRT(τ) and MS-BPWRT-REAL(τ) problems. We have the following properties: OPT_R(τ) ≤ OPT(τ). Let B_1^*, …, B_T^* be an optimal solution of MS-BPWRT(τ), T^*(G_i) = j if G_i ∈ B_j^*, and let x_ij' = 1 if G_i ∈ B_j^* and x_ij' = 0 otherwise. 
We know that ⟨ x_ij' ⟩_i,j is a solution of MS-BPWRT-REAL(τ) because 1) ∑_j x'_ij = 1 for all i because each G_i is a member of exactly one bin by constraints 1) and 2) of MS-BPWRT(τ), 2) For all j, ∑_i S(G_i)x'_ij = ∑_G_i ∈ B^*_j S(G_i) ≤ C by the third constraint of MS-BPWRT(τ), 3) When x'_ij > 0, G_i ∈ B_j^* and, by the fourth constraint of MS-BPWRT(τ), j ≥τ(G). The objective value of ⟨ x_ij' ⟩_i,j is ∑_i,j w(G_i) (j · x_ij') = ∑_i w(G_i) ∑_j (j · x_ij') = ∑_i w(G_i) T^*(G_i) = OPT(τ). We then know that there is a solution of MS-BPWRT-REAL(τ) with objective value OPT(τ). The optimal value of MS-BPWRT-REAL(τ) must not be larger than OPT(τ), i.e. OPT_R(τ) ≤ OPT(τ). Let τ,τ' be a function such that τ(G) ≤τ'(G) for all G ∈𝒢. Then, OPT_R(τ) ≤ OPT_R(τ'). Let ⟨ x^*_ij⟩_i,j be an optimal solution of the MS-BPWRT-REAL(τ') problem. It is straightforward to show that ⟨ x^*_ij⟩_i,j satisfies the first and the second constraints of MS-BPWRT-REAL(τ). Also, because for each x_ij > 0, we have j ≥τ'(G_i) ≥τ(G_i), we know that ⟨ x^*_ij⟩_i,j also satisfies the third constraint of MS-BPWRT-REAL(τ), and is a feasible solution of MS-BPWRT-REAL(τ). The objective value of ⟨ x^*_ij⟩ is OPT_R(τ'). We then know that there is a solution of MS-BPWRT-REAL(τ) with objective value OPT_R(τ'). The optimal value of MS-BPWRT-REAL(τ) must not be larger than OPT_R(τ'), i.e. OPT_R(τ) ≤ OPT_R(τ'). Denote a solution from Algorithm <ref> by B_1', …, B_T'. Let x_ij' = 1 when G_i ∈ B'_2j - 1∪ B'_2j. It is clear that ⟨ x_ij' ⟩_i,j is not a feasible solution of MS-BPWRT-REAL(τ'). We prove a property of ⟨ x_ij' ⟩_i,j in the following proposition: For all j such that B'_2j≠∅, ∑_i S(G_i) x_ij' > C. At Line 3 of Algorithm <ref>, we define the set 𝒢_j'. We can observe that both 𝒢_2j - 1 = {G ∈𝒢': τ(G) ≤ 2j - 1} and 𝒢_2j = {G ∈𝒢': τ(G) ≤ 2j - 1} are defined in the same manner. This implies that, for any integer k, even if we increase k from 2k - 1 to 2j at Line 6, the set of groups considered remains unchanged. Let G' be the first element added to the bin B_2j'. It is a group in G'_2j\ (B'_1 ∪…∪ B'_2j - 1) which maximizes w(G)/S(G). Since the set of groups considered for bins B'_2j-1 and B'_2j are the same, G' must have already been considered for inclusion in B'_2j - 1. However, it was not added to B'_2j - 1 because doing so would result in ∑_G ∈ B'_2j - 1 S(G) + S(G') > C. Therefore, by the definition of x_ij', we have ∑_i S(G_i) · x_ij' = ∑_G ∈ B'_2j - 1 S(G) + ∑_G∈ B'_2j S(G) ≥∑_G ∈ B'_2j - 1 S(G) + S(G') > C. This completes the proof. Next, we define a problem called weight maximization problem (WM) as follows: Suppose we have a collection of groups 𝒢 = G_1, …, G_m, where each group G_i has a size of S(G_i), a weight of w(G_i), and a ready time of τ(G_i). We are given a capacity C. For each 1 ≤ i ≤ m and 1 ≤ j ≤ T', we find x_ij∈ [0,1] such that: 1) ∑_j x_ij≤ 1 for all i, 2) ∑_i S(G_i) x_ij≤ C_j for all j, and, 3) for each x_ij > 0, we must have j ≥τ(G_i). We aim to maximize ∑_i,j w(G_i) · x_ij. Let C'_j = ∑_i S(G_i)x_ij', and let τ' be a function such that, for all G_i ∈𝒢, τ'(G_i) = ⌈τ(G_i) / 2 ⌉. We then can show the following property: For all 1 ≤ T' ≤ T, ⟨ x_ij' ⟩_j ≤ T',i is an optimal solution of WM(τ',C'_1, …, C'_T'). We prove this proposition by induction on T'. Let us examine the scenario where T' = 1. Remember from Algorithm <ref> that the bins B_1 and B_2 contain groups from the set {G ∈𝒢: τ(G) = 1} that maximize the ratio w(G)/S(G). 
Using the greedy algorithm, any collection of groups 𝒟⊆{G ∈𝒢: τ'(G) = 1} with ∑_G ∈𝒟 S(G) ≤ C'_1 must satisfy ∑_G ∈𝒟 w(G) ≤∑_G ∈ B_1 ∪ B_2 w(G) = ∑_i w(G_i) · x_i1'. Therefore, we can deduce that the sequence ⟨ x_ij' ⟩_j=1,i represents an optimal solution for WM(τ',C'_1). Next, let us assume the proposition holds true for all T' ≤𝖳. We will assume, aiming for a contradiction, that the sequence ⟨ x_ij' ⟩_j ≤𝖳, i does not represent an optimal solution for WM(τ', C'_1, …, C'_𝖳). Let us say an optimal solution for WM(τ', C'_1, …, C'_𝖳) is represented by the sequence ⟨ x_ij^* ⟩_j ≤𝖳, i. Based on our assumption that the sequence ⟨ x'_ij⟩_j ≤𝖳 - 1, i is an optimal solution for WM(τ', C'_1, …, C'_𝖳 - 1), it follows that ∑_j ≤𝖳 - 1, i w(G_i) · x'_ij≥∑_j ≤𝖳 - 1, i w(G_i) · x_ij^*. To satisfy the condition ∑_j ≤𝖳, i w(G_i) · x'_ij≥∑_j ≤𝖳, i w(G_i) · x_ij^*, it is necessary to have ∑_i w(G_i) x_i𝖳' < ∑_i w(G_i) x_i𝖳^*. Recall from the construction that x_ij' ∈{0,1} for all i. To have ∑_i w(G_i) x_i𝖳' < ∑_i w(G_i) x_i𝖳^*, there must be i^* such that x_i^*𝖳' = 0, x_i^*𝖳^* > 0, and w(G_i^*)/S(G_i^*) > w(G_i)/S(G_i) for all i such that x_i𝖳' = 1. Consider the case that x'_i^*𝖳' = 1 for some 𝖳' < 𝖳. Then, in the solution ⟨ x_ij^* ⟩_j ≤𝖳,i, we move G_i^* to the bin B'_2𝖳' - 1∪ B'_2𝖳'. Let s = min{S(G_i^*)x^*_i^*𝖳, C_𝖳'' - ∑_i S(G_i) x^*_i𝖳'}. If s > 0, we have more spaces to put the group G_i^*. We then decrease the value of x^*_i^*,𝖳 by s/S(G_i^*) and increase the value of x^*_i^*,𝖳' by the same value. If C'_𝖳' - ∑_i S(G_i) x^*_i𝖳' = 0, there is no space left in the bin B_𝖳'. We then have to swap G_i^* with some other groups. There is an item i' such that x^*_i'𝖳' > 0 while x'_i'𝖳' = 0. Let s = min{S(G_i')x^*_i'𝖳', S(G_i^*)x^*_i^*𝖳}. We can update the value of ⟨ x^*_ij⟩_i,j in the following ways: 1) decrease the value of x^*_i^*𝖳 by s/S(G_i^*), 2) decrease the value of x^*_i'𝖳' by s/S(G_i'), 3) increase the value of x^*_i^*𝖳' by s/S(G_i^*), and 4) increase the value of x^*_i'𝖳 by s/S(G_i'). Informally, we exchange s units of G_i^* in bin 𝖳 with an equal mass of G_i' in bin 𝖳'. This updated result continues to be a feasible solution for WM(τ',C'_1, …, C'_T), and the objective value remains unchanged. We can iterate the update in the previous paragraph until there is no i^* such that x'_i^*𝖳' = 1 for some 𝖳'. It is sufficient to only consider the case when, for all such i^*, we have not included the group G_i^* to bins B_1, …, B_2𝖳 - 2. However, by the assumption that w(G_i^*)/S(G_i^*) > w(G_i)/S(G_i) for all i such that x_i𝖳' = 1, the greedy algorithm must have already included the group i^* to the bin B_2𝖳 - 1. This gives x'_i^*𝖳 = 1, which contradicts our assumption that x'_i^*𝖳 = 0. From the next proposition, let us consider the problem WM(τ', C_1, …, C_T') where C_1 = C_2 = ⋯ = C_T' = C. We denote the optimal solution of the problem by OPT_WM(T'). For all 1 ≤ T' ≤ T, OPT_WM(T') ≤∑_j ≤ T', i w(G_i) · x'_ij. Let T' be such that B'_2T'≠∅. Using the definition of C_j and Proposition <ref>, we have C_j = ∑_i S(G_i) x'_ij > C. An optimal solution for WM(τ', C, …, C) is a feasible solution for WM(τ', C_1', …, C_T''). Therefore, the objective value of an optimal solution for WM(τ', C_1', …, C_T''), which is ∑_j ≤ T', i w(G_i) · x'_ij according to Proposition <ref>, cannot be less than OPT_WM(T'). When B'_2T' = ∅, it implies that all items have been allocated to the bins B'_1, …, B'_2T'-1. 
In this case, the value of ∑_j ≤ T',i w(G_i) x_ij' is equal to ∑_i w(G_i), which is greater than or equal to the sum of the weights of any feasible solution. Hence, we have ∑_j ≤ T',i w(G_i) x_ij' = OPT_WM(T'). The next lemma gives a relationship between the sequence ⟨ x'_ij⟩_i,j and the MS-BPWRT-REAL problem. OPT_R(τ') ≥∑_i,j j · w(G_i) · x'_ij Let 𝒳 be a collection of all feasible solutions of the MS-BPWRT-REAL(τ'), and let W = ∑_i w(G_i). By Proposition <ref>, we have that OPT_R(τ') = min_⟨ x_ij⟩_i,j∈𝒳∑_j ∑_i j · w(G_i) · x_ij = min_⟨ x_ij⟩_i,j∈𝒳∑_T'∑_j ≥ j', i w(G_i) · x_ij≥∑_T'min_⟨ x_ij⟩_i,j∈𝒳[ W - ∑_j < T',i w(G_i) · x_ij] = ∑_T'[W - max_⟨ x_ij⟩_i,j∈𝒳∑_j < T',i w(G_i) · x_ij] ≥∑_T'[W - ∑_j < T',i w(G_i) · x'_ij] = ∑_T'∑_j ≥ j',i w(G_i) · x_ij' = ∑_i,j j · w(G_i) · x'_ij. We are now ready to prove the main theorem of this section. The bin B'_1, …, B'_T obtained from Algorithm <ref> is a 2-approximation solution for MS-BPWRT. Let SOL be an objective value of B'_1, …, B'_T. We have that SOL = ∑_j∑_G_i ∈ B'_j j · w(G_i) ≤∑_j∑_G_i ∈ B'_2j - 1∪ B'_2j 2j · w(G_i) ≤ 2 ·∑_i,j j · w(G_i) · x'_ij≤ 2 · OPT_R(τ') ≤ 2 · OPT_R(τ) ≤ 2 · OPT(τ). The inequality at Line 3 of the chain is obtained from the definition of ⟨ x_ij' ⟩_i,j. The inequality at Line 4 is obtained from Lemma <ref>, the inequality at Line 5 is obtained from Proposition <ref>, and the inequality at Line 6 is obtained from Proposition <ref>. § APPROXIMATION ALGORITHM FOR MS-DWSF In this section, we will develop an approximation algorithm for our main problem, MS-DWSF, utilizing the findings presented in the previous section. §.§ Algorithm Our two-approximation algorithm for the MS-DWSF is shown in Algorithm <ref>. The algorithm addresses congestion on the busiest edge. Specifically, when the destination node is denoted as a, the most congested edges are {a-1,a} and {a,a+1}. To tackle this issue, we can examine separate strategies for each of these edges. It is worth noting that the concepts behind both strategies are the same, so we will only elaborate on the approach for edge {a-1,a} here. To transmit all groups in the set {G_i,j: i < a} through the edge {a - 1, a}, we rely on the results of Algorithm 1 (denoted by B'_1, …, B'_T) to determine the appropriate timing for each item. At time T', items within bin B_T' are dispatched along the {a - 1, a} edge. The MS-DWSF constraint requires that all groups G_i,j∈ B_T' be present at node a-1 during transmission. To satisfy this condition, if i ≤ a-2, the group is sent from node a-2 to a-1 at the time T' - d({a-1,a-2}). Similarly, if i ≤ v, the group is sent from node v to a-1 at time T' - d(v,a-1), following the same idea. The collection of groups transmitted from node i at time t is obtained by taking the intersection of B'_t + d(i,a-1) with {G_i',j: i' ≤ i}, as assigned in Line 3 of the algorithm. Since group G_i,j is initially located at node i, it cannot reach node a-1 before time d(i,a-1). Thus, it is not possible to assign group G_i,j to bin B_j for j < d(i,a-1). This is why we set τ(G_i,j) = d(i,a-1) in Line 2 of the algorithm. §.§ Feasibility and Approximation Ratio In this subsection, we show that Algorithm <ref> always gives a feasible solution. Then, we show that it is a two-approximation ratio for MS-DWSF. ⟨ D_i^(t)⟩_i,t in Algorithm <ref> is a feasible solution to MS-DWSF. To show that the solution of Algorithm <ref> is feasible, we need to show that, for all i and t, ∑_G ∈ D_i^(t) S(G) ≤ C and D_i^(t)⊆ S_i^(t). 
The first inequality can be shown by the fact that D_i^(t)⊆ B'_t + d(i,a-1) and ∑_G ∈ B'_t + d(i,a-1) S(G) ≤ C by the constraint of MS-BPWRT. We will now demonstrate that D_i^(t)⊆ S_i^(t). Suppose we have a group G_i',j∈ D_i^(t). Since G_i',j∈ D_i^(t), we have G_i',j∈ B'_t + d(i, a - 1). By the MS-BPWRT constraint, we know that G_i',j cannot be assigned to any bin B_T' for T' < t + d(i, a - 1). As a result, G_i',j is not in D_i^(t') for any t' < t. Hence, for i' = i, we conclude that G_i',j∈ S_i^(t). For i' < i, we have that G_i',j∈ B'_t + d(i,a-1) = B'_t - d({i - 1, i}) + d(i - 1, a - 1). For t' = t - d({i - 1, i}), we have G_i',j∈ B'_t' + d({i - 1, a - 1}) and G_i',j∈ D_i - 1^(t'). By the problem definition of MS-DWSF, we know that G_i',j∈ S_i^(t) when G_i',j∈ D_i - 1^(t - d({i - 1, i}). The next theorem will show that Algorithm <ref> is a two-approximation algorithm for MS-DWSF. Let α'(G) be the time that G arrives at the node a in Algorithm <ref>, and let OPT_D be an optimal solution of the MS-DWSF problem. We have that ∑_G w(G) α'(G) ≤ 2 · OPT_D. We can construct a feasible solution of MS-BPWRT from an optimal solution of MS-DWSF by setting B_T' to D_a - 1^(T') for all T'. Let W_D = d({a-1,a}) ∑_G w(G). As a group G_i,j∈ D_a - 1^(T') arrives at the destination node a at time T' + d({a-1,a}), we have that ∑_T'∑_G ∈ B_T' T' · w(G) = ∑_T'∑_G ∈ B_T' (α(G) - d({a - 1, a})) · w(G) = ∑_G α(G) w(G) - W_D = OPT_D - W_D. If OPT_B is an optimal value of MS-BPWRT, we have that OPT_B ≤ OPT_D - W_D. Algorithm <ref> gives a solution of MS-BPWRT of which the objective function, denoted by SOL_B, is not larger than 2 · OPT_B. From that solution, we can construct a solution of MS-DWSF using Line 3 of Algorithm <ref>. The objective value of the MS-DWSF solution, denoted by SOL_D, is SOL_B + W_D. We then obtain that SOL_D = SOL_B + W_D ≤ 2SOL_B + W_D ≤ 2 OPT_D. § CONCLUSION This paper presents an extension of the minsum bin packing problem, which considers items with varying ready times and weights. We propose a 2-approximation algorithm for this new problem and apply it to develop an evacuation method for non-separable groups of individuals. At present, our algorithm is limited to path graphs with a single destination. However, we are actively working on expanding its capabilities to handle multiple destinations and non-path network structures. splncs04 § APPENDIX §.§ NP-Hardness of MS-DWSF We show that MS-DWSF is NP-hard by a reduction to the partition problem in the following theorem. MS-DWSF is NP-hard even when the input path graph has two nodes. Recall that, in the partition problem <cit.>, we have 𝗆 items, denoted by {1, …, 𝗆}. The size of items i ∈{1, …, 𝗆} is 𝗌(i) ∈ℤ_+. Suppose that ∑_i 𝗌(i) = 2𝖢. We aim to answer if there is 𝖲⊆{1, …, 𝗆} such that ∑_i ∈𝖲𝗌(i) = 𝖢. It is known that the partition problem is NP-hard. Now, let us consider an instance of the MS-DWSF such that there are two nodes {1,2} on the path graph, and the facility is located at node 2. The number of groups at node 1 (denoted by m_1) is 𝗆. We also have S(G_1,i) = w(G_1,i) = 𝗌(i) for all 1 ≤ i ≤ m_1, C = 𝖢, and d({1,2}) = 1. Since ∑_i S(G_1,i) = 2𝖢 = 2C, if there is 𝖲 such that ∑_i∈ S S(G_1,i) = 𝖢 = C, we can send all the groups in two units of time by setting D_1^(1) = {G_1,i : i ∈𝖲} and D_1^(2) = {G_i,1, …, G_i,m_i}\ D_1^(1). It is clear that those D_1^(1), D_1^(2) are the optimal solution as any other sets would give larger objective values. 
If there exists 𝖲 such that ∑_i∈ S S(G_1,i) = 𝖢 = C, the optimal value of MS-DWSF would be 3C. If there is no such S, we cannot send all the groups in two unit times. The optimal value must be larger than 3C. Hence, if we can solve the MS-DWSF problem, we can give an answer to the partition problem.
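To complement the verbal description of Algorithm 1 in Section 3.2, here is a minimal sketch of the greedy packing step: a group with ready time τ(G) only becomes a candidate from bin ⌈(τ(G)-1)/2⌉·2+1 onward, and each bin repeatedly takes the highest w(G)/S(G) candidate until the next one no longer fits. This is a reconstruction from the prose, assuming every group fits into an empty bin; it is not the authors' implementation.

from math import ceil

def greedy_ms_bpwrt(groups, capacity):
    # groups: list of (size, weight, ready_time); returns a list of bins,
    # each a list of group indices, following the delayed next-fit-decreasing rule
    remaining = set(range(len(groups)))
    bins = []
    j = 0
    while remaining:
        j += 1
        load, current = 0, []
        while True:
            cand = [i for i in remaining
                    if ceil((groups[i][2] - 1) / 2) * 2 + 1 <= j]
            if not cand:
                break
            best = max(cand, key=lambda i: groups[i][1] / groups[i][0])
            if load + groups[best][0] > capacity:
                break                      # next fit: close this bin
            current.append(best)
            load += groups[best][0]
            remaining.discard(best)
        bins.append(current)
    return bins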
http://arxiv.org/abs/2407.03138v1
20240703141841
Superselection rules and bosonic quantum computational resources
[ "Eloi Descamps", "Nicolas Fabre", "Astghik Saharyan", "Arne Keller", "Pérola Milman" ]
quant-ph
[ "quant-ph" ]
http://arxiv.org/abs/2407.02135v1
20240702102835
The role of the effective range in strongly-interacting few-body systems
[ "Lucas Madeira" ]
physics.atom-ph
[ "physics.atom-ph", "cond-mat.quant-gas", "nucl-th", "physics.atm-clus" ]
] [1]Lucas Madeiramadeira@ifsc.usp.br *[1]Instituto de Física de São Carlos, Universidade de São Paulo, Av. Trabalhador Sancarlense, 400, São Carlos, CP 369, 13560-970, São Paulo, Brazil Strongly interacting systems appear in several areas of physics and are characterized by attractive interactions that can almost, or just barely, loosely bind two particles. Although this definition is made at the two-body level, this gives rise to fascinating effects in larger systems, including the so-called Efimov physics. In this context, the zero-range theory aims to describe low-energy properties based only on the scattering length. However, for a broad range of physical applications, the finite range of the interactions plays an important role. In this work, I discuss some aspects of finite-range effects in strongly interacting systems. I present the zero-range and shapeless universalities in two-body systems with applications in atomic and nuclear physics. I derived an analytical expression for the s-wave bound-state spectrum of the modified Pöschl-Teller potential for two particles in three dimensions, which is compared with the approximations to illustrate their usefulness. Concerning three identical bosons, I presented a trimer energy scaling function that explicitly includes the effective range. The implications for larger systems are briefly discussed. [ * July 8, 2024 ================ § INTRODUCTION In quantum systems near unitarity, i.e. diverging two-body scattering length, the particles are distributed in spatial scales larger than the interaction range, making the specific interparticle potential less crucial for reproducing the ground-state spectrum <cit.>. This understanding is key to explaining phenomena like the Thomas collapse <cit.> and the Efimov effect <cit.>, where the former involves the collapse of the three-body ground state as the interaction range diminishes, and the latter indicates an infinite number of three-body bound states at the unitary limit. Both phenomena are linked through a scale transformation <cit.>. Bound and resonant states appear as we approach the unitary limit, showing independence from the specifics of the two-body potential <cit.>. This phenomenon, observed by Phillips <cit.> in his study of the correlation between the triton binding energy and the nucleon-deuteron scattering length, was further explained by Efimov and Tkachenko through the zero-range theory <cit.>. The study of few-nucleon correlations, which elucidated phenomena like the Thomas collapse and the Efimov effect, is heavily in debt of the pioneering mathematical work on three-body problems by Skorniakov and Ter-Martirosian <cit.>, Danilov <cit.>, and Faddeev <cit.>. Initially linked to renormalization groups <cit.>, the Efimov effect is characterized by a unique scaling symmetry <cit.> and related to renormalization group limit cycles <cit.>. Its significance across both nuclear and atomic few-body systems led the community to expand this universality to more complex systems through experimental and theoretical studies <cit.> since it highlights the universal behavior in three-body systems with infinite scattering length. The exploration of Efimov states in physical systems, initially theoretical, expanded into empirical research on configurations of a few nucleons and atoms <cit.>. 
Despite challenges in nuclear physics due to nucleon-nucleon interaction properties, identifying exotic nuclear systems with two-neutron halos <cit.> has provided a promising avenue for investigating potential Efimov states <cit.>. The first prediction of Efimov states in few-atomic systems, specifically in helium gases at low temperatures, was made by Lim et al. in 1977 <cit.>. This was followed by further substantiation regarding an excited Efimov state in the three-helium atomic system by Cornelius in 1986 <cit.>. In 2015, Kunitski et al. experimentally confirmed the excited Efimov state in the ^4He trimer <cit.>. Discussions on the ultracold collision properties of ^4He trimers have been ongoing <cit.>, with recent studies exploring collisions involving a ^4He dimer and a third atomic particle, including ^4He, ^6,7Li, and ^23Na, in the context of Efimov physics <cit.>. The 1995 experimental achievement of Bose-Einstein condensation in ultra-dilute atom clouds <cit.> marked a significant advancement in atomic physics, further enhanced by the ability to manipulate atom-atom interactions through Feshbach resonances <cit.>. This paved the way for the experimental discovery of Efimov states in atomic systems, with the first evidence found in ultracold cesium atoms <cit.>. The ongoing exploration of Efimov states is a focus of both theoretical <cit.> and experimental research <cit.> in the field. The Efimov effect extends to systems with more than three particles. Notably, Tjon discovered a correlation, known as the Tjon line <cit.>, between the binding energies of tetramers and trimers in ^4He, which connects back to Efimov physics <cit.>. Additionally, Coester et al. explored how variations in nuclear-matter binding energy relate to two-body potentials with equivalent phase shifts <cit.>. The above discussion is focused on the scattering length, but for a broad range of physical systems, the range of the interactions cannot be neglected. In this work, I aim to discuss some aspects of finite-range corrections to properties of strongly interacting few-body systems. The main goal driving the different approaches is to extend the universality region by taking into account the interaction range, the same reasoning behind going from a zero-range theory to a finite-range one in two-body systems. This work is organized as follows. Section <ref> deals with two-body systems. In Sec. <ref>, the effective range expansion is introduced to motivate the concepts of zero-range and shapeless universalities. Section <ref> discusses their applicability to physical systems. In Sec. <ref>, a microscopic two-body potential is used to illustrate both approximations, and it is compared to an analytical expression for the bound-state spectrum of the modified Pöschl-Teller potential for two particles in three dimensions, derived in Appendix <ref>. Section <ref> deals with three identical bosons, focusing on the formalism introduced in Ref. <cit.>. Finally, the conclusions are presented in Sec. <ref>, briefly discussing implications for larger systems. § THE TWO-BODY SECTOR §.§ Zero-range and shapeless universalities Low-energy scattering theory allows a universal description of two-body scattering for local finite-ranged spherically symmetric potentials <cit.>. A seminal work by Hans Bethe <cit.> related the s-wave scattering length a, the effective range r_0, and the s-wave phase shift δ_0(k) through k cotδ_0(k) = -1/a+r_0 k^2/2 + 𝒪(k^4).
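As a small numerical illustration of this expansion, the scattering length and effective range can be recovered from low-energy phase shifts by a straight-line fit of k cotδ_0(k) against k^2; the values below are of the order of the proton-neutron triplet channel and serve only as placeholders.

import numpy as np

a_true, r0_true = 5.4, 1.75                    # fm, illustrative only
k = np.linspace(0.01, 0.10, 20)                # fm^-1, low-momentum grid
kcot = -1.0 / a_true + 0.5 * r0_true * k**2    # synthetic k*cot(delta_0)

slope, intercept = np.polyfit(k**2, kcot, 1)   # linear in k^2
a_fit, r0_fit = -1.0 / intercept, 2.0 * slope  # recovers ~5.4 fm and ~1.75 fm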
The zero-range theory corresponds to the case where the range of the potential is much smaller than the other typical length scales of the system, and thus the 𝒪(k^2) term in Eq. (<ref>) can be neglected and we have a description in terms of only the scattering length. In situations where the range of the potential is small but non-negligible, we can include higher-order contributions by considering the effective range r_0. Equation (<ref>) is often called the shapeless or shape-independent approximation because higher-order terms depend on the shape of the two-body potential. The usefulness of this equation is that two different microscopic potentials, which can be of entirely distinct functional forms, yield the same low-energy phase shifts, provided that both have the same scattering length and effective range. In systems without a three-body scale, such as two-component Fermi gases, Eq. (<ref>) has facilitated comparisons of results obtained with potentials of diverse shapes: square-well <cit.>, modified Pöschl-Teller (mPT) <cit.>, and the s-wave component of nuclear potentials <cit.>. In Fig. <ref>, we show different two-body potentials used in Ref. <cit.> to investigate the universality of cold fermionic gases and low-density neutron matter. For the present discussion, we focus on the two potentials employed to model the neutron-neutron interactions: the s-wave component of AV18 <cit.> and the modified Pöschl-Teller potential tuned to the same scattering length and effective range. These two interactions differ considerably in shape: the former has a strong short-range repulsion and a weakly attractive tail, while the latter is purely attractive. However, since they reproduce the same scattering length and effective range, the low-energy properties investigated in Ref. <cit.> using both potentials were in agreement, a consequence of Eq. (<ref>). §.§ Loosely-bound dimers in physical systems The two-body s-wave scattering amplitude is given by <cit.> f(k)=1/(k cotδ_0-ik). Its pole is related to the bound or virtual dimer energy E_B=-ħ^2/2 m_r a_B^2, where m_r is the reduced mass of the system and the binding length a_B is related to the scattering length and effective range through <cit.> r_0/a_B=r_0/a+1/2r_0^2/a_B^2. We kept only the first two terms of the k cotδ_0(k) effective range expansion, Eq. (<ref>), in this expression. Since Eq. (<ref>) reduces to E_ zr=-ħ^2/2 m_r a^2 in the zero-range limit (negligible effective range), we can compare both approximations to physical systems to better understand the impact of the effective range in their low-energy properties. In Table <ref>, I summarized the scattering length and the binding length of two-body systems belonging to atomic and nuclear physics <cit.>. The closer these two quantities are, the less important is the range of the potential. Figure <ref> is a graphical representation of Table <ref> to illustrate how the selected physical systems relate to the zero-range and shapeless universality regimes. In panel (a), only the scattering length is taken into account, Eq. (<ref>); hence the universal regime is when a=a_B. We can see that the ^4He dimer, two neutrons, and the unbound state of a proton and a neutron (the 0^+ channel) are close to this limit. However, the other systems are far from this regime, and the zero-range theory yields a crude description of their properties. In panel (b), I plot Eq. (<ref>), which takes into account the effective range.
We can see that all the considered physical systems are close to the curve, indicating that a description including the effective range for these systems is adequate. §.§ Illustration with a microscopic potential When modeling the physical systems introduced in Sec. <ref> with a microscopic potential, an explicit functional form must be chosen. Here, I present the zero-range and finite-range approximations of the dimer energy considering the modified Pöschl-Teller potential, which has been successfully used to describe interactions in cold atom systems <cit.>. It can be written as V_ mPT(r)=- ħ^2 μ_ PT^2/m_rλ_ PT(λ_ PT-1)/cosh^2(μ_ PT r). The potential is illustrated in Fig. <ref> for two different sets of the parameters λ_ PT and μ_ PT, which are tuned to reproduce the desired scattering length and effective range. This potential is a common choice since it is smooth, and there is an analytical expression that relates the parameters λ_ PT, μ_ PT, and the scattering length <cit.>. Moreover, in this work, I derived an analytical expression for the s-wave bound-state energies of this potential in three dimensions, E_ PT=-ħ^2μ_ PT^2/2m_r(λ_ PT-2-2n)^2, (n=0,1,2,... and n⩽λ_ PT/2-1), where the derivation is presented in Appendix <ref>. In Fig. <ref>, I compare the analytical result for the bound-state energies of the mPT dimer, obtained by choosing n=0 and λ_PT>2 in Eq. (<ref>), with the low-energy approximations. If the effective range is small compared to the scattering length (r_0/a≪ 1), then both approximations are in excellent agreement with the analytical solution. However, as the ratio r_0/a increases, the zero-range approximation deviates more from the result than the finite-range one. Remarkably, the shapeless approximation yields results that differ by only a few per cent for r_0/a ≲ 0.1. § BOSONIC TRIMERS I discussed only two-body systems in Sec. <ref>. Although it is more challenging to consider range corrections systematically for N>2 systems, much progress has been made. Initially, linear range corrections were introduced to elucidate the Phillips line <cit.>, sparking numerous studies on range corrections across various systems <cit.>. These corrections have significant implications, especially with the ongoing experimental explorations in cold-atom physics <cit.>, highlighting the need for accurate expansion parameters near the unitary regime <cit.>. An approach that has yielded very interesting and promising results is a Gaussian parametrization of the universal region <cit.>. In this section, I will focus on an approach to describe Efimov trimers through a universal energy scaling function that explicitly considers the effective range. Reference <cit.> aimed to develop range corrections for a trimer of identical bosons near unitarity, using a universal scaling function to relate the trimer energies with different scattering lengths and effective ranges. This novel framework allows for systematic extensions to larger systems. A three-body scale is needed to avoid the Thomas collapse in a three-boson system with a s-wave zero-range force, as the two-body scattering length alone does not suffice for determining the low-energy properties of the trimer. 
By incorporating corrections from the finite range through the effective range expansion and selecting a reference three-body energy at unitarity E_3(1/a=0,r_0,ν) (where ν is a three-body scale), we can combine the scattering length and effective range to create two dimensionless quantities: x = ħ/a√(-m E_3(0,r_0,ν)), y = r_0√(-m E_3(0,r_0,ν))/ħ . These definitions were constructed such that x=0 yields the unitary limit and y=0 the zero-range limit. The energy scaling function F(x,y) is defined as F(x,y)=E_3(1/a,r_0,ν)/E_3(0,r_0,ν). Extensive studies have been conducted on the zero-range limit of Eq. (<ref>), presented in a different version that retains the same information <cit.>. The calculation of binding energies with exceptional accuracy has been reported in several references <cit.>. We aim to expand upon the zero-range limit by calculating trimer energies that account for finite effective ranges. While formulating an analytic expression for the scaling function in Eq. (<ref>) poses difficulties, certain characteristics of it have been identified in the literature <cit.>. To obtain the scaling function, we utilized the solutions of the Skorniakov and Ter-Martirosian (STM) equation, incorporating the first-order effective range corrections as outlined in Ref. <cit.>. The proposed expression of the scaling function is given by: F(x,y)=1+c_1 x+c_2 xy^σ+c_3 x^2+c_4 x^2y+c_5 x^2y^σ, representing a series expansion in terms of x and y. The coefficients c_i and the exponent σ were obtained by fitting the STM data to this equation, with their specific values provided in Ref. <cit.>. For comparison, the scaling function and the STM findings are illustrated together in Fig. <ref>. After determining the energy scaling function from the STM equation with effective range corrections, the next task was to compute it with microscopic two- and three-body interactions, which was done with quantum Monte Carlo (QMC) methods, as described in Ref. <cit.>. The results indicated model dependence for large values of y, but they agreed with the scaling function for small values of y. The conditions that yield the universal behavior described by the scaling function were investigated, and the conclusion was that universal behavior was observed only if the size of the trimer was much larger than the range of the microscopic force, in agreement with what is expected from Efimov trimers. It is important to note that the results using QMC techniques pertain exclusively to the ground state properties of the trimers. This offers a different strategy from the traditional method of investigating Efimov physics by examining excited states. Although this work was restricted to the N=3 system, it would be interesting to construct analogous scaling functions for N-boson systems. An intriguing possibility is that considering the finite range effects of the interactions in these systems can restore what has been dismissed as nonuniversal behavior. § CONCLUSION In this work, I motivated and presented a few examples of the investigation of finite-range effects in strong-interacting few-body systems. This included the discussion of concepts such as the zero-range and shapeless universalities in two-body systems. These are not just theoretical constructs, as they are relevant to studying a wide range of physical systems in atomic and nuclear physics. The case of three identical bosons was also discussed, which leads to the remarkable Efimov effect. 
Although many works considered the effective range in this setting, I followed Ref. <cit.>, where the developed formalism allows for systematically investigating finite-range effects in Efimov trimers. Although many of the references provided in Sec. <ref> consider, in addition to N=3, small clusters (N≲ 10), there are not many studies in the literature where many-body strongly-interacting bosonic systems are investigated with low-energy universality or Efimov physics in mind. Recently, a quantum Monte Carlo study <cit.> obtained the ground-state binding energies at unitarity for bosonic clusters with sizes much larger than the interaction range for up to N=60 and bulk properties. This opens up the possibility of studying finite range effects, which stem from few-body interactions, in many-body systems. Besides being able to describe complex systems with just a few parameters, the importance of the low-energy universality is to connect fields that span several scales, from atomic to particle physics. Recent progress in modeling strongly-interacting physical systems has shown that there are universal aspects shared by all systems close to unitarity. Still, an accurate quantitative description of a particular system has to include model-dependent features. One of the most critical roles of finite-range contributions is to increase the scope of the universal behavior so that the particularities of a specific physical system are minor corrections if compared to the strongly interacting universality. I hope this work motivates studies with this goal in mind. Acknowledgements I thank the participants of the conference “Critical stability of few-body quantum systems 2023” for the fruitful discussions that inspired much of this manuscript. This work was supported by the São Paulo Research Foundation (FAPESP) under grant 2023/04451-9. § ANALYTICAL BOUND-STATE SPECTRUM OF THE MODIFIED PÖSCHL-TELLER POTENTIAL The modified Pöschl-Teller potential, Eq. (<ref>), is one of the rare cases in quantum mechanics where we can obtain analytical solutions. It can be derived from supersymmetric quantum mechanics as the supersymmetric partner of the free particle potential <cit.>. For our purposes, we are interested in seeing this potential as a smeared-out delta function, which is more convenient to implement in QMC schemes and other numerical approaches. Almost all instances of analytical solutions involving the mPT deal with a single particle in one dimension, where -∞<x<+∞ <cit.>. The appropriate boundary conditions, in this case, require that the wave function vanishes at x=±∞. In this work, I employed the mPT potential as a two-body interaction in three dimensions. The differential equations of both cases are essentially the same if we employ center-of-mass coordinates in the 3D case and solve for the s-wave (ℓ=0) reduced radial wave function u(r)=rR(r). However, the boundary conditions differ; we must have u(0)=u(r→∞)=0. I follow Refs. <cit.> whenever possible. We want to solve the equation: u”+[μ^2λ(λ-1)/cosh^2(μ r)+k^2]u=0, where k^2=2m_rE/ħ^2, in the domain 0⩽ r<∞, with the boundary conditions u(0)=u(r→∞)=0, μ>0, and λ>1. First, I perform the substitution y=cosh^2(μ r), which yields y(1-y)u”+(1/2-y)u'-[k^2/4μ^2+λ(λ-1)/4y]u=0, with 1⩽ y <∞. Next, I perform the transformation u(y)=y^λ/2v(y), y(1-y)v”+[(λ+1/2)-(λ+1)y]v'-1/4(λ^2+k^2/μ^2)v=0. I define: a = 1/2(λ+ik/μ), b = 1/2(λ-ik/μ), c = λ+1/2, where I used the usual notation, and a should not be confused with the scattering length. 
The differential equation can be written as y(1-y)v”+[c-(a+b+1)y]v'-abv=0, which is the hypergeometric differential equation (see Eq. (15.5.1) of Ref. <cit.>). If none of the numbers c, c-a-b, or a-b is an integer, then two linearly independent solutions in the neighborhood of y=1 exist (see Eqs. (15.5.5) and (15.5.6) of Ref. <cit.>). The general solution is given by v(y)=A F(a,b;1/2;1-y)+B (1-y)^1/2F(a+1/2,b+1/2;3/2,1-y), where F is the hypergeometric function, sometimes denoted by _2F_1, and A and B are constants to be determined. Expressing the solution [Eq. (<ref>)] in terms of the reduced radial wave function yields u(r)=A cosh^λ(μr) F(a,b;1/2;-sinh^2(μr))+ B cosh^λ(μr)(-sinh^2(μr))^1/2 F(a+1/2,b+1/2;3/2,-sinh^2(μr)). The u(0)=0 boundary condition implies that A=0. I choose B=-i so that the solution is u(r)=cosh^λ(μ r)sinh(μ r) F(a+1/2,b+1/2;3/2,-sinh^2(μ r)). Applying a linear transformation, see Eq. (15.3.7) of Ref. <cit.>, I can write: u(r)=cosh^λ(μr)sinh(μr)× [ Γ(3/2)Γ(b-a)/Γ(b+1/2)Γ(1-a)(sinh(μr))^-2a-1F(a+1/2,a;1-b+a;-1/sinh^2(μr)). . +Γ(3/2)Γ(a-b)/Γ(a+1/2)Γ(1-b)(sinh(μr))^-2b-1F(b+1/2,b;1-a+b;-1/sinh^2(μr)) ]. Since we are interested in bound states, it is convenient to take k=iκ such that the bound-state energy is E=ħ^2k^2/2m_r=-ħ^2κ^2/2m_r. The parameters a and b, Eqs. (<ref>) and (<ref>), become real, a = 1/2(λ-κ/μ), b = 1/2(λ+κ/μ). To impose the boundary condition u(r→∞)=0, we need to investigate the asymptotic behavior of Eq. (<ref>). The first term inside the square brackets diverges as exp(+κ r), while the second behaves as exp(-κ r). Hence, a normalizable solution is only possible if the first term vanishes, given that κ>0. The Γ functions in Eq. (<ref>) now have real arguments, since a and b are real. The Γ function has poles at negative integers, which can be used to make the first term vanish. There are two Γ functions in the denominator, which take as arguments (b+1/2) and (1-a). The first argument is never negative, b+1/2=λ/2+κ/2μ+1/2>0. However, I can equate the second argument to negative integers, 1-a=1-λ/2+κ/2μ=-n, (n=0,1,2,...). Solving for κ and substituting into Eq. (<ref>) yields E=-ħ^2μ^2/2m_r(λ-2-2n)^2, (n=0,1,2,... and n⩽λ/2-1), which is the desired s-wave bound-state spectrum. It is consistent with the known property of the mPT potential that unitarity corresponds to λ=2 since, for this value, we only have a zero-energy state. In Fig. <ref>, I illustrate the energy level dependence on the parameter λ.
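As a consistency check, the spectrum just derived can be reproduced numerically by diagonalizing the reduced radial Hamiltonian on a finite grid with u(0) = u(R) = 0. The sketch below works in units ħ = m_r = 1, so that E_n = -(μ²/2)(λ-2-2n)²; the parameter values λ = 5.5, μ = 1 and the grid settings are arbitrary illustrative choices, and a standard second-order finite-difference discretization is used.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Units: hbar = m_r = 1, so H = -(1/2) d^2/dr^2 + V(r) and the analytical
# s-wave spectrum is E_n = -(mu^2/2) (lam - 2 - 2n)^2.
lam, mu = 5.5, 1.0

# Radial grid, excluding r = 0 and r = R where u vanishes (Dirichlet boundaries).
R, N = 40.0, 4000
h = R / (N + 1)
r = h * np.arange(1, N + 1)

V = -0.5 * mu**2 * lam * (lam - 1.0) / np.cosh(mu * r) ** 2

# Finite-difference Hamiltonian: tridiagonal matrix.
diag = 1.0 / h**2 + V
offdiag = -0.5 / h**2 * np.ones(N - 1)
energies = eigh_tridiagonal(diag, offdiag, eigvals_only=True)

bound_numeric = energies[energies < 0.0]          # box continuum states have E > 0
n_max = int(np.floor(lam / 2.0 - 1.0))
bound_analytic = [-0.5 * mu**2 * (lam - 2.0 - 2.0 * n) ** 2 for n in range(n_max + 1)]

for n, (En, Ea) in enumerate(zip(bound_numeric, bound_analytic)):
    print(f"n = {n}:  numerical {En:.5f}   analytical {Ea:.5f}")
```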
http://arxiv.org/abs/2407.02702v1
20240702225101
Practical Guide for Causal Pathways and Sub-group Disparity Analysis
[ "Farnaz Kohankhaki", "Shaina Raza", "Oluwanifemi Bamgbose", "Deval Pandya", "Elham Dolatabadi" ]
cs.CY
[ "cs.CY", "cs.LG", "stat.ME" ]
§ ABSTRACT In this study, we introduce the application of causal disparity analysis to unveil intricate relationships and causal pathways between sensitive attributes and the targeted outcomes within real-world observational data. Our methodology involves employing causal decomposition analysis to quantify and examine the causal interplay between sensitive attributes and outcomes. We also emphasize the significance of integrating heterogeneity assessment in causal disparity analysis to gain deeper insights into the impact of sensitive attributes within specific sub-groups on outcomes. Our two-step investigation focuses on datasets where race serves as the sensitive attribute. The results on two datasets indicate the benefit of leveraging causal analysis and heterogeneity assessment not only for quantifying biases in the data but also for disentangling their influences on outcomes. We demonstrate that the sub-groups identified by our approach to be affected the most by disparities are the ones with the largest ML classification errors. We also show that grouping the data only based on a sensitive attribute is not enough, and through these analyses, we can find sub-groups that are directly affected by disparities. We hope that our findings will encourage the adoption of such methodologies in future ethical AI practices and bias audits, fostering a more equitable and fair technological landscape. § INTRODUCTION Fairness in data science and machine learning (ML) is indispensable for the responsible development and deployment of ethical artificial intelligence (AI) technologies <cit.>. Key tools in data science, including Aequitas <cit.>, AI Fairness 360 <cit.>, and Fairlearn <cit.> play a pivotal role in addressing fairness challenges in ML models, focusing on concepts such as demographic parity and equalizing statistics across sensitive attribute groups <cit.>. However, these approaches can lead to fairness gerrymandering, where broad fairness across high-level groups masks unfair treatment within sub-groups <cit.>. Sub-group fairness approaches <cit.> have emerged to address this, aiming to reconcile group and individual fairness notions <cit.>. Furthermore, understanding and quantifying the extent to which the observed disparity in outcomes, such as those seen with demographic parity, is attributed to the causal influence of sensitive attributes is crucial in fields, including health and social sciences <cit.>. Causality-based fairness frameworks view disparity as the causal effect of sensitive attributes S on outcomes Y, raising fundamental questions about how changes in these attributes affect average outcomes <cit.>. These methodologies revolve around a central question: if the sensitive attribute S changed (e.g., changing from marginalized group s_1 to non-marginalized group s_2), how would the outcome Y change on average? Two prominent causal frameworks, the structural causal model (SCMs) <cit.> and the potential outcome framework <cit.>, have been utilized for causal fairness analysis and more particularly to quantify the disparity <cit.>. SCMs assume that we have full knowledge of the causal graph, enabling us to decompose the causal effect of any variable into different paths, such as direct and indirect effects. 
On the other hand, the potential outcome framework <cit.> does not assume the availability of the causal graph and instead focuses on estimating the causal effects of treatment variables. However, a common challenge across all causal models is identifiability, referring to whether they can be uniquely measured from observational data <cit.>. This poses a critical barrier to applying these notions to real-world scenarios. Randomized experiments, considered the gold standard for inferring causal relationships in statistics, are often not feasible or cost-effective in the context of disparity analysis <cit.>. Therefore, in most cases, the causal relationship must be inferred from observational data rather than controlled experiments. This limitation has spurred a stream of research aiming to address these challenges and develop more practical and effective methodologies for causal fairness analysis. Early literature in the SCM primarily utilized linear and parametric methods, limiting its capacity to offer a universal approach for analyzing natural and social phenomena characterized by non-linearities and interactions <cit.>. Later, Pearl introduced the causal mediation formula designed for arbitrary non-parametric models, serving as a valuable tool for decomposing total effects <cit.>. Subsequently, a substantial body of literature emerged, focusing on causal effect decomposition under the rubric of mediation analysis and proposing various optimization problems to adapt the causal framework for fairness analysis <cit.>. One notable framework <cit.> addresses spurious effects in the decomposition of causal effects and explores the relationships between causal and spurious effects with demographic parity, offering practical insights for data science and fairness considerations. In the realm of fairness through causal analysis research, a significant focus lies on sub-group analysis and heterogeneity, approached from two perspectives: one being heterogeneous treatment effects <cit.>, which directly aligns with our study, and the other involving 'counterfactually fair' algorithms for individuals, a topic not directly relevant to our current research <cit.>. The former involves systematically quantifying variations in the causal impact of sensitive attributes on the outcome of interest across sub-groups <cit.>. Approaches for estimating heterogeneous causal effects encompass classical non-parametric methods such as nearest-neighbour matching, kernel methods, and series estimation, demonstrating efficacy in scenarios with a limited number of covariates <cit.>. More recently, data-driven ML algorithms including causal forest which can be adept at handling numerous moderating variables have shown promising results in heterogeneity analysis <cit.>. Building on the urgency of adopting causal reasoning techniques in fairness analysis, the main aim of this study is to leverage causal analysis for sub-group disparity assessment. First, we demonstrate the application of causal disparity analysis to uncover the intricate relationships and causal pathways between sensitive attributes and the outcome of interest in real-world observational data. Then, we close the loop by employing causal disparity analysis for sub-group fairness within the context of ML, showcasing how a causal-aware approach can enhance sub-group fairness evaluation. Our overarching goal is to pave the way for conducting disparity audits that lay the foundation for ethical and equitable ML. 
The novelty of this study lies not in the specific methodologies used but in recognizing causal reasoning as a novel technique for conceptualizing and quantifying disparity, making it suitable for promoting fairness in data science. * We demonstrate the application of causal disparity analysis to quantify and decompose causal pathways between sensitive attributes and the targeted outcomes within two real-world observational data. We successfully indicate the capability of our approach to uncover hidden disparities, even in cases where observed disparities are nearly zero. * We pioneer a novel sub-group discovery method rooted in the concept of Heterogeneity of Treatment Effect, enabling the identification of variations in the magnitude and direction of decomposed causal effects among individuals. * We evaluate the efficacy and utility of our proposed causal disparity analysis in a fairness ML experiment. Our method demonstrates its ability to identify biased performance within each sub-group of individuals, particularly those identified quantitatively as most affected by disparities. § MATERIALS AND METHODS In this section, we will introduce causal disparity analysis through the lens of counterfactual inference and non-parametric SCM proposed by Pearl <cit.> and expanded by Zhang et al. <cit.>. Following this approach, various causal effects can be defined as the difference between two counterfactual outcomes <cit.> along the causal pathway from sensitive attributes (causes) to outcomes. We will elucidate how these effects can be quantitatively measured and estimated from data through the experiments. §.§ Preliminaries Our study is based on a basic causal structure which consists of four random variables (Y, S, X, M) sampled from unknown distribution; S represents the random variable for the sensitive attribute (whose effect we seek to measure). Y represents the random variable for the outcome of interest. X represents the random variable for all in-sensitive attributes, including observed confounders, denoted by C, and mediators, denoted by M. The lowercase (y, s, x, m) represents the values that variables may take. As a running example, S stands for race, M stands for the job title, and Y stands for income amount. Here we consider two potential outcomes Y_s1 and Y_s2 for sensitive attributes, S=s_1,s_2. E[Y_s,m] stands for E[Y|do(S=s, M=m)] which is interpreted as the expectation of potential outcome Y when the sensitive attribute S is set to s and the mediator variable M is set to m. Sensitive attributes, S, that serve as the basis for disparity encompass a range of personal characteristics that have historically been unfairly targeted to differentiate individuals <cit.>. These attributes are pivotal in discussions surrounding equity, inclusion, and human rights and are commonly discussed in anti-discrimination laws <cit.>, regulations, and human rights frameworks around the globe. Among these attributes, a notable array includes race, nationality, ethnic origin, colour, religion, age, sex, sexual orientation, gender identity or expression, marital status, family status, genetic characteristics, or disability <cit.>. In this study, the term sensitive category denotes individuals grouped based on their sensitive attributes. The term sub-group refers to individuals grouped according to the quantity of their estimated causal effects. 
§.§ Causal Disparity Analysis Within the context of counterfactual fairness, the causal effect is characterized as the difference between two potential (also called counterfactual) outcomes: one outcome, Y_s_1, if the sensitive attribute is s_1 (for instance, if the individual is female), and the other outcome, Y_s_2, for s_2 (in this case, if the individual is not female). Due to the presence of the mediator, the potential outcomes are not only dependent on sensitive attributes but also on mediator values <cit.>. In this way, the causal effect can be decomposed into the counterfactual direct, indirect, and spurious effects. The counterfactual measures of direct and indirect effects are conditional versions of the natural direct and indirect effect introduced by Pearl <cit.> and are widely popular throughout the empirical sciences. Here we define the causal and non-causal fairness criteria we have used in this study: Total Variation (TV), also known as demographic parity, represents the statistical difference in the conditional distribution of the outcome between the two groups when simply observing S = s_2 compared to S = s_1: TV(Y) = P(Y|S = s_2)- P(Y|S = s_1) The counterfactual direct effect (ctf-DE) is the average difference between two potential outcomes when the sensitive attribute transitions from S = s_1 (female) to S = s_2 (not female), while the mediator is kept at whatever value it would have naturally attained under S = s_1, i.e., prior to the change, for a specific sub-group of the population, s. ctf-DE_s_1,s_2 (Y|s)= E[Y_s_2,M_s_1- Y_s_1,M_s_1|s] The counterfactual indirect effect (ctf-IE) is the average difference between two potential outcomes when the sensitive attribute remains constant at s_1 (female), while the mediator changes from its values under s_1 to whatever value it would have attained for each individual under s_2 (not female), for a specific sub-group of the population, s. ctf-IE_s_1,s_2(Y|s)= E[Y_s_1,M_s_2- Y_s_1,M_s_1|s] According to Zhang <cit.>, direct and indirect causal effects can be linearly combined and contribute to total variation by introducing an additional term that uncovers the spurious relations between S and Y through confounding variables, X. TV_s_1,s_2(Y) = DE_s_1,s_2 (Y|s) - IE_s_2,s_1(Y|s) - SE_s_2,s_1(Y) The counterfactual spurious effect (ctf-SE) measures the average difference in the outcome Y, had S been set to s_1 by intervention, between the units that naturally attain S = s_2 and those that naturally attain S = s_1. SE, in fact, measures all paths between S and Y except the causal ones (direct and indirect), ctf-SE_s_1,s_2(Y)= E[Y_s_1|s_2] - E[Y|s_1] In order to estimate these counterfactual quantities from data, we assume the presence of unconfoundedness between the sensitive attribute and outcome, along with the assumption of conditional ignorability. Moreover, leveraging the following two assumptions: (1) none of the confounders are descendants of S and (2) confounders block all backdoor paths from mediators to Y, we can express counterfactual quantities in terms of conditional distributions. §.§ Sub-group discovery for heterogeneity assessment We conduct sub-group discovery to identify and quantify causal effect heterogeneity among individuals from distinct sensitive categories (sensitive attributes are changed while keeping all other relevant variables constant). 
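To make these estimands concrete, the following minimal sketch computes the observational TV and a naive plug-in version of the direct effect on synthetic data with binary S, M, and Y and no confounders, so that the identification assumptions above hold by construction. It is only an illustration of the quantities themselves, not of the estimation pipeline used in this study; the data-generating process and variable names are invented for the example.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 200_000

# Synthetic data: S = sensitive attribute (0 = s1, 1 = s2),
# M = binary mediator influenced by S, Y = binary outcome.
S = rng.integers(0, 2, n)
M = rng.binomial(1, 0.30 + 0.20 * S)
Y = rng.binomial(1, 0.20 + 0.05 * S + 0.15 * M)
df = pd.DataFrame({"S": S, "M": M, "Y": Y})

# Total variation (demographic parity): P(Y=1 | s2) - P(Y=1 | s1).
tv = df.loc[df.S == 1, "Y"].mean() - df.loc[df.S == 0, "Y"].mean()

# Plug-in direct effect: sum_m [E[Y | s2, m] - E[Y | s1, m]] P(m | s1),
# i.e. S is switched while the mediator is held at its s1 distribution.
p_m_given_s1 = df.loc[df.S == 0, "M"].value_counts(normalize=True)
de = sum(
    (df.loc[(df.S == 1) & (df.M == m), "Y"].mean()
     - df.loc[(df.S == 0) & (df.M == m), "Y"].mean()) * p
    for m, p in p_m_given_s1.items()
)

print(f"TV = {tv:.3f}   (true value 0.05 + 0.15*0.20 = 0.08)")
print(f"DE = {de:.3f}   (true value 0.05)")
```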
Generalized Random Forest (GRF) <cit.>, an extension of the traditional Random Forest that maximizes heterogeneity when splitting nodes in a decision tree, is employed in this study to estimate conditional (individual-level) causal effects. It incorporates a statistical criterion known as the Causal tree-splitting criterion, which integrates sensitive attribute assignments and outcome variables. GRF provides estimates of both average and individual causal effects, facilitating the detection of differential effects among sub-groups. This allows for the clustering and grouping of individual effects to reveal varying causal impacts. Essentially, GRF compares individuals within a sub-group to counterparts with different sensitive attributes while aiming to closely match all other relevant attributes. §.§ Experiment and Setting We have leveraged causal and sub-group analysis for disparity analysis using the pipeline shown in Figure <ref> on two publicly available datasets, where we selected race as the sensitive attribute. We identified individuals with one race as the s_1 group and the rest as the s_2 group. Please refer to Table <ref> for more details on the attributes designated as confounders, mediators, and outcomes. Within our pipeline, we utilized the faircause library <cit.> and GRF <cit.> for causal effect estimations. Adult. The adult dataset <cit.> is a multivariate dataset designed to predict whether an individual's annual income will exceed $50,000. This prediction is based on census data and is commonly known as the 'Census Income' dataset. The data extraction was carried out by Barry Becker, utilizing the 1994 Census database. In this dataset, the goal is to identify and quantify the basic impact of individuals’ race (specifically white) on income as listed in detail in Table <ref>. HDMA. The Home Mortgage Disclosure Act (HMDA) <cit.> mandates numerous financial institutions to uphold, report, and openly divulge mortgage-related information. These publicly accessible data hold significance as they provide insights into whether lenders are effectively addressing their communities' housing requirements. They also furnish public officials with valuable information to facilitate decision-making and policy formation, while also unveiling lending trends that could potentially exhibit bias. For our experiments, we leveraged the HDMA “Washington State Home Loans, 2016" dataset comprising a total of 466,566 instances of home loans within the state of Washington. The variables encompass a diverse range of information, including demographic details, location-specific data, loan status, property and loan types, loan objectives, and the originating agency. For HDMA, we conducted two sets of experiments, referred to as HDMA-White and HDMA-Asian, to explore the impact of the presence and absence of specific races on the outcome. Please refer to Table <ref> for more details. § RESULTS §.§ Causal aware disparity analysis In Table <ref>, we present total variations along with decomposed causal effects using the causal forest for the experimental datasets. All metrics are computed based on the difference in outcomes when the sensitive attribute transitions from s_1 to s_2. Positive results for our experiment favour individuals with s_2, while negative results favour the other sensitive group, which is s_1. Furthermore, we conducted comparisons of the causal effect estimates in our pipeline with two other widely-used causal decomposition libraries, as illustrated in Supplementary Table <ref>. 
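Before turning to the results, the individual-level estimation step can be illustrated with a simple T-learner in scikit-learn, used here purely as a stand-in for the causal forest in our pipeline: one outcome model is fit per sensitive category, and the difference of their predicted outcome probabilities serves as an individual-level effect estimate that can later be binned into sub-groups. The covariates, column names, and data-generating process below are invented for the illustration and do not correspond to the Adult or HDMA schemas.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
n = 20_000

# Synthetic covariates X, sensitive attribute S, and binary outcome Y whose
# dependence on S varies with x1 (a heterogeneous direct effect).
X = pd.DataFrame({"x1": rng.normal(size=n), "x2": rng.normal(size=n)})
S = rng.integers(0, 2, n)
p = 1.0 / (1.0 + np.exp(-(0.5 * X.x1 + 0.3 * X.x2 + 0.6 * S * (X.x1 > 0))))
Y = rng.binomial(1, p)

# T-learner: one outcome model per sensitive category.
m1 = GradientBoostingClassifier().fit(X[S == 1], Y[S == 1])
m0 = GradientBoostingClassifier().fit(X[S == 0], Y[S == 0])

# Individual-level effect estimate: P(Y=1 | x, s2) - P(Y=1 | x, s1).
ite = m1.predict_proba(X)[:, 1] - m0.predict_proba(X)[:, 1]

print(f"mean estimated effect:        {ite.mean():.3f}")
print(f"mean effect where x1 > 0:     {ite[(X.x1 > 0).values].mean():.3f}")
print(f"mean effect where x1 <= 0:    {ite[(X.x1 <= 0).values].mean():.3f}")
```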
Our findings reveal that within the Adult dataset, individuals from the s_2 group are approximately 10.4% more inclined to obtain an annual income exceeding $50,000 than the s_1 group. Through causal analysis, we discern that about 1.5% of this 10.4% can be directly attributed to the causal influence of the sensitive attribute on the annual income (the full distribution of the ctf-DE is shown in the supplementary materials, Figure <ref>). Additionally, approximately 3.2% can be attributed to an indirect effect mediated through other factors shown in Table <ref>, while the remaining 6% is attributable to spurious effects. In both experiments conducted with the HDMA dataset, minimal disparities were observed through TV. Specifically, there was a mere 4% difference and nearly zero TV between the two sensitive groups in loan acceptance status. In the first HDMA experiment, featuring a 4% TV, a 5.5% direct effect from race to loan status was noted, while both the indirect and spurious effects were negligible. However, the analysis of the second experiment (last row in Table <ref>) yielded intriguing results. Despite the absence of TV and an indirect effect, there was a 3% direct effect observed alongside an approximately 1.7% negative spurious effect from the relevant factor to loan status. Additionally, as shown in the supplementary materials (Figure <ref>), the full distribution of direct causal effects for both HDMA experiments is skewed positively in favour of White and non-Asian sub-populations. Figure <ref> presents the top attributes identified within each experimental setting for direct causal effect estimation. In the Adult dataset, key attributes include age, education, workclass, occupation, and hours per week, while for HDMA, loan amount and application income are pivotal. §.§ Sub-group analysis Drawing on the distribution of the ctf-DE, as shown in the supplementary materials (Figure <ref>), we examined the trade-off between ensuring consistency across datasets with varying distributions and maintaining intra-group alignment on ctf-DE ranges for each dataset to minimize variations. We, therefore, determined four distinct sub-groups, with the summary shown in Table <ref> for categorical variables and Figure <ref> for continuous variables, across the two experimental datasets. The sub-groups are arranged from negative to positive direct causal effect values, with Sub-group 1 representing ctf-DE values less than -0.01 (negative effects in favour of individuals in the s_1 category), Sub-group 2 comprising ctf-DE values between -0.01 and 0.01 (around zero effects), Sub-group 3 encompassing ctf-DE values between 0.01 and 0.05 (positive effects in favour of individuals in the s_2 category), and Sub-group 4 indicating values greater than 0.05 (very positive effects in favour of individuals in the s_2 category) for the Adult dataset. For the HDMA dataset, the sub-groups are slightly different due to the ctf-DE values being skewed positively. Sub-group 1 has ctf-DE values less than -0.005, Sub-group 2 ctf-DE values between -0.005 and 0.025, Sub-group 3 ctf-DE values between 0.025 and 0.07 (in favour of White or non-Asian), and Sub-group 4 indicating values greater than 0.07. For the categorical variables, we have reported the counts for the majority and non-majority categories for each sensitive (racial) group within each of the sub-groups. 
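Given per-individual ctf-DE estimates, the sub-group assignment described above reduces to a simple binning step. A short sketch with pandas is shown below, using the Adult-dataset cut points quoted in the text (-0.01, 0.01, 0.05) and a placeholder array standing in for the estimated effects.

```python
import numpy as np
import pandas as pd

# Placeholder individual-level ctf-DE estimates (e.g., from a causal forest).
rng = np.random.default_rng(2)
ctf_de = rng.normal(loc=0.02, scale=0.03, size=10_000)

# Cut points for the Adult dataset as described in the text.
bins = [-np.inf, -0.01, 0.01, 0.05, np.inf]
labels = ["Sub-group 1", "Sub-group 2", "Sub-group 3", "Sub-group 4"]

subgroup = pd.cut(ctf_de, bins=bins, labels=labels)
print(pd.Series(subgroup).value_counts().sort_index())
```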
Evidently, there are remarkable similarities in both the majority and minority counts and the means and standard deviations between the two sensitive categories within each sub-group. §.§ Applications for fairness in Machine Learning In order to assess the practical utility and effectiveness of our causal disparity analysis in ML and automated decision-making, we trained an XGBoost classifier on both datasets to predict outcomes (referred to as the outcome node in Table <ref>). We created an 80-20% train-test split using stratified sampling from all sub-groups (direct causal effect values). We computed classification results for the test set within each sub-group using the AI Fairness 360 <cit.> library. Tables <ref> present the classification results, with the first row indicating the average performance and the last four rows representing the heterogeneity of performance across sub-groups. Across all experiments, performance varies among the sub-groups, with Sub-group 4 exhibiting worse performance and higher variability for all datasets, except for the recall value of the Adult dataset: the Adult dataset (Precision: 0.76 (95% CI: 0.016), Recall: 0.71 (0.007), Accuracy: 0.74 (0.007)), HDMA-White (Precision: 0.69 (0.000), Recall: 0.89 (0.013), Accuracy: 0.68 (0.005)), and HDMA-Asian (Precision: 0.68 (0.001), Recall: 0.86 (0.011), Accuracy: 0.67 (0.003)). Overall, the performance of Sub-groups 1 and 4 is lower than that of the other sub-groups. To better gauge the fairness of our ML classifier in our experiments and evaluate how decisions would differ if the circumstances were different, we plotted the performance gaps for the accuracy, recall, and precision between the two sensitive categories (s_2 - s_1) in Figure <ref> across all sub-groups. Notably, almost 70% of the performance gaps (positive gaps) favour the sensitive category s_2, which corresponds to White individuals for Adult and HDMA-White, and the non-Asian category for HDMA-Asian. As the plots indicate, the absolute value of the aggregated performance gap ranges from 0 to 0.07, whereas within sub-groups, the variation is more pronounced. The largest gaps are observed in Sub-group 1 (Precision: 0.5, Recall: 0.27, and Accuracy: 0.29) for the Adult dataset and Sub-group 4 for HDMA-White (Precision: 0.06, Recall: 0.09, and Accuracy: 0.06) and HDMA-Asian (Precision: 0.05, Recall: 0.08, and Accuracy: 0.04). The combination of performance gaps and lower performance in Sub-group 4 indicates the model's bias toward one of the sensitive categories. Of particular interest are the significant gaps in recall measures (higher false negative rates for one of the sensitive categories) among individuals in Sub-group 4 for both HDMA experiments. § DISCUSSION §.§ Main Findings In this study, we have demonstrated the utilization of causal disparity analysis to show the complex relationships and causal pathways linking sensitive attributes (such as race) to real-world observational data outcomes (such as loan status or income), supplementing total variation (TV), also referred to as demographic parity. Our analysis is rooted in the assumptions of a basic causal graph, from which all findings are derived. Notably, our key finding reveals a direct causal link between race and loan status or income, which might not have been apparent from the observed disparities alone. In the Adult dataset, our analysis reveals the presence of indirect effects through mediators, a phenomenon that resonates with prior research by Binkytė et al. <cit.>. 
However, the author's exploration of fairness measures across different causal discovery algorithms and causal paths demonstrated significant variability in the observed discrimination. Considering the presence of direct causal effects within our datasets, we delved deeper into the variability among individuals regarding how race directly influences their outcomes. This variability led to the identification of four distinct sub-groups, each sharing similar characteristics except for race. In other words, within each sub-group, all covariates except race remained consistent, with race being hypothetically randomized. The ML model used in our study showed varying performance across these sub-groups. Sub-groups with higher and positive direct causal effects, which exhibited larger disparities in outcomes attributed to race, experienced lower model performance. This performance gap within these sub-groups indicates potential unfairness and bias in the ML model, suggesting that race may be a factor contributing to disparate outcomes. In all three experiments, the larger gap in false negative rates for Sub-group 4, which is not in favor of non-whites and Asians, suggests that the classifier tends to incorrectly predict loan status as rejected when it is actually accepted among these individuals, compared to white individuals within the same sub-group. This indicates a bias in the predictions against non-white individuals. Similarly, in the HDMA-Asian dataset, there is a similar disparity where predictions are biased against non-Asians. Furthermore, for the Adult dataset, in addition to the large recall gap for Sub-group 4, there is a large gap in the true positive rates in Sub-group 1 in favour of white individuals. This implies that the classifier is more successful at correctly predicting high income among white individuals in Sub-group 1 compared to non-white individuals within the same sub-group. This suggests a bias in favor of white individuals in predicting high income. In essence, this is a nuanced finding that cannot be captured solely by dividing the entire sample size into privileged and unprivileged groups based on the sensitive attribute alone which is race in our case. Our research findings are in accordance with existing literature in two significant respects. First, employing decomposed and structural causal analysis, our results resonate with a substantial body of research delving into mediating mechanisms by estimating both natural direct and indirect effects within the potential outcome framework <cit.>. Our causal methodology experiments echo the trajectory of research pioneered by counterfactual causal fairness analysis <cit.> working on quantifying discrimination, decomposing variations, and deriving empirical measures of fairness from data. Second, considering heterogeneity in causal effects, our approach and findings align with other studies where the concept of heterogeneous treatment effects and the use of causal forest have been employed <cit.>. For instance, similar methodologies have been leveraged in analyzing environmental policy effects <cit.>, conducting cost-effectiveness analyses encompassing outcomes, costs, and net monetary benefits <cit.>, as well as in assessing educational interventions and grading discrimination <cit.>. §.§ Limitations and Future Directions As ML advances at an unprecedented pace, its societal implications have attracted heightened scrutiny. 
Consequently, the importance of conducting disparity analysis has been emphasized in the contemporary landscape. While this study has provided valuable insights into causal disparity analysis, it's essential to acknowledge several limitations and explore potential avenues for improvement. The analysis primarily focused on disparities related to a single protective attribute, such as race. However, this narrow focus may not fully capture the intricate interplay of multiple factors contributing to discrimination and bias in real-world scenarios. Future research should consider incorporating intersectional disparity analysis, which examines how multiple protective attributes intersect and interact to shape outcomes. In line with this, future work should also involve a thorough exploration of diverse causal discovery algorithms and identification methods. It's worth noting that the reliance solely on a basic causal graph framework in this study presents a limitation, as it may oversimplify the intricate causal relationships inherent in real-world data. Additionally, the datasets used in this study may not comprehensively represent the diversity and complexity of real-world populations. Limited diversity within the datasets can lead to biased results and may not encompass the full range of experiences and challenges faced by individuals from marginalized or underrepresented groups. Future work should involve utilizing more diverse and representative datasets, validating the findings within specific contexts, and identifying any context-specific factors that may influence fairness and bias. To conclude, our study emphasized the imperative of delving into causal pathways, decomposing them, and assessing heterogeneity among individuals. This approach not only offers a comprehensive understanding of disparities within the data but also enables targeted interventions and strategies to promote fairness and equity. § ACKNOWLEDGEMENT The authors would like to acknowledge the support from Vector Institute and its vibrant community working at the intersection of machine learning and fairness. § NATURAL EFFECTS In Table <ref>, we provide estimations of natural effects using three methods—CFA-CRF <cit.>, CFA-MedDML <cit.>, and twangmediation <cit.>—for all three datasets. § HISTOGRAM OF CTF-DE AND NDE VALUES In Figure <ref>, we provide histogram plots of ctf-DE values for the three datasets: Adult, HDMA-White, and HDMA-Asian. Each histogram provides a visual representation of the distribution and spread of ctf-DE values within each dataset. These figures provide us with the knowledge to find optimal sub-groups. In Figure <ref>, we provide histogram plots of NDE values for all three datasets as well.
http://arxiv.org/abs/2407.02999v1
20240703105857
Fermi Surface Nesting Driving the RKKY Interaction in the Centrosymmetric Skyrmion Magnet Gd2PdSi3
[ "Yuyang Dong", "Yosuke Arai", "Kenta Kuroda", "Masayuki Ochi", "Natsumi Tanaka", "Yuxuan Wan", "Matthew D. Watson", "Timur K. Kim", "Cephise Cacho", "Makoto Hashimoto", "Donghui Lu", "Yuji Aoki", "Tatsuma D. Matsuda", "Takeshi Kondo" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci", "cond-mat.str-el" ]
Institute for Solid State Physics, The University of Tokyo, Kashiwa, Chiba 277-8581, Japan Institute for Solid State Physics, The University of Tokyo, Kashiwa, Chiba 277-8581, Japan Graduate School of Advanced Science and Engineering, Hiroshima University, Higashi-hiroshima, Hiroshima 739-8526, Japan International Institute for Sustainability with Knotted Chiral Meta Matter (WPI-SKCM^2), Hiroshima University, Higashi-hiroshima, Hiroshima 739-8526, Japan Department of Physics, Osaka University, Toyonaka, Osaka 560-0043, Japan Forefront Research Center, Osaka University, Toyonaka, Osaka 560-0043, Japan Department of Physics, Tokyo Metropolitan University, Tokyo 192-0397, Japan Institute for Solid State Physics, The University of Tokyo, Kashiwa, Chiba 277-8581, Japan Diamond Light Source Ltd, Harwell Science and Innovation Campus, Didcot, OX11 0DE, United Kingdom Diamond Light Source Ltd, Harwell Science and Innovation Campus, Didcot, OX11 0DE, United Kingdom Diamond Light Source Ltd, Harwell Science and Innovation Campus, Didcot, OX11 0DE, United Kingdom Stanford Synchrotron Radiation Lightsource, SLAC National Accelerator Laboratory, Menlo Park, CA 94025, USA Department of Physics, Tokyo Metropolitan University, Tokyo 192-0397, Japan kondo1215@issp.u-tokyo.ac.jp Institute for Solid State Physics, The University of Tokyo, Kashiwa, Chiba 277-8581, Japan Trans-scale Quantum Science Institute, The University of Tokyo, Tokyo 113-0033, Japan § ABSTRACT The magnetic skyrmions generated in a centrosymmetric crystal were recently first discovered in Gd_2PdSi_3. In light of this, we observe the electronic structure by angle-resolved photoemission spectroscopy (ARPES) and unveil its direct relationship with the magnetism in this compound. The Fermi surface and band dispersions are demonstrated to have a good agreement with the density functional theory (DFT) calculations carried out with careful consideration of the crystal superstructure. Most importantly, we find that the three-dimensional Fermi surface has extended nesting which matches well the q-vector of the magnetic order detected by recent scattering measurements. The consistency we find among ARPES, DFT, and the scattering measurements suggests the Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction involving itinerant electrons to be the formation mechanism of skyrmions in Gd_2PdSi_3. Fermi Surface Nesting Driving the RKKY Interaction in the Centrosymmetric Skyrmion Magnet Gd2PdSi3 Takeshi Kondo July 8, 2024 =================================================================================================== Magnetic skyrmions are topologically non-trivial particles with swirling spin texture in real space, recently interested as a next-generation physical medium leading toward future spintronic device applications <cit.>. Magnetic skyrmions were first discovered in non-centrosymmetric magnets and the consensus has been reached that the Dzyaloshinskii-Moriya interaction is the key mechanism <cit.>. However, the skyrmion size in this mechanism tends to become rather large (10 nm - 200 nm), which is viewed as the major drawback for applications <cit.>. Very recently, it was revealed from a study of Gd_2PdSi_3 that skyrmions can be generated even in centrosymmetric crystals where the Dzyaloshinski-Moriya interaction should not exist <cit.>. Interestingly, the skyrmion size of this new type is extremely tiny, less than 4 nm. 
Its skyrmion formation mechanism is, however, still controversial among many different ideas, such as the orbital frustration <cit.>, the geometrical frustration <cit.>, the magnetic dipolar interaction <cit.>, and the RKKY interaction induced by the Fermi surface (FS) nesting <cit.>. To solve this situation, it is crucial to reveal the electronic structure of host materials <cit.>. Importantly, the zero-field ground state is known to share the same magnetic modulations (q-vectors) as the skyrmion lattice emerging under the external magnetic field <cit.>. Hence, pinning down the direct relationship between the magnetic modulation and the electronic structure of the ground state is the most fundamental issue in elucidating the skyrmion mechanisms in centrosymmetric magnets. The previous ARPES research of Gd_2PdSi_3 was conducted 14 years ago before the discovery of the skyrmion <cit.>. The paper suggested that the magnetism in Gd_2PdSi_3 and Tb_2PdSi_3 were driven by the FS nesting. However, the nesting vector suggested <cit.> was later turned out different in direction and length from the magnetic q-vector of Gd_2PdSi_3 detected by resonant X-ray scattering (RXS) <cit.> and neutron scattering <cit.>. There are two reasons behind the confusion: Firstly, since the magnetic structures of Gd_2PdSi_3 were unknown at that time, that of Tb_2PdSi_3 they determined was used on behalf of both compounds for the discussion. Secondly, the nesting wave vector was determined from a tight-binding fit to the ARPES data; however, the fitting was rather rough due to the limited quality of data. Hence, the previous claim <cit.> is only correct for Tb_2PdSi_3, but not for Gd_2PdSi_3. Based on the current knowledge of the magnetic order in Gd_2PdSi_3, a recent theory <cit.> newly suggested that the FS nesting which drives the RKKY interaction exists in the barrel-shaped FS at the Brillouin zone center. Another important aspect discovered after the previous ARPES study is that Gd_2PdSi_3 crystals have complex superstructures <cit.> which should be taken into account. In light of these circumstances, it is vital to revisit the band structure of Gd_2PdSi_3 by the high-quality ARPES data and refined band calculations. In this letter, we present the first ARPES results for the thorough band structure of Gd_2PdSi_3 including the kz dispersion. The DFT calculations of the 2a × 2a × 8c superstructure <cit.> are also conducted for the first time. Our ARPES data shows a good agreement with the calculations, including the superstructure-induced band folding. Most importantly, we find the extended FS nesting with the same direction and length as the magnetic order revealed by the previous scattering experiments <cit.>; yet the FS location for the nesting differs from the prediction of the most recent theory <cit.>. Our results indicate that the RKKY-interaction is the formation mechanism of skyrmions in Gd_2PdSi_3. Single crystals of Gd_2PdSi_3 were grown by the Czochralski pulling method in a tetra-arc furnace. The raw materials used were 3N5 (99.95%-pure) Gd, 4N Pd, and 5N Ge. ARPES with the vacuum ultraviolet (VUV-ARPES) was performed at beamline 5-2 of Stanford Synchrotron Radiation Lightsource (SSRL) and beamline I05 of Diamond Light Source in the photon energy range from 100 eV to 200 eV. Soft X-ray ARPES (SX-ARPES) was performed at BL25SU of SPring-8 <cit.> in the photon energy range from 380 eV to 650 eV. 
The (001) surface prepared by a crystal cleavage in situ was measured at 10 K, below the Néel temperature (TN = 21 K). The details of band calculations are described in Supplemental Material. SX-ARPES is commonly used for bulk-sensitive measurements. Figure <ref>(b) shows SX-ARPES intensities at (kx,ky)=(0,0) obtained by sweeping photon energy (or kz value). A clear kz dispersion with a periodicity of 2π/c is observed, as traced by a yellow dotted line, and it determines 435 eV and 500 eV as the Γ and A points, respectively. In Figs. <ref>(c) and <ref>(e), we plot the FS mappings over a wide kx-ky sheet measured at these two photon energies. The FS sheet at Γ [Fig. <ref>(c)] lies across the Brillouin zone (BZ) boundary, whereas that at the A point [Fig. <ref>(e)] shows only circles at the zone center. These are reproduced by our DFT calculations [Fig. <ref>(d),(f)]. We find that the VUV-ARPES data also show the kz dispersion [Fig. <ref>(a)] with the same periodicity as the SX-ARPES data under the same inner potential V_0 = 17.9 eV, thus capturing the bulk information. This is further examined in Figs. <ref>(g) and <ref>(h) by plotting the band dispersion maps along the high-symmetry cuts near the A point (104 eV) and at the Γ point (130 eV), respectively. The overall band dispersions, including the bottom (yellow arrows), which corresponds to the yellow wavy line in Fig. <ref>(a), shift energetically upward on going from A to Γ, due to the kz dispersion. Around 148 eV, which is the Gd 4d - 4f resonant photoemission photon energy <cit.>, high-intensity signals appear at EF [see Fig. <ref>(a)]; this feature is advantageous for investigating the detailed Fermiology of this compound. To fully understand the band structure of Gd_2PdSi_3, the superstructure needs to be taken into account. Without distinguishing the Pd and Si atoms, the crystal structure is hexagonal as in Fig. <ref>(b). The real material has a 2a × 2a × 8c superstructure [Fig. <ref>(c)] due to the ordering of the Pd and Si atoms and their systematic variation along the c-axis <cit.>. Different stacking sequences of the Pd-Si layers lead to distinct superstructure domains, but all of these have equivalent centrosymmetry. The superstructure can affect the band structure in two ways. The first is that the symmetry is reduced to 2-fold. The DFT calculations carried out for one of the superstructure domains result in the 2-fold symmetric Fermi surface [Figs. <ref>(d) and <ref>(f)]. The ARPES data, in contrast, show 6-fold symmetry, since they average signals from different domains. The second is that the superstructure reduces the Brillouin zone (BZ), as represented in Fig. <ref>(a) with blue lines. This effect is indeed observed, as presented below. In Fig. <ref>(d), we examine the band dispersions along the high-symmetry path of Γ-K-M for the primitive BZ [red lines in Fig. <ref>(a)], obtained by VUV-ARPES with 130 eV photons. Here the light polarization used (linear vertical polarization) is different from that of Fig. <ref>(g) (linear horizontal polarization). Our data show the bands folded about the high-symmetry point denoted as (M) in the superstructure BZ; one of those is marked by the dashed yellow line, which is symmetric to the main band (the solid yellow line) about the (M) point. The superstructure would also reduce the BZ along kz to 1/8 times the primitive one, as represented in Fig. 1(a) by blue lines. We found that the resonant photon energy of 148 eV corresponds to the (Γ) point in the reduced BZ. 
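For reference, the photon-energy-to-kz conversion underlying such plots follows the standard free-electron final-state approximation, kz = √(2m_e(E_kin cos²θ + V_0))/ħ with E_kin = hν - W - |E_B|. The sketch below evaluates it at normal emission with the inner potential V_0 = 17.9 eV quoted above; the work function W and the c-axis lattice constant are not specified here, so the values used for them are placeholders for illustration only.

```python
import numpy as np

HBAR2_OVER_2ME = 3.81  # eV * Angstrom^2, value of hbar^2 / (2 m_e)

def kz_normal_emission(hv_eV, V0_eV=17.9, work_function_eV=4.5, E_B_eV=0.0):
    """k_z in 1/Angstrom, free-electron final-state approximation at theta = 0."""
    e_kin = hv_eV - work_function_eV - abs(E_B_eV)
    return np.sqrt((e_kin + V0_eV) / HBAR2_OVER_2ME)

c = 4.1                 # placeholder c-axis lattice constant in Angstrom
G = 2.0 * np.pi / c     # out-of-plane reciprocal-lattice period

for hv in (104, 130, 148, 435, 500):
    kz = kz_normal_emission(hv)
    print(f"hv = {hv:4d} eV  ->  kz = {kz:5.2f} 1/A  ({kz / G:.2f} x 2pi/c)")
```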
This allows one to investigate the detailed FS at kz = 0, as an alternative to measurements at 130 eV, where the intensities near EF are extremely weak due to the matrix element effect. Figure <ref>(e) displays the band dispersion at 148 eV side by side with that at 130 eV [Fig. <ref>(d)] along the in-plane momentum cuts of (M)-(Γ)-(M) and M-Γ-M, respectively. The overall features, including the band folding due to the superstructure, are almost identical between the two, except that the intensities near EF are much higher at 148 eV. In Fig. <ref>(f), the corresponding DFT bands at kz = 0 are overlaid on the enlarged image of Fig. <ref>(e). Although not all bands are visible in the data, a good agreement is seen between the calculations and the ARPES results. The ARPES mapping at 148 eV, corresponding to kz = 0, is presented in Fig. <ref>(a). It is similar to the SX-ARPES data at 435 eV [Fig. <ref>(c)], but with much higher quality. The FS is shaped like a windmill with six wings, which is preferable for nesting. This is consistent with our DFT calculations [Fig. <ref>(b)]; note that the 2-fold symmetry due to the superstructure is absent in the ARPES data, which integrate signals from superstructure domains aligned in different directions. The nesting condition is better when the vector is directed not perpendicular to the Fermi surface (FS) [Fig. <ref>(e)] but angled by 30° [Fig. <ref>(f)]. When the nesting wave vector is perpendicular to the FS, only two sheets out of the six FS sheets are connected [Fig. <ref>(e)]. In contrast, four FS sheets can be connected by a wave vector when it is angled by 30^∘ [Fig. 3(f)]. This is further justified in Supplemental Material, where we calculate a nesting function similar to the Lindhard function and demonstrate that a peak indeed appears at the angled nesting vector. Importantly, this vector direction is the same as that of the magnetic order detected by the previous scattering measurements <cit.>. [Note that the nesting vector direction suggested 14 years ago <cit.> corresponds to neither the case of Fig. <ref>(e) nor that of Fig. <ref>(f).] Figure <ref>(c) exhibits the ARPES dispersion along this direction, represented by the solid line in Fig. <ref>(a). Two V-shaped bands are observed. We estimate the nesting vector length [green and blue arrows in Fig. <ref>(c)] by Lorentzian fitting of the momentum distribution curve (MDC) at EF. The vector lengths similarly determined for various momentum cuts [arrows in Fig. <ref>(a)] are summarized in Fig. <ref>(d). We find that the length is almost constant at different ky values; thus, the FS is extensively nested. Most importantly, the nesting length matches well with that of the magnetic q-vector [the pink line in Fig. <ref>(d)] obtained by the scattering measurements <cit.>. In Fig. <ref>(d), we also plot the nesting vector length extracted from the DFT Fermi surface. The length for the lower-right part of the FS (circled by purple) is consistent with our ARPES results and the magnetic q-vector, further supporting our conclusion. For the lower-left part (circled by orange), there is some mismatch, implying that the superstructure sacrifices the nesting condition to some degree. We also investigate the FS nesting along kz by changing photon energy. Figures <ref>(a)-(c) show the band dispersions measured at different photon energies (141 eV, 143 eV, and 147 eV, respectively) along a momentum cut corresponding to the horizontal line in Fig. <ref>(a). 
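The MDC analysis can be illustrated with a generic two-Lorentzian fit: the two peak positions give the Fermi wave vectors of the nested sheets along the cut, and their separation is taken as the nesting vector length. The sketch below runs on synthetic data; the peak positions, widths, and noise level are invented for the illustration and are not our measured values.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(k, k0, gamma, amp):
    return amp * gamma**2 / ((k - k0) ** 2 + gamma**2)

def mdc_model(k, k1, k2, g1, g2, a1, a2, bg):
    """Two Lorentzian peaks on a constant background."""
    return lorentzian(k, k1, g1, a1) + lorentzian(k, k2, g2, a2) + bg

# Synthetic MDC at E_F (momentum in 1/Angstrom, arbitrary intensity units).
rng = np.random.default_rng(4)
k = np.linspace(-0.8, 0.8, 401)
true = mdc_model(k, -0.35, 0.35, 0.04, 0.04, 1.0, 0.9, 0.1)
mdc = true + rng.normal(scale=0.02, size=k.size)

p0 = (-0.3, 0.3, 0.05, 0.05, 1.0, 1.0, 0.0)
popt, _ = curve_fit(mdc_model, k, mdc, p0=p0)

k1, k2 = popt[0], popt[1]
print(f"fitted peak positions: {k1:.3f}, {k2:.3f} 1/A")
print(f"peak separation (nesting-vector estimate): {abs(k2 - k1):.3f} 1/A")
```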
The photon energies are close to the resonance photon energy (148 eV), providing relatively intense ARPES signals near EF. Similar V-shaped bands are observed for all those photon energies. As summarized in Fig. <ref>(f), the nesting vector length is estimated to be nearly constant within a certain range of kz. Interestingly, however, we find that the length around 143 eV is longer than the others. To understand this, we plot in Fig. <ref>(d) the DFT Fermi surface against kz along the horizontal dashed line in Fig. <ref>(b). The complex Fermi surface with the superstructure-induced band folding is seen along kz. The black arrow in Fig. <ref>(f) indicates the photon energy range of 138-151 eV, at which we could estimate the FS nesting length by ARPES. Figure <ref>(e) sketches the main and folded FSs (solid and dashed lines, respectively) within this arrow region. This explains that the length of the nesting vector (red arrows) peaks at the crossing point of these Fermi surfaces. This behavior is reproduced in Fig. <ref>(f), which plots the DFT calculations on top of the ARPES data. In Fig. <ref>(f), the magnetic q-vector <cit.> is also overlayed, showing a good agreement with the nesting vectors, except for the region with a peak. The extensive nesting observed in both in-plane [Fig. <ref>(d)] and out-of-plane [Fig. <ref>(f)] indicates that the RKKY interaction drives the magnetism in this compound. In conclusion, the intrinsic electronic structure of the centrosymmetric skyrmion magnet Gd_2PdSi_3 was revealed by ARPES and DFT calculations for the first time. We demonstrated the extensive Fermi surface nesting with the direction and length same as the q-vector of the magnetic order previously detected by scattering measurements. These results indicate that the RKKY interaction, mediated by itinerant electrons, is the mechanism for the magnetism in Gd_2PdSi_3. Since the zero-field ground state is known to share the same magnetic modulations as those of the skyrmion lattice, the FS nesting-driven RKKY interaction is implied to be the formation mechanism of the skyrmions with small size (< 4 nm). Our results will provide essential guidance in the material design for centrosymmetric systems yielding small skyrmions that are advantageous for device applications. Finally, we emphasize that our findings differ from the previous results in Gd_2PdSi_3 <cit.>. The ARPES study <cit.> (14 years ago) claimed the FS nesting in a different direction and length from our results and from the magnetic q-vector <cit.>. While the recent theory <cit.> suggests the nesting wave vector same as our result in direction and length, it is claimed to be located in the Fermi surface pocket around the Brillouin zone center. Such a structure is, however, not identified by either our ARPES measurements or our DFT calculations. Our data, instead, revealed that the FS nesting driving the RKKY interaction exists near the boundaries of the Brillouin zone. Acknowledgements: Use of the Synchrotron Radiation Lightsource, SLAC National Accelerator Laboratory, is supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Contract No. DE-AC02-76SF00515. We thank Diamond Light Source for access to beamline I05 under proposals SI30646, SI28930, and SI25416 that contributed to our results. This work was supported by the JSPS KAKENHI (Grants Numbers. JP21H04439, JP22K03517, and JP23H04870), by the Asahi Glass Foundation, by MEXT Q-LEAP (Grant No. 
JPMXS0118068681), by The Murata Science Foundation, and by Tokyo Metropolitan Government Advanced Research (Grant Number H31-1).

References:
[1] A. Neubauer, C. Pfleiderer, B. Binz, A. Rosch, R. Ritz, P. G. Niklowitz, and P. Böni, Phys. Rev. Lett. 102, 186602 (2009). https://doi.org/10.1103/PhysRevLett.102.186602
[2] X. Z. Yu, Y. Onose, N. Kanazawa, J. H. Park, J. H. Han, Y. Matsui, N. Nagaosa, and Y. Tokura, Nature 465, 901 (2010). https://doi.org/10.1038/nature09124
[3] T. Schulz, R. Ritz, A. Bauer, M. Halder, M. Wagner, C. Franz, C. Pfleiderer, K. Everschor, M. Garst, and A. Rosch, Nat. Phys. 8, 301 (2012). https://doi.org/10.1038/nphys2231
[4] N. Romming, C. Hanneken, M. Menzel, J. E. Bickel, B. Wolter, K. von Bergmann, A. Kubetzka, and R. Wiesendanger, Science 341, 636 (2013). https://doi.org/10.1126/science.1240573
[5] N. Nagaosa and Y. Tokura, Nat. Nanotechnol. 8, 899 (2013). https://doi.org/10.1038/nnano.2013.243
[6] U. K. Rößler, A. N. Bogdanov, and C. Pfleiderer, Nature 442, 797 (2006). https://doi.org/10.1038/nature05056
[7] Y. Tokura and N. Kanazawa, Chem. Rev. 121, 2857 (2021). https://doi.org/10.1021/acs.chemrev.0c00297
[8] T. Kurumaji, T. Nakajima, M. Hirschberger, A. Kikkawa, Y. Yamasaki, H. Sagayama, H. Nakao, Y. Taguchi, T.-h. Arima, and Y. Tokura, Science 365, 914 (2019). https://doi.org/10.1126/science.aau0968
[9] T. Nomoto, T. Koretsune, and R. Arita, Phys. Rev. Lett. 125, 117204 (2020). https://doi.org/10.1103/PHYSREVLETT.125.117204
[10] T. Okubo, S. Chung, and H. Kawamura, Phys. Rev. Lett. 108, 017206 (2012). https://doi.org/10.1103/PhysRevLett.108.017206
[11] A. O. Leonov and M. Mostovoy, Nat. Commun. 6, 8275 (2015). https://doi.org/10.1038/ncomms9275
[12] J. A. M. Paddison, B. K. Rai, A. F. May, S. Calder, M. B. Stone, M. D. Frontzek, and A. D. Christianson, Phys. Rev. Lett. 129, 137202 (2022). https://doi.org/10.1103/PhysRevLett.129.137202
[13] R. Ozawa, S. Hayami, and Y. Motome, Phys. Rev. Lett. 118, 147205 (2017). https://doi.org/10.1103/PhysRevLett.118.147205
[14] S. Hayami, R. Ozawa, and Y. Motome, Phys. Rev. B 95, 224424 (2017). https://doi.org/10.1103/PhysRevB.95.224424
[15] Z. Wang, Y. Su, S.-Z. Lin, and C. D. Batista, Phys. Rev. Lett. 124, 207201 (2020). https://doi.org/10.1103/PhysRevLett.124.207201
[16] K. Mitsumoto and H. Kawamura, Phys. Rev. B 104, 184432 (2021). https://doi.org/10.1103/PhysRevB.104.184432
[17] S. Hayami, T. Okubo, and Y. Motome, Nat. Commun. 12, 6927 (2021). https://doi.org/10.1038/s41467-021-27083-0
[18] J. Bouaziz, E. Mendive-Tapia, S. Blügel, and J. B. Staunton, Phys. Rev. Lett. 128, 157206 (2022). https://doi.org/10.1103/PhysRevLett.128.157206
[19] M. Hirschberger, T. Nakajima, S. Gao, L. Peng, A. Kikkawa, T. Kurumaji, M. Kriener, Y. Yamasaki, H. Sagayama, H. Nakao, K. Ohishi, K. Kakurai, Y. Taguchi, X. Yu, T.-h. Arima, and Y. Tokura, Nat. Commun. 10, 5831 (2019). https://doi.org/10.1038/s41467-019-13675-4
[20] N. D. Khanh, T. Nakajima, X. Yu, S. Gao, K. Shibata, M. Hirschberger, Y. Yamasaki, H. Sagayama, H. Nakao, L. Peng, K. Nakajima, R. Takagi, T.-h. Arima, Y. Tokura, and S. Seki, Nat. Nanotechnol. 15, 444 (2020). https://doi.org/10.1038/s41565-020-0684-7
[21] S. Gao, H. D. Rosales, F. A. Gómez Albarracín, V. Tsurkan, G. Kaur, T. Fennell, P. Steffens, M. Boehm, P. Čermák, A. Schneidewind, E. Ressouche, D. C. Cabra, C. Rüegg, and O. Zaharko, Nature 586, 37 (2020). https://doi.org/10.1038/s41586-020-2716-8
[22] R. Takagi, N. Matsuyama, V. Ukleev, L. Yu, J. S. White, S. Francoual, J. R. L. Mardegan, S. Hayami, H. Saito, K. Kaneko, K. Ohishi, Y. Ōnuki, T.-h. Arima, Y. Tokura, T. Nakajima, and S. Seki, Nat. Commun. 13, 1472 (2022). https://doi.org/10.1038/s41467-022-29131-9
[23] J. Ju, H. Saito, T. Kurumaji, M. Hirschberger, A. Kikkawa, Y. Taguchi, T.-h. Arima, Y. Tokura, and T. Nakajima, Phys. Rev. B 107, 024405 (2023). https://doi.org/10.1103/PhysRevB.107.024405
[24] D. S. Inosov, D. V. Evtushinsky, A. Koitzsch, V. B. Zabolotnyy, S. V. Borisenko, A. A. Kordyuk, M. Frontzek, M. Loewenhaupt, W. Löser, I. Mazilu, H. Bitterlich, G. Behr, J. U. Hoffmann, R. Follath, and B. Büchner, Phys. Rev. Lett. 102, 046401 (2009). https://doi.org/10.1103/PhysRevLett.102.046401
[25] F. Tang, M. Frontzek, J. Dshemuchadse, T. Leisegang, M. Zschornak, R. Mietrach, J. U. Hoffmann, W. Löser, S. Gemming, D. C. Meyer, and M. Loewenhaupt, Phys. Rev. B 84, 104105 (2011). https://doi.org/10.1103/PhysRevB.84.104105
[26] T. Muro, Y. Senba, H. Ohashi, T. Ohkochi, T. Matsushita, T. Kinoshita, and S. Shin, J. Synchrotron Rad. 28, 1631 (2021). https://doi.org/10.1107/S1600577521007487
[27] S. R. Mishra, T. R. Cummins, G. D. Waddill, W. J. Gammon, G. van der Laan, K. W. Goodman, and J. G. Tobin, Phys. Rev. Lett. 81, 1306 (1998). https://doi.org/10.1103/PhysRevLett.81.1306
[28] F. Gerken, J. Barth, and C. Kunz, Phys. Rev. Lett. 47, 993 (1981). https://doi.org/10.1103/PhysRevLett.47.993
http://arxiv.org/abs/2407.02903v1
20240703082123
"It's like a rubber duck that talks back": Understanding Generative AI-Assisted Data Analysis Workflows through a Participatory Prompting Study
[ "Ian Drosos", "Advait Sarkar", "Xiaotong Xu", "Carina Negreanu", "Sean Rintel", "Lev Tankelevitch" ]
cs.HC
[ "cs.HC" ]
“It's like a rubber duck that talks back”: Understanding Generative AI-Assisted Data Analysis Workflows through a Participatory Prompting Study (running title: Understanding Generative AI-Assisted Data Analysis through Participatory Prompting)

Ian Drosos (Microsoft Research, Cambridge, UK; t-iandrosos@microsoft.com), Advait Sarkar (Microsoft Research, University of Cambridge, University College London, UK; advait@microsoft.com), Xiaotong Xu (University of California San Diego, La Jolla, USA; xt@ucsd.edu; affiliated with Microsoft Research when this research was conducted), Carina Negreanu (Microsoft Research, Cambridge, UK; cnegreanu@microsoft.com), Sean Rintel (Microsoft Research, Cambridge, UK; serintel@microsoft.com), and Lev Tankelevitch (Microsoft Research, Cambridge, UK; lev.tankelevitch@microsoft.com). Ian Drosos and Advait Sarkar contributed equally.

§ ABSTRACT Generative AI tools can help users with many tasks. One such task is data analysis, which is notoriously challenging for non-expert end-users due to its expertise requirements, and where AI holds much potential, such as finding relevant data sources, proposing analysis strategies, and writing analysis code. To understand how data analysis workflows can be assisted or impaired by generative AI, we conducted a study (n=15) using Bing Chat via participatory prompting. Participatory prompting is a recently developed methodology in which users and researchers reflect together on tasks through co-engagement with generative AI. In this paper we demonstrate the value of the participatory prompting method. We found that generative AI benefits the information foraging and sensemaking loops of data analysis in specific ways, but also introduces its own barriers and challenges, arising from the difficulties of query formulation, specifying context, and verifying results.
CCS Concepts: Human-centered computing → HCI theory, concepts and models; Natural language interfaces; Participatory design. Computing methodologies → Natural language processing; Neural networks; Machine learning. Social and professional topics → User characteristics.

[Figure: The turn-taking phase of the participatory prompting method. (1) Mediated prompting: the participant (P, blue) expresses their intent. The researcher (R, red) formulates a prompt based on this intent and a set of pre-prepared prompting strategies, and enters the prompt into the system. (2) The participant reflects on the result, guided by the researcher, and forms their next intent, after which the study returns to step (1) for the next turn.]

§ INTRODUCTION End-user tools based on generative deep learning, i.e., “generative AI” (defined in Section <ref>), can substantially improve the ability of users to analyse and make sense of data, particularly those without formal expertise or training in data analysis. Data analysis workflows are notoriously tedious, challenging, error-prone, and have high expertise requirements. Generative AI significantly advances the state of the art in facilitating the authoring and debugging of data analysis scripts, reuse of analysis workflows, comprehension of analysis scripts, learning, and exploration <cit.>. The potential change in user behaviour has been described as the generative shift <cit.>. The generative shift posits three axes of change: intensification (more sophisticated automation will be applied to existing workflows), extensification (more workflows will be automated), and acceleration (workflows which were previously costly will be applied in more contexts, as they become cheaper due to their automation).
An important user scenario for the generative shift is in end-user data-driven sensemaking, that is, conducting analyses (often open-ended, ill-defined, and exploratory) within the context of some data (detailed in Section <ref>). Classic examples of end-user data-driven sensemaking include personal and corporate budgeting, financial modelling in spreadsheets, and quantified self <cit.> activities. Less conspicuous examples include travel planning, or choosing a restaurant to visit or film to watch. These involve a mixture of qualitative and quantitative information, and of subjective and “objective” criteria; to choose a film, one might consider one's personal preferences and mood, the preferences of any companions, one's reactions to the trailer, critical reviews and ratings, film duration, genre, director, cast, and so on. As previously noted, generative AI has many applications in data-driven sensemaking. It can suggest relevant datasets or analysis procedures, write data transformation and analysis scripts or spreadsheet formulae, help debug or repurpose existing scripts, suggest subjective criteria for evaluating different options, teach the user how to apply an unfamiliar statistical procedure or tool, or even act as a critic or sounding board, to help the user decompose and refine an ill-defined problem. Faced with such a breadth of applications, the key question facing system designers is therefore one of scope: where are the greatest opportunities and challenges for improving the end-user experience of data-driven sensemaking with generative AI? Our study is the first to apply the participatory prompting protocol by <cit.> to explore the opportunities and challenges of generative AI for end-user sensemaking with data. Participatory prompting is a researcher-mediated interaction between the participant and a broad, open-ended AI system, such as OpenAI ChatGPT or Microsoft Bing Chat. The latter are “broad” in the sense that they are designed to support assistance in a wide range of workflows. By virtue of being researcher-mediated, participant experiences can be grounded in actual AI capabilities, scoped down by the researcher to a particular domain (in our case, data-driven sensemaking). We further discuss the value of participatory prompting in the description of our method (Section <ref>). Our study found that generative AI supports data analysis workflows in the information foraging loop by streamlining information gathering, and in the sensemaking loop by helping users generate hypotheses and develop strategies to test them (Section <ref>). However, we also found challenges to effective use of generative AI in data sensemaking workflows. These included forming effective queries, giving context to the AI, long or vague responses causing information overload, and frustrations with the verification of generated results (Section <ref>). These results provide a range of implications for design, such as assisting users in building detailed prompts that contain the context needed by AI to be effective, helping users verify AI responses, and better integration with feature-rich application workflows (detailed in Section <ref>). As well as the domain-specific results, in this paper we also reflect on the value of the participatory prompting method for developing insights via mediated interaction that might otherwise remain unidentified. We discuss how it might expand to other fields of interest (Section <ref>), but also note some of its limitations in practice.
These limitations include striking a balance in experimenter intervention to prevent over-influencing participant workflows, and potential inconsistencies between how researchers create and apply prompt strategies, which may reduce the reproducibility of results (detailed in Section <ref>). § BACKGROUND To clarify our guiding question, in this section we explain the concepts of sensemaking (Section <ref>), generative AI (Section <ref>), and end-user programming (Section <ref>), and summarise previous work on intelligent assistance for data analysis (Section <ref>). §.§ Sensemaking We adopt Pirolli and Card's concept of sensemaking <cit.>, which shares roots with Weick's <cit.> organizational sense-making, but is focused on data analysis rather than social psychology. Sensemaking is the process by which individuals gather information, represent it schematically for interpretation, and develop insights into its meaning to create useful knowledge products. Sensemaking involves two iterative processes: (1) information foraging <cit.> and (2) hypothesis development and testing (the latter by itself is also called the “sensemaking loop”). The sensemaking framework is heavily influential and has been applied to understand data analyst workflows in multiple scenarios, such as navigating large datasets <cit.>, and understanding unfamiliar data visualisations <cit.>. Notably, the latter study suggested that novices struggle to construct correct initial mental models (“frames”) to inform exploration, tending to persist with incorrect frames. To support sensemaking, the authors suggest that system designers should consider strategies like scaffolded introduction of visualizations or targeted annotation to aid formation of valid initial mental models. A recent study explored how novice data analysts make sense of computational notebooks <cit.>. They developed an interface called Porpoise that groups code cells and adds structured labels to support these tasks (thus implementing the scaffolding and targeted annotation suggested by previous work). A counterbalanced user study with 24 practitioners found Porpoise facilitated comprehension and supported the building of mental models compared to default notebooks. §.§ Definition of generative AI The term “generative AI” is extremely broad and encompasses many types of systems <cit.>. The term can variously refer to core algorithms (e.g., the transformer architecture), specific instantiated models (e.g., GPT-4), or fully productized systems consisting of an ensemble of models plus additional components (e.g., ChatGPT). To provide clarity around this term, Sarkar <cit.> defines generative AI as “an end-user tool, applied to programming, whose technical implementation includes a generative model based on deep learning”. The term “end-user tool” refers to tools that end-users directly interact with, not the underlying algorithms or models. The tool may consist of an ensemble of models, heuristics, engineered prompts, and interfaces. The definition is restricted to generative models based on contemporary deep learning techniques. Finally, the definition is restricted to the programming domain. Examples that fit this original definition include code completion tools leveraging large language models such as GitHub Copilot, and naturalistic language programming in spreadsheets using such models. In this paper we adopt the “end-user tool” and “technical implementation [...] 
based on deep learning” aspects of the definition, but rather than programming, our domain of interest is sensemaking with data. Thus, we define generative AI as “an end-user tool, applied to sensemaking with data, whose technical implementation includes a generative model based on deep learning”. §.§ End-user programming End user programming refers to programming primarily for personal use rather than public use, with the goal of supporting one's work or hobbies rather than developing commercial software. While end user programmers prioritize external goals over software quality, they face many software engineering challenges such as requirements elicitation, design, testing, debugging, and code reuse. Ko et al. provide a survey of the field <cit.>. Much end-user programming research has focused on spreadsheets. Many techniques help with authoring spreadsheets, ranging from templating systems <cit.> to programming by example <cit.>. Testing methods like WYSIWYT (What You See Is What You Test) integrate white box testing into spreadsheet use <cit.>. Debugging tools analyse formula dependencies or suggest fixes <cit.>. Other work focuses on developing higher level abstractions to facilitate reuse within spreadsheets, such as lambdas <cit.>, sheet-defined functions <cit.>, and grid-based reuse <cit.>. Previous research has variously explored how spreadsheets are comprehended <cit.>, learned and adopted <cit.>, or structured <cit.>. Sensemaking theory has also been applied to end-user programming, for example, to explain and scaffold end-user debugging strategies <cit.>. While many studies have investigated the potential of AI assistance for data analysis (which will be detailed in Section <ref>), a relatively smaller number have focused on the impact of generative AI more broadly on the activities of programming and end-user programming. Notably, no prior studies have investigated how generative AI tools can impact the data-driven sensemaking workflows of end-user programmers. In a study exploring the emerging paradigm of artificial intelligence-assisted programming <cit.>, the authors observed shifts in the workflows of programmers, away from directly writing code and toward identifying suitable opportunities for AI aid, forming mental models of when AI support benefits workflows, and evaluating AI-generated output. The challenge for programmers transforms from writing code to activities such as judiciously “breaking down prompts at the `correct' level of detail,” seen as an emerging core programmer competency. Other challenges involve constantly gauging whether any given scenario warrants AI involvement and debugging model outputs post-generation. Working with AI demands qualitatively different skill sets from programmers than previous workflows. More broadly, the theory of “critical integration” <cit.>, i.e., the effortful and conscious evaluation, repair, and integration of AI output into a partially automated workflow, appears to be representative of how AI integration affects knowledge work. An open question in end-user programming research is: to what extent people will still need to write code directly, if generative AI can do this for them from natural language prompts <cit.>? As generative models advance, the author argues, they may facilitate a significant expansion in the scope and scale of end-user programming activities. However, this “generative shift” also raises questions about the continued relevance of traditional programming languages as an interface. 
In confronting these questions, the author proposes the focus of end-user programming research should transition from improving formal system usage to new questions around how to design for control and explanation, while mediating user intent through natural language. §.§ Intelligent assistance for data analysis AI assistance for data analysis has long been studied under the paradigm of “Intelligent Discovery Assistants” (IDAs). Serban et al. provide an overview of IDAs <cit.>, which predate generative AI technologies and instead rely on AI planning and expert system techniques. Previous research has also considered the end-user activity of interactive analytical modelling, i.e., building machine learning models as part of data analysis <cit.>, and developed design principles for designing tools for non-experts <cit.>. More recently, AI assistance has been studied in connection with exploratory data analysis and computational notebooks. <cit.> investigate how data analysts from diverse technical backgrounds verify analyses generated by artificial intelligence (AI) systems, finding that analysts shift between procedure-oriented and data-oriented workflows. <cit.> conducted an interview study exploring the design space of AI code assistance in notebooks. Among other observations, analysts varied in their preferences in terms of the context provided to the AI system (full context or user-specified), and how assistance should be integrated into the workflow (e.g., in inline cells, in a sidebar, via pop-ups etc.). Chen et al. present WHATSNEXT, an interactive notebook environment that aims to facilitate exploratory data analysis with guidance and a low-code approach <cit.>. The tool augments standard notebooks with insight-based recommendations for follow-up analysis questions or actions. Li et al. present EDAssistant, an interactive system that facilitates exploratory data analysis (EDA) in Jupyter notebooks through in-situ code search and recommendation <cit.>. Wang et al. investigate how professional data scientists interact with a data science automation tool called AutoDS to complete an analysis task <cit.>. They observed that data scientists expressed more confidence in their manually-created models than models from AutoDS, even though AutoDS models performed better. A particularly relevant study is <cit.>, who explored analysts' responses to AI assistance that supports planning of analyses. They first identified categories of suggestions that such a system could provide, including about data wrangling, conceptual model formulation, operationalisation of constructs, results interpretation, and others. In their Wizard-of-Oz setup, participants interacted with a JupyterLab notebook and received proactive analysis suggestions from a human wizard interacting with a LLM behind the scenes (the wizard was able to observe the notebook for context). Participants' generally valued planning assistance in the form of suggestions, but found them cognitively effortful to consider. Suggestions were helpful when accompanied by commented code, provided at an appropriate time in analysts' workflows, and when matching the analysts' statistical background, domain knowledge, and own analysis plan. However, in some cases, analysts became distracted by the suggestions or over-relied on them. Researchers have also explored AI assistance from a sensemaking perspective, albeit theoretically, and not yet with empirical evidence from users. Wenskovitch et al. 
conceptualize how human-machine teams could facilitate AI-driven data sensemaking <cit.>. The authors propose four roles that humans may assume in such teams: Explorer, Investigator, Teacher, and Judge. Similarly, Dorton and Hall propose a “collaborative” human-AI framework for sensemaking in intelligence analysis <cit.>, notwithstanding critiques of the term “human-AI collaboration” and the collaboration metaphor for human-AI interaction more generally <cit.>. In summary of the previous work: * Sensemaking theory gives us a framework for understanding the process of analysing datasets, particularly with open-ended or ill-defined questions. It decomposes the process into a set of interdependent loops of activity, and exposes opportunities for tool design. Sensemaking theory has been applied widely to visual analytics, intelligence analysis, and aspects of software development and end-user programming. However, the broader process of data analysis by non-expert end-users has not been studied with a sensemaking perspective. * End-user programming research addresses the needs and challenges faced by people, typically non-programmers, writing programs for their own use. A particularly important site of end-user programming activity pertinent to data analysis is the spreadsheet. Numerous studies have elaborated the challenges that spreadsheet users face in learning and comprehending spreadsheets, and writing and debugging formulas. Sensemaking theory has been applied to study some aspects of end-user programming, but the potential impact of generative AI on the broader end-user activity of data analysis has not yet been studied. * Intelligent assistance for data analysis has been explored in a number of ways, such as suggesting analysis paths and automatic experimentation. Many augmentations of computational notebooks, a common site for exploratory data analysis, have been proposed. Sensemaking theory has been considered in the context of AI assistance for data analysis, but prior explorations have been theoretical. Moreover, the efforts in this space have largely been directed towards expert data analysts. Crucially, what is missing from previous literature is an understanding of the potential opportunities and challenges with applying generative AI to data-driven sensemaking workflows conducted by non-expert end-user programmers. This is the gap we aimed to fill. This research objective is incredibly broad; we cannot claim to have answered it definitively. However, our study has significantly advanced our understanding of the issue over previous work, and thrown light on new phenomena arising from the confluence of generative AI and end-user data analysis. § METHOD §.§ Participatory prompting At this stage in generative AI's development, exploratory research questions can be difficult to interrogate in ways that provide sufficient balance of ecological validity with both system access and researcher control. While generative AI systems with low usage barriers are available off-the-shelf, they can be difficult to focus on the task at hand without blockages, hallucinations, or other non-task-related issues that derail engagement. Alternatives are limited prototypes, mock-ups, or design fictions that can be too far removed from the actual capabilities of the technology, and lead to participant responses being based on an imagined caricature of AI conditioned by media narratives. The participatory prompting method, first proposed by <cit.>, aims to bridge this gap. 
Participatory prompting is a user-centric research method for eliciting AI assistance opportunities. The method combines principles of contextual inquiry and participatory design <cit.>, in which researchers mediate participant interactions with a real generative AI system. In a participatory prompting study, researchers first identify a domain problem and the relevant form of generative AI system. They experiment with different prompting formulations to elicit targeted responses, and then recruit participants who bring self-selected scenarios within the domain, and potentially also resources to be used. Researchers then conduct sessions in which participants work through their scenarios in multi-step turns (illustrated in <ref>). A key advantage of participatory prompting over low-fidelity prototyping and Wizard-of-Oz methods is that it grounds studies in “actually existing AI” <cit.> capabilities rather than simulations or speculative design probes. A benefit in comparison to experiments with fully functional prototypes is that it can leverage off-the-shelf AI systems with minimal engineering costs, and flexibly explore different use cases during a study, for which a functional prototype might be too constrained. Participatory prompting studies also have an advantage over some forms of purely observational studies, because by virtue of being researcher-mediated, participatory prompting can account for discrepancies in participants' a priori prompting strategies, enabling participants to be appropriately challenged while not fixating on practical problems in generative AI usage that are not relevant to the research questions. Participatory prompting may involve various kinds of researcher mediation. The format used in this study is that of the researcher-as-relay. In this form, a participant poses an open-ended query to the researcher. The researcher reformulates the query using prompting strategies, and sends this prompt to the model. The participant reviews, reflects on, and builds upon the model's response to determine their next query, guided by the researcher. The `dialogue' with the system and the participants' reflections, together with optional quantitative measures of interactions such as response satisfaction, can then be analysed. Other formats could include researcher-as-guide, where the participant directly interacts with the AI system but discusses their thought processes with the researcher. The interaction between the participant and the researcher creates valuable opportunities to elicit participant reasoning. First, the researcher can probe participant reasoning turn by turn (or sets of turns, as appropriate), to capture sequential expectations and responses. Second, when the researcher is involved in the translation of participant queries into prompts, participants may see and comment on the researcher's prompting strategies as reference point in comparison or contrast to what the participant might have done without guidance. While in some research methods this could be seen as influence or bias, in the participatory design context, this collaborative engagement on solving the problem of prompting reveals the differences and similarities between users' and technologists' assumptions, methods, and success criteria, and hence where either social or technical interventions or features are needed. §.§ Preparation The first step of the participatory prompting method is to choose a suitable functional generative AI system as a representative of AI capabilities more broadly. 
This involved careful evaluation of the possible alternatives. Four candidates were considered: OpenAI Playground, OpenAI ChatGPT, Google Bard, and Microsoft Bing Chat. We tested the systems by eliciting multi-stage guidance for data analysis through example queries, examining the quality and potential reception of each system's responses in a manner similar to a cognitive walkthrough <cit.>. We noted how particular design decisions in each system shaped and imposed limitations upon discourse. For instance, ChatGPT, Bing Chat, and Bard, as consumer products, incorporate “guardrails” against content considered inappropriate by the system developers, e.g., violent or sexual content. At the time of our study, Bing Chat restricted conversational exchanges to fifteen turns. In contrast, the OpenAI Playground allows more unrestrained exploration, and options for model and parameter selection. For our purposes, such constraints did not definitively preclude any options. However, the proprietary and opaque nature of commercial systems does restrict controllability, and this may render them unsuitable for some investigations. At the time of our study, Bing Chat had the unique ability to source knowledge from the Web within replies. In a data-driven sensemaking activity, this can enhance suggestions at each problem-solving phase, such as by identifying relevant open datasets, and retrieving tutorials and recommendations for tool features (such as spreadsheet formulae). We found that information from the Web significantly improved the breadth and utility of the AI responses. This outweighed other limitations, and we therefore chose Bing Chat for our study. The next step is preparing prompting strategies for the study. The challenge of developing reliable and effective prompting strategies to optimize large language models' performance has been comprehensively documented <cit.>. Users, particularly non-experts directly engaging with AI systems, struggle to devise suitable prompts to elicit high-quality responses. To overcome this limitation, the participatory prompting protocol involves the mediation of an expert researcher with knowledge and practice of prompt design, to help users formulate suitable prompts. Besides this, the mediation also helps users rapidly iterate on queries, can help users focus on the relevant aspects of the interaction and avoid distraction from incidental elements of the user interface that are not relevant to the research questions, and eliminate variations in typing speed, as the researcher relays user queries to the system, rather than the user interacting with it directly. For our study, three researchers individually experimented with developing prompting strategies for Bing Chat across four weeks, using a range of real data-driven sensemaking tasks drawn from their own personal or professional experience, including quantitative analysis of a poem text, choosing a bar to visit with colleagues, developing a spreadsheet for evaluating World of Warcraft game strategies, selecting a car for purchase, and choosing a plot type for a statistical report. These interaction logs and screenshots, successful and unsuccessful prompting strategies, error recovery methods, and other observations were catalogued in a shared repository. Through this process, we identified that despite having the capability to do so, Bing Chat did not consistently use information from the Web, render tabular data visually as a table, or attribute its sources. 
We developed prompting strategies through which this behaviour could be reliably induced when needed. It often provided multiple options without further support to the user for choosing between them; we developed prompts (e.g., “use information from the Web”, “cite your sources”, and “show result in a table”) to induce more such support when needed. At the end of the experimentation period, the researchers convened to negotiate and codify a list of prompting strategies and how they would be applied in different situations that might arise during the study. Despite having access to a thus carefully designed “bank” of prompting strategies, we found that in practice a lot of ad-hoc and in-situ adjustment was needed (discussed in Section <ref>). §.§ Pilot We conducted a pilot study with a convenience sample of 2 regular spreadsheet users. The pilots revealed that it can be difficult for participants to choose a suitable seed problem that is complex enough to require generative AI assistance but simple enough to describe concisely. To address this, more guiding questions were added to help participants during the problem elicitation phase. We also recommended that participants prepare a problem in advance of the study if possible. Terms such as “data-driven decision-making” were unclear to participants and had to be clarified. We found that 5-6 turns could be completed in the allotted time (45–60 minutes), eliciting detailed qualitative insights despite the small number of turns. The turn-taking phase could be extended if needed. Reflecting on responses and choosing a next step was the most time-consuming and insightful aspect of each turn. This led to us changing the Bing Chat system mode from “precise” to “creative” (the exact nature of these modes is proprietary, but the salient aspect is that the latter is more verbose and the responses typically carry more information), to give more to reflect on and help guide next queries. If early responses were generic or unhelpful, participants lost motivation. To counter this, advancement questions were added to the protocol to suggest ways forward, like rephrasing queries. Participants also tended to use short queries typical of web searches, which were more likely to result in generic responses; we added guidance to explain that longer, conversational queries were more effective. We included steps for experimenters to more deeply understand participant expectations, including desired output types, to avoid multiple incremental prompts, which while useful to study, could slow down the progress of the task and thus impair the study of more complex interactions with the AI. Prompts also needed to refer to previous outputs to maintain consistency in the system responses; we updated the protocol to include this. While not initially part of the protocol, we noted that it was useful for participants to explore and verify outputs online, thus navigating temporarily away from the chat session. Finally, we revised the protocol so that participant speculations about helpful system capabilities could be immediately tested, and barriers to sensemaking were specifically elicited. §.§ Participatory prompting sessions We conducted a study with a fresh sample of spreadsheet users (N=15, 5 women, 0 non-binary, 10 men). Participants were recruited partly via email from a database of spreadsheet users who signalled interest to take part in research studies, and partly through a recruitment consultancy firm specialising in user research with African participants. 
Participation was voluntary, and all participants were free to withdraw from the study at any time without penalty and without having to cite a reason. All recruited participants were compensated with a USD $50 (or local currency equivalent) gift voucher for an online retailer. Participants read and signed a consent form detailing the study format, data collection, and risks. The study method and data collection protocols were reviewed and approved by our institution's ethics review board. Participants provided demographic information (Table <ref>) relating to their experience with spreadsheets, programming, and generative AI via a survey. We directly use the spreadsheet and programming experience <cit.>, and generative AI experience <cit.> survey items, and corresponding integer coding scheme, from previous work. Participants varied in spreadsheet usage (1 beginner, 7 experienced and basic usage, 7 experienced and advanced usage) and generative AI usage (3 never used, 1 casually use, 6 occasionally use, 5 regularly use), as well as programming experience (7 never programmed, 3 novices, 3 moderately experienced, 2 experts). Participants resided in various locations (7 in Africa, 3 in Europe, and 5 in North America). The study sessions were conducted remotely using a Microsoft Teams video call, with the researcher handling the interaction with Bing Chat which was screen-shared with remote control to the participant, so that they could view and explore the results. Experimenters first elicited an example problem from the participant before entering the turn-taking phase as previously outlined. At each turn the participant was asked to read the response from Bing Chat and reflect aloud on the usefulness of the response and if anything was surprising, inspiring, or confusing. The experimenter would then ask the participant if they wanted to follow up with Bing Chat on the response, ask another question relating to the original problem, or pivot to a new problem they were interested in, thus proceeding to the next turn on the basis of the participant's response. We tried to stay neutral, passive, and open-ended in terms of affecting the topic that the participant wanted to work on (and how to follow up in each turn). However, many participants found it hard to imagine what they would want to do with Bing Chat, and then how to follow up its responses. As such, we had to be active in eliciting their thought process and moving the study forward, by suggesting options to follow up and drawing their attention to certain aspects of the output. The degree to which the researcher needed to intervene to reformulate the participants' query into a prompt varied depending on the context. At one extreme, the intervention was extremely minimal: a participant would dictate a query for the researcher to type verbatim. At the other extreme, when the participant found it challenging to articulate their need concisely, the researcher proceeded by writing a candidate query and asked the participant to confirm or disconfirm it, e.g. “does this prompt capture what you wanted to ask the system to do?” In between these two extremes, we would express their query directly, but suggest the addition of context (what columns existed in their spreadsheet), or append a prompt from our list of strategies (e.g. “output a table”, or “cite your sources”). 
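To make the mediation step concrete, the following is a toy sketch of how a relayed query might be assembled from a participant's intent, optional spreadsheet context, and a codified strategy "bank" of the kind described above (e.g., "cite your sources", "show result in a table"). This is an illustration of the researcher-as-relay format, not code used in the study; the function and variable names, and the example task, are invented.

# Illustrative sketch of researcher-as-relay prompt mediation (not study code).
# The strategy strings echo those described in the paper; everything else is assumed.
from dataclasses import dataclass, field

STRATEGY_BANK = {
    "web": "Use information from the Web.",
    "sources": "Cite your sources.",
    "table": "Show the result in a table.",
}

@dataclass
class Turn:
    participant_intent: str            # what the participant asked for, in their own words
    context: str = ""                  # e.g., the columns in the participant's spreadsheet
    strategies: list = field(default_factory=list)  # STRATEGY_BANK keys chosen by the researcher

    def mediated_prompt(self) -> str:
        # Compose the prompt the researcher relays to the chat system.
        parts = [self.participant_intent]
        if self.context:
            parts.append(f"Context: {self.context}")
        parts.extend(STRATEGY_BANK[s] for s in self.strategies)
        return " ".join(parts)

# Example usage, loosely modelled on the kinds of tasks participants brought:
turn = Turn(
    participant_intent="Suggest ways to analyse my survey responses in Excel.",
    context="Columns are respondent ID, age group, and five Likert-scale answers.",
    strategies=["sources", "table"],
)
print(turn.mediated_prompt())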
The level of researcher intervention and the impact of query reformulation on our findings is a complex issue and we address the trade-offs in detail in Sections <ref> and <ref>. Participants worked with experimenters through several turns between Bing Chat prompt and response until the task was achieved, or the allotted time was reached. Finally, participants gave further feedback on their experience through semi-structured interviews. §.§ Analysis method We transcribed the audio recordings of participant think-alouds and interviews. One researcher initially organised participant quotes through affinity mapping <cit.> into four broad categories: remarks about interaction with AI, remarks about workflows, remarks about barriers encountered, and remarks about specific features. The organisation was negotiated with a second researcher. This categorisation was not the final analysis, but a data management step to facilitate the final analysis. The final analysis was a directed, negotiated coding between two researchers, with the aim of discovering emergent themes. We coded remarks relevant to the question of how AI assistance can support data sensemaking workflows according to the main categories of activities identified by sensemaking theory. We report our findings organised by these frameworks in Section <ref>. We coded remarks relevant to the question of how AI assistance can create barriers for data sensemaking workflows according to the iterative goal satisfaction framework, described in Section <ref>, which also reports the results accordingly. Our final analysis relied on the application of prior theoretical frameworks to supply the basis of code organisation, and thus differs from the more commonly applied inductive approach <cit.>. We were not developing a reusable coding scheme and quantitative measures of inter-rater reliability are inappropriate here. Instead, in accordance with qualitative coding best practices, the two researchers iteratively discussed their interpretation of the findings and negotiated each disagreement until it was resolved <cit.>. Our analysis focused on identifying and characterising themes, rather than on quantifying the prevalence of each in our sample. As such, it is not helpful to be concerned with the precise participant counts associated with each identified theme, although this may be inferred from the list of participant IDs mentioned under each theme. § RESULTS §.§ Overview of tasks Recall that we did not design study tasks a priori but rather developed them in a participatory manner with each participant at the start of the study using task elicitation questions. This resulted in a set of unique but highly ecologically valid tasks that were directly relevant to each participant. Participants explored a variety of data sensemaking tasks during the study, most related to their professional work, but also some personal workflows such as job searching or scheduling a pub crawl. The full list of participant tasks elicited is given in Table <ref>. Each participant's task involved key sensemaking activities when seeking assistance with different aspects of data analysis. As part of the information foraging loop, participants often began by describing their data and its format (e.g., row and column descriptions), and their overall analysis goal. 
Some participants even began by requesting that Bing Chat generate or find example data (this tendency is corroborated by previous studies of analysts, which have found that analysis often begins in the absence of data <cit.>). For example, P10 requested that Bing Chat provide a list of potential career paths they could follow based on their skills and experience as a History PhD candidate. When P10 found a career path that was interesting to them (archivist), they continued by requesting the requirements for that career and example open positions that they could apply to. These interactions represented data filtering and searching within the information foraging loop. As part of the sensemaking loop, most participants asked Bing Chat for help with formulating potential research questions (hypothesis generation) or strategies for analysing their data (P1-5, 8, 9, 11-15), code or formulas for a specific analysis (P1, 2, 5, 8), or step-by-step instructions for applying Excel features such as filtering and visualizations (P4, 7) (hypothesis testing). For example, P9 first asked Bing Chat about data analysis strategies using Excel for data they collected in a survey. P9 then iterated with Bing Chat to generate potential research hypotheses and analysis plans for testing them. §.§.§ Example turn-taking sessions An illustrative example of a complete turn-taking session (P1's) is described as follows. P1 wished to analyse a dataset about “cooperative behaviour in literature” they had collected. P1's first mediated query told Bing Chat they had a spreadsheet with data, where the “rows are the data for `tales' and columns contain the data for `cooperative behaviour' of a certain tale (e.g., `brother saved brother'). The query indicated that they “need a way to code each cell according to different categories, explain how to use a spreadsheet for this with an example.” Bing Chat's response confirmed its understanding, suggested options like using Excel VBA, thematic analysis, Google Sheets, and SPSS, and gave an example table and showed how thematic analysis might be applied to complete the task. P1's mediated follow-up response was to quote the fourth suggestion (use SPSS) and ask how this could be done in R with another example. Bing Chat's response explained how to use R in R Studio, and provided R code to complete the task and visualize the output. Each section of R code also contained a natural language description of the code. P1's final mediated query asked for the same code but with their specific categories in mind by asking “show me how to do it when the categories follow Hamilton's categories of biological cooperation”. P1 was satisfied with the response, and this ended the turn-taking session for this task. Another example, in brief: P6 was currently apartment hunting, so their first turn involved asking the Bing Chat to recommend ways to sort apartments based on the data they had previously collected. Subsequent turns involved recommending alternative apartments based on their criteria (by searching the Web). Finally, the participant requested Bing Chat to draft a letter to landlords to request extra information that Bing Chat recommended P6 collect, since it was missing from the spreadsheet they made. §.§ Data sensemaking workflow support Participants saw generative AI as a versatile tool that enabled various stages of data sensemaking. 
P11 saw generative AI as useful for “any part of a workflow”, from “starting a new project” to “preparing PowerPoint slides” for presenting the project. Several participants thought generative AI supported their workflow by making their “work easier” (P2, 4) by streamlining the search for “the desired result” (P4), adding new perspectives on how to analyse their data (P3), and “scaffolding” the solution to a task to “speed up the process” of working with data (P1). P8, a business owner, believed generative AI would “save time” and “greatly decrease cost” for many of the tasks they needed to perform. The data analyst's work process as characterised by Pirolli and Card <cit.> consists of two loops: an information foraging loop whose purpose is to identify a smaller set of relevant data out of a larger set, and a sensemaking loop whose purpose is to generate and test hypotheses. A high-level overview of this process is summarised in Figure <ref>. This overview is helpful for further delineating aspects of our participants' experiences, presented in the following sections. §.§.§ Generative AI in the information foraging loop Several participants compared generative AI to traditional search. P9 thought that generative AI workflows improved upon the information overload caused by traditional search workflows since “search engines will give you multiple results, and it's very messy, but this [Bing Chat] directly gives one thing to do.” P14 also enjoyed that generative AI output was specific to their question, while search results would require “converting the result” to your specific task. However, participants also cited concerns about the ability to apply generative AI tools to information foraging. P10 said their PhD research was “super-duper niche” and frequently required them to “travel to archives all over the world” to find data and thought generative AI would be unable to assist them for these types of tasks, because unlike textual data from the web, heterogenous archival data may not have uniform and easily accessible indices, and might be highly unstructured, mixed-media, only partially digitised, and therefore difficult or impossible for generative AI to operate over. §.§.§ Generative AI in the sensemaking loop Participants noted the opportunity to be assisted both in generating hypotheses and in identifying strategies to test them. Hypothesis generation P4 thought generative AI was useful to “have another perspective, like conversing with another person to see how their perspective is different from yours” which “could be inspiring” for their own workflow by “giving a whole new way to do a task.” Others believed generative AI was useful for brainstorming (P10) or getting unstuck by using the AI to give alternatives or options to explore (P7). P6 appreciated how Bing Chat's response considered aspects of the problem that they “didn't really get a chance to think about... so, it's good that Bing Chat was able to cover that as well.” P13 said they were “directly inspired” by Bing Chat, as it allowed them to “move further in the research analysis” by introducing methods to do their task “in a different way” than they had planned. However, some participants were sceptical of using generative AI for their creative process (P1) or forming research questions (P9), and instead saw its primary application as being for specific data analysis tasks. 
This could be due to concerns about personal agency in the analysis process; for instance, P9 thought that even when generative AI generated useful text, it would still “miss your own style of writing”. Hypothesis testing Participants noted how Bing Chat helped them avoid “spending ages try[ing] to figure out code” (P1) and found it “insightful” when it offered analysis techniques they “had never thought of” (P3). Participants also liked when Bing Chat provided a “step-by-step process on how to get a chart in Excel” that gave “headway on how to get the desired results” (P4), and “some kind of direction” (P5). P5 thought generative AI enabled this understanding by both “streamlining your thought process [...] with step-by-step instructions” and giving “inspiration on how you can analyse data.” P9 similarly valued the “step-by-step [instructions] on what to do” and “other possible strategies”. P12 likened generative AI to “rubber duck” debugging, an informal technique from software engineering where, in order to fix a bug, the programmer explains their problem aloud to an inanimate object (archetypically, a rubber duck, hence the name) – the idea is that verbalising the problem can often trigger the understanding and insight needed to fix the bug. P12 stated, “it's like a rubber duck that actually talks back and is useful.” This analogy highlights how, even if the AI system does not introduce new information, it may facilitate problem-solving and sensemaking by providing a channel for the reification and refinement of the user's thought process. An additional benefit to the sensemaking loop was the ability to learn new skills as part of the analysis process, which enriches the space of hypotheses it is possible to generate and test. These can be fairly straightforward technical skills, such as learning particular features of spreadsheet software. P7 had “a good learning experience” in using an unfamiliar formula. P4 similarly “initially thought you could only create bar charts with a pivot table”, but learnt from a Bing Chat suggestion that they “could just select the particular cell to create and insert the bar chart.” There is also the potential for learning broader skills. P5 saw Bing Chat's recommendations of unfamiliar functions and statistical packages as a potential “learning direction on how to go about carrying out descriptive statistics and visualizations to assist with that task.” P10 saw generative AI as a potential learning surface that assists in critical thinking, because when P10 asked for a biography of Thomas Jefferson, the response did not initially raise the problematic issue of Jefferson's slave ownership, which P10 expected. P10 reflected that generative AI could be used to explore “what kind of questions we can ask and what kind of information is being omitted”. This finding aligns with the constructivist theory of learning in interactive machine learning systems, which holds that users construct mental models of their task through iterative exposure to AI model responses <cit.>. §.§ Barriers to sensemaking with generative AI Rather than thematising barriers according to the analyst process, we found that it is more helpful to consider them in terms of a workflow we term iterative goal satisfaction. Broadly, this is the process by which a user satisfies a series of goals with AI assistance. The iterative goal satisfaction workflow is presented in Figure <ref>. The user moves through different phases: goal formulation, query formulation, and response inspection.
There is an outer “goal iteration” loop as the user attempts to achieve a high-level goal, and an inner “prompt-response-audit” loop as the user attempts to achieve specific steps towards that goal. The elements of this workflow are as follows:
* Goal formulation: the user reflects on their goals, needs, intents, and research questions, and identifies a need for assistance where AI could be applied.
* Query formulation: the participant composes the information, context, and data that the AI might need to address a goal (in our study, the query is relayed to the mediating researcher who then further shapes it into a prompt). Query formulation can proceed directly from goal formulation, or it may be in the context of iterating on a previously identified goal, as a result of having inspected a previous response (described next).
* Response inspection: the participant checks for readability and relevance to the goal. If the output is readable and relevant, the participant reads with the aim of deeper comprehension, checking quality and correctness. If the response failed any checks, participants would either reformulate their query to attempt to elicit a better response, or change their overall goal. The sequence of query formulation and re-formulation in response to deficiencies identified by inspecting the output maps directly to the prompt-response-audit cycle described by Gordon et al. <cit.>.
* Response acceptance: when the AI response satisfies their goal, participants might exit the goal iteration workflow entirely (e.g., to apply the results by copying a formula into a spreadsheet, or adding code to their IDE), or develop a new goal.
We thus observed two situations in which participants could develop entirely new goals: either as a result of having their previous goal satisfied, or a “pivot” as a result of inspecting a response and reflecting upon it. Consequently, we broaden Gordon et al.'s prompt-response-audit loop by showing that there are two distinct reasons for exiting it, and that it is itself part of a larger goal iteration loop. With this picture of the iterative goal satisfaction workflow, we are in a better position to understand the barriers to effective sensemaking with generative AI encountered by our participants. Broadly, these fell into three categories: barriers to query formulation, barriers to the utilisation of responses, and barriers to verification and trust. We detail each of these in turn. §.§.§ Barriers to query formulation Participants faced difficulties in understanding, gathering, and expressing their request. These are difficulties they experienced in their own articulation of their needs. Detailed expression of intent Part of the challenge was in fully articulating their need. Participants had trouble “wording it in the right way that the AI understands [...] writing [what is in your head] down is the hard part.” (P1) and giving “a very explicit explanation in the prompt that is detailed” (P13), though P1 noted that Bing Chat could generate helpful responses for “convoluted” questions (i.e., prompts worded in a noticeably vague or unnatural manner). P5 was similarly frustrated by their inability to “really define the problem because there are a lot of components, a lot of things to factor in before clearly defining the problem”; it was challenging to “be as detailed as possible when you are putting information [into a prompt, but], you can't just be lazy about it and get the most useful answer [...]
you have to feed [Bing Chat] with as much detail as possible.” Such difficulties led to P9 asking “where should I learn this kind of stuff when I'm chatting with Bing Chat”. Barriers to query formulation resulted in, but also stemmed from, inadequate output from the AI, with P12 stating “it is frustrating to figure out what is it that is being miscommunicated.” P8 pointed out that “generative AI can't read your mind, so you just have to formulate your question `correctly'”, and they would “be annoyed at myself for not writing the prompt correctly” rather than blame the system for an inadequate output. Other participants similarly attributed this issue to their having “communicated `wrongly' at first” (P4). P2 observed that the prompts that the experimenter wrote were “very different” from their own in that they were more specific and “direct”. P2 described their current prompting methods as “too general” in comparison, and reported having difficulty understanding “where to start from” when interacting with generative AI. Participants developed strategies to manage the challenge of detailed expression. Participants used follow-up prompts to “ask it specifically to focus” on a specific part of their data (P3) or on a “specific list of categories” (P4). P1 thought the solution was “just asking the right questions”, which meant being “clear and real specific in the details”, though this was challenging and left them “a bit confused.” P13 received a response localised to a different country, so they realised they should “be even more specific” about their location. P5 decomposed their queries to “streamline them to focus on things I actually need and not just suggest the entire data analysis strategy.” P12 thought they would improve their prompts by practising through “having to use it over and over again.” Others developed more ad-hoc techniques, such as avoiding acronyms (e.g., writing `Microsoft' instead of `MSFT') (P6), to reduce the likelihood of miscommunication. Determining and expressing context. Participants were also challenged by the need to determine what contextual information was relevant to fulfilling or interpreting their request, and then articulating it. For example, after being recommended `thematic analysis' as a way to analyse their data (which was not applicable to the kind of data they had), P1 noted that giving context (in this case, information that would enable the system to rule out thematic analysis as a plausible method) to generative AI was important for making sure AI suggestions “actually work” for the task and data. Participants drew a comparison to their experience of human-human collaboration. P12 found giving this context to generative AI was more difficult than giving it to a human co-worker, as they usually framed questions with what they had attempted previously and what went wrong before asking a co-worker “what should I change?”. P12 felt that their co-workers were “more familiar with examples” that they would provide as context to their problem, and worried that this context seemed more difficult to convey to the system. Researcher mediation of prompts occasionally impacted participant awareness of these barriers. For example, researchers asked participants for needs around the data format of the response or gathered extra context about the problem being solved, which revealed to participants the specific prompting strategies we applied.
Researcher-mediated queries served as a reference point for participants to compare their own experiences in forming queries. While some aspects of effective prompting could be handled by the mediating influence of the researcher and thus “smoothed over” from the participant's perspective, as the examples above show, even with such guidance, participants are challenged by the activity of expressing their intent. §.§.§ Barriers to utilisation of responses Participants faced barriers to being able to effectively use the responses, such as an overwhelming volume of information; poorly or incorrectly formatted results; output that, while not strictly incorrect, was nonetheless incomplete or inadequate in some other qualitative manner; and responses which were not easily intelligible because they referred to unfamiliar concepts. Volume of information in the response Bing Chat's responses were often lengthy, likely due to our choice of using Bing Chat's “creative mode”, which is designed to be more verbose. This required the participant to read several paragraphs of text. P2 experienced information overload with the text results from Bing Chat, which they originally expected to be returned as a table or spreadsheet. P3 complained about the level of technical detail in one of the responses, finding it “not easily understandable for someone who is being introduced or does not have much experience in statistics”. This points to the need for tailoring responses according to user expertise. When given different options to complete a task, P13 found this useful, but also “excessive information”. Excessive length also applied to generated code. P8 received “additional unnecessary code” based on what they asked for, but nonetheless believed the result to be correct. In follow-up, P8 asked Bing Chat multiple times to “make it [the code] shorter”, until Bing Chat successfully reduced a 15-line function to 3 lines. Participants' preferences regarding a suitable default length and contents for generative AI output varied (P3, 8, 10-12). For example, P3 preferred a specific order of generative AI output: first the answer, then an explanation of that answer, and finally an example of how to implement it in Excel. P8 shared a preference for seeing examples and expected Bing Chat responses to go beyond “just some sort of summary” by producing examples that apply Bing Chat's recommendations (e.g., showing how the A/B testing model Bing Chat generated might apply to a video advertising campaign for a company). P5 considered extra or irrelevant results from generative AI harmful when under “tight time constraints”, as they “would not want to spend time on things that are unneeded to complete the task.” P10 wondered about balancing “how much versus how little information” that generative AI puts into a response, and how they could control the amount of information produced to suit their preferences. P12 expressed appreciation for responses that were “a good balance” of information “between bullet points and short paragraphs”, and “not just a two sentence answer that doesn't give any information.” Goal-satisfaction of the response. Participants could face barriers in progressing with their task if the results only repeated what they already knew and did not add any further information, or if the results were incomplete, or incorrect, or too broad. Some participants were offered solutions they already knew about, but which could be useful for novices “unaware of these methods” (P7) or “starting from scratch” (P11).
P7 requested “three more suggestions” to elicit more unfamiliar solutions. Occasionally, the model would fail to interpret very basic and clear instructions correctly. For example, P12 was surprised that the system incorrectly applied a literary analysis framework to one story (“The Glowing Coal”) when specifically asked to apply the framework to a different story (“ATU 333, Little Red Riding Hood”). P12 wondered if the data needed to complete the task was not available to Bing Chat. Participants also received incomplete responses from Bing Chat (P11, 13-15). P11 said they needed Bing Chat to provide justification for its choices. P13 and P15 both had replies that were useful but incomplete, since they failed to address every part of their question. For example, one result was “not able to achieve the task”, since it missed out the step to “convert a column” (P13). Other participants noted that some responses were not applicable to their specific preference, but could nonetheless be helpful in other situations (P1, 3). P14 considered a response to be “just an introduction” to the topic and not applicable to their task. P4 wanted “the data to be shown in a different form.” Similarly, P9 asked for a data visualization which Bing Chat provided, but P9 preferred a bar chart instead, as it was “much more useful than a pie chart.” Moreover, model “misinterpretations” could also function as a sort of tolerance for imprecise or incorrect querying: P11 was surprised that Bing Chat ignored part of a prompt and gave what was more likely to be correct when P11 tried to modify a table of object detection models produced by Bing Chat by asking it to “add a column of the platforms (e.g., iOS, Android, Raspberry Pi) supported by each model”. Bing Chat instead added a column with values like “CPU, GPU, DSP, EdgeTPU”, which P11 realized was actually what they wanted to see in the table. P11 thought that had Bing Chat provided what was originally asked for it would have been incorrect, and instead preferred that Bing Chat intervene and recommend “corrected information” like it had. Formatting of the response. Another issue that participants faced was getting responses in a useful format. For example, P11 attempted to compare popular object detection models and their characteristics so they might choose the best one, but the initial reply was a bulleted list of several models and their characteristics, which made it difficult to compare between models. P11 requested Bing Chat to produce a table that specifically compared “accuracy, speed, and size” and linked to the code repository of each model. After inspecting the resultant table, P11 iterated to add columns for additional model properties. While P11 could have potentially created a detailed prompt to get a satisfactory answer with a completed table in a single step, P11 preferred to iterate and make incremental progress. Similarly, when textual results were reformatted into a table, P10 thought the results were “perfect” since the original outputs were “very text heavy”, but did not originally ask for a table. Thus, P10 placed the blame for Bing Chat's vague answers on the vagueness of the question they asked. Intelligibility of the response. Finally, participants faced difficulty comprehending responses which referred to unfamiliar concepts (in a scenario where the participant was not expecting to encounter an unfamiliar concept).
For example, when Bing Chat replied to a question with R functions that P5 did not know about, P5 requested an explanation of the functions and their relevance to the problem being analysed. In another example, Bing Chat recommended “Pivot Tables” to P9, which they were unfamiliar with, but P9 said they would “just ask [Bing Chat] how to use pivot tables and for examples” to learn more about unfamiliar concepts that generative AI recommends. §.§.§ Barriers to verification and trust Another category of barriers was associated with the work required to assess the reliability and validity of generative AI's output, both in specific instances of AI output and in terms of developing a mental model for the system's strengths and weaknesses in different tasks, and an overall conception of trust in the system. Verification strategies Participants developed strategies for detecting and addressing incorrect output. To understand non-working code, P1 thought they would leverage traditional resources “like textbooks” that seemed “slightly more professional” than Bing Chat, or ask co-workers for help. A common validation strategy was to follow the inline references (P10, 12, 14). Bing Chat provides references to the URLs from which it derives its responses using footnote-style superscripts (Figure <ref>). During the study, P14 followed a reference link, then described a previous experience with ChatGPT where it could not present similar reference links, which P14 wanted to save in EndNote. P9 also compared Bing Chat to ChatGPT, finding the citation feature “much better and more reliable”. Citations were seen as a fairness mechanism that “gives credit where credit is due” (P10). However, P12 found that checking citations “becomes a process of verifying all the information it's giving you, and it might have just been quicker to find the sources yourself.” P6 said that they would “have to verify” each source and “use those sources to further search”. When performing data analysis, P2 said they needed to “validate that the data is from the right source”, including the timestamps and recency of the data. Source quality mattered. P7 preferred sites they “already trust”, rather than unfamiliar ones. P9 and P12 manually inspected the sources cited by Bing Chat for quality and relevance, which increased their trust in the output. P9 checked if a reference was “a scholarly article or just a website”, preferring “trustable research” publications, and inspected the publication date to ensure recency. P10 liked the citations, but if they were missing, they said they would just use traditional web search to verify the result themselves. Some participants considered the seriousness of the task when deciding how much to trust and verify the response. For one task, P10 said they “trust the results, because this is such a low stakes query.” P12 said they would trust output if it “sounded right” to them, unless they “really needed it to be right.” Verification might also involve testing and applying AI output in a different tool (P1-3, 9-11). P2 would take generated Excel formulas and “test it directly” on their data, but P1 noted that this might be challenging without first “cleaning up the data.” P3 also tested a generated formula “as an example”, and then edited it to fit their needs. P4 said that when they were presented with step-by-step instructions, they would “try it, and if it's not working out, do further research” by searching online, asking colleagues for help, or watching video tutorials on YouTube.
P9 had a similar approach to generated SPSS code: they first ran the code on test data to “see if it makes sense”, before applying it to their dataset. P13 stated that “the only way to know the code is correct is to put it into an IDE.” P3 worried about errors, which decreased their trust in generative AI, saying that they “don't rely on it [generative AI]” and always rigorously verified any generated formulas. Hallucinations Hallucination, defined as “generated content that is nonsensical or unfaithful to the provided source content” <cit.>, limited generative AI's usefulness for data analysis for our participants, because it was difficult to detect (especially when the hallucination is about a domain in which the user is not an expert) and time-consuming to check for, requiring careful and detailed attention to every part of the output. For example, P9 stated that they would not use it for literature reviews because of this risk. P6 felt a burden of “always having to double-check and read every line” of the response. P11 said that while their “personal strategy is to verify everything”, it was time-consuming and “not always possible or feasible” to do so. Moreover, P5 described difficulty in verifying generative AI output for domains they did not “have a strong understanding in”. When Bing Chat started hallucinating data for P6's task, P6 said they started to “understand when you should use [generative AI] and when you shouldn't”. P6 subsequently formed a belief that Bing Chat was not able to index copyrighted media like books, and stated that the system ought to “say `I cannot access this book or its chapters' rather than continuing to make things up”. P11 had an experience of the AI “generating a function in the code that looked very authentic, but didn't exist.” To mitigate the impact of such hallucinations, P11 aimed to “always verify all information”, but noted that users who “blindly trust these AI tools can easily be misguided” by hallucinations. P9 suggested that “more specific prompts to focus on a specific topic” might address hallucinations. §.§ Explicit feature speculations Participants on some occasions explicitly speculated about features that would help them with sensemaking. In the traditions of HCI research deriving from sociology and cognitive psychology, study participants are not conventionally involved in the direct design of products, and as such, explicit feature requests and speculations are treated as potential evidence of a deeper underlying need which may or may not be best satisfied by implementing the feature requested. On the other hand, since we are invoking the participatory design tradition <cit.>, we are explicitly interested in participants' design speculations and consider them as first-class design contributions, at face value. We report these feature speculations in this section. Application integration Some participants saw a need for integration with the data applications they already used (P1, 2, 7, 11-13). For P13 to “be comfortable in the analysis flow of using generative AI, it would be integrated in whatever system being used on the side, and not taking up the whole screen.” P7 thought that if they could “do it all in just Excel”, with the generative AI having access to charts and data within the spreadsheet, it would reduce the effort of “going between different tabs”. Further, P11 said that their analyses frequently ended up in slideshow presentations, so they wanted the generative AI to leverage features of one app (Excel) and place them into the final app (PowerPoint).
Similarly, P6 wanted to go from chat to spreadsheet by having Bing Chat create a spreadsheet for them, avoiding a “very manual” process of creating spreadsheets by iterating on “what categories should be included and filling out information” (P6). P12 believed that app-specific generative AI would “save a lot of time spent procrastinating”, such as “going down Wikipedia rabbit holes” (e.g., exploring various related topics that are not critical to solving the task at hand). However, P1 enjoyed the broad possibilities of a general generative AI chat and worried that when leveraging generative AI within an application, the AI might limit its suggestions to operate within that application, even if a better solution might exist in another application. Instead, P1 thought that both in-app generative AI and a general generative AI would be useful, where, when the in-app generative AI failed to accomplish the task, the general generative AI could act as “the big boss who is like `alright, we'll sort this out'”. Context We previously noted that providing context was a challenge (part of the larger set of barriers to query formulation, Section <ref>). Several participants offered suggestions for sharing context more easily. P1 desired a way to easily include topics and keywords of interest. P4 wanted to give negative examples, showing what “does not exactly fit into what is wanted”, to tune the responses to be “more specific” to the goal. P1 wished to upload their entire dataset and “have it [the AI] go”. P5 wanted to upload “particular columns” of their dataset as context for questions like “what can I do with this particular column” rather than getting “generalized responses”. P11 described the need for chat histories they could revisit and reuse “after months” away, to “pick up where we stopped last time and continue from there without redoing everything I did before”. P7 wanted to go further and share chat histories with others, which might help collaborators understand the provenance of some analysis activity. Formatting and modality Participants saw a need for better intelligence and flexibility in output formatting. For example, P10 desired the data they received from Bing Chat to be in a table, which Bing Chat was able to provide after re-prompting. P10 then thought the data was “organized nicely and not overwhelming” and could be exported easily to other applications. P11 ran into a similar issue while comparing two paragraphs, noting that placing the data in a table and comparing the columns would be “more useful” than reading each in sequential text. Participants also described how generative AI might go beyond text and into other modalities (P4-11, 13). Several participants saw videos, imagery, interactive maps, and other visualizations as improvements over purely textual output (P4, 10, 11, 13) depending on the problem being solved (P11). Video tutorials could provide “further clarity to see the step-by-step process” (P4). P13 described example visualizations provided by Bing Chat as inspirational examples for how they could themselves visualize their data. However, P13 worried that visualizations could also be distracting and take user attention away from the text. On the input side, P8 also wanted to provide images and videos as part of a prompt to provide context to Bing Chat, instead of just providing text.
P11 suggested that voice interaction would feel “more natural, like talking to a human assistant.” Anthropomorphisation and social cues Some participants reflected positively on Bing Chat's ability to use emojis and seem “friendly” (P10, 13). P10 noted it was an “almost human reaction” and said it was “nice to feel like you're talking to some sort of person or feel kind of happy [...] like texting a friend”. However, P9 thought this style of reply “felt strange” and was “confusing” for them in the context of doing work with Bing Chat, since they felt like they had to make conversation with the chatbot rather than just getting answers from it. § DISCUSSION §.§ Connections with related work How generative AI conversations compare to search workflows. Participants in our study compared generative AI to traditional search workflows, finding that the linear, summarised, and aggregated nature of Bing Chat responses required less effort in comparison to manually viewing multiple search results and developing a mental summary oneself (Section <ref>). The consumer-facing positioning of the Bing Chat interface is as a complement to the more traditional Bing search engine, so to some extent this comparison is a natural one to draw, but other studies have also noted the comparison to search engines even in interfaces without such associations. For instance, studies of language model assistance in programming through code completion tools such as GitHub Copilot also find that participants cite a reduced effort in comparison to manual web search as a benefit of these tools <cit.>, though there are also drawbacks: due to the limited scope of sources and generation formats, language model interfaces generally offer a less media-rich experience, with fewer opportunities for learning and tangential exploration, and with fewer cues about the provenance of the results. A related observation from our participants is that search results for data analysis workflows require further work in order to adapt to the task at hand, whereas generative AI can often perform part or all of the adaptation needed. This benefit has also been observed in previous studies <cit.>, and it is an important benefit given that many end-user data sensemaking workflows involve such search and adaptive reuse of resources on the Web (i.e., “transmogrification” <cit.>). Generative AI and creativity in data sensemaking. Participants generally valued the creative potential of Bing Chat for ideation and the generation of alternative perspectives, though some participants stated a preference for first ideating and forming research questions privately (i.e., without generative AI assistance) and only using generative AI for specific data analysis tasks (Section <ref>). At least one participant was concerned about the preservation of personal voice and style when using AI-generated text. This mix of optimism and caution has been reflected in multiple other fields, such as programming, creative writing, and visual art <cit.>, where similarly, some aspects of creativity can be usefully attributed to the AI system, and AI can be viewed as a potential source and enhancer of creativity, but there are still important roles for humans to play, as curators, as editors, as critics, and as integrators. Generative AI and common ground A key set of challenges faced by our participants revolved around understanding and providing the context needed by Bing Chat to address their request (Section <ref>).
Participants explicitly drew a comparison to interacting with human colleagues, where interactions were simplified due to the vastly greater degree of shared implicit context, some deriving from the shared domain of work, others from the broader shared experience of culture and language. A concept from linguistics that captures this is the notion of common ground <cit.>, the set of contextual presuppositions held by interlocutors that allows any speech acts to be performed and interpreted at all, without devolving into an infinite regression of “but what does that mean?”. Human users and generative AI do share a certain amount of common ground (deriving from the fact that generative AI behaviour is a stochastic replay of real human behaviour <cit.>), but the quality of this common ground in our study was perceived as both alien and inferior to that shared between human collaborators. This aligns with the conclusions of <cit.>, who suggest that AI assistance should be grounded in an understanding of users' current analysis plan, statistical and domain background, and overall goals; likewise, users should understand the goals of the AI assistance (e.g., to help with analysis execution, high-level planning etc.). Researchers have thus proposed to investigate how design might facilitate the notation and sharing of such contextual information without burdening the user <cit.>, but to our knowledge there are no compelling solutions, and this is one of the trickier open challenges for interaction design of generative AI. Folk theories and external influences When confronted with a response that did not fit their needs or expectations, participants usually proceeded by developing a hypothesis about why the model had responded in the way it had, and adapting their next prompt accordingly, including specific strategies such as using full names of entities rather than abbreviations (e.g., “Microsoft” and not “MSFT”), despite not necessarily having evidence that such hypotheses were correct, or that such strategies would be effective (Section <ref>). This echoes findings from other studies such as Liu, Sarkar et al. <cit.>, who found that participants drew from a wide range of linguistic influences, from web search to programming languages, to inform their hypotheses about how to prompt the AI system effectively. Due to the stochastic nature of generative AI, these hypotheses and consequent prompt refinements can very well produce an improved result, affirming the participants' mental model. Over time, this may result in the development of folk theories <cit.> about prompting and behaviour of generative AI that may not necessarily be reliable. Anthropomorphism of Generative AI in data sensemaking Bing Chat is mildly anthropomorphised and frequently introduces emoji into its responses. Some participants noted this as a benefit as it improved the collegiality of the interaction, while others felt that it introduced an unwarranted expectation of politeness, verbosity, and conversationality on the part of the user (Section <ref>). This is also reflected in other studies of anthropomorphism in AI, which have found that the introduction of human-like features can help users be more forgiving of a system that makes errors <cit.> and can improve its perceived likeability, but can be counterproductive for a system with high performance and focus on task completion <cit.>.
It is unclear from our findings whether there is a single correct approach for data sensemaking, which includes a blend of activities, some of which the system may be able to perform with high accuracy, and some not. More likely, the suitability of anthropomorphising features such as emoji appears to be dependent on the context and individual preferences. Iteration and incremental progress We noted that participants iterated with Bing Chat to incrementally build up an optimal response (Section <ref>), by issuing a series of prompts to slightly refine the previous response, as opposed to building up a single detailed prompt to satisfy all the requirements. This tendency to favour incremental progress has been noted in multiple previous studies of end-user interaction with AI in spreadsheets (e.g., building up a complex result through a series of intermediate columns <cit.>, or incrementally training a machine learning model through an “edit, learn, guess” loop <cit.>). This preference for incremental interaction is similar to the motivation for direct manipulation interfaces and their property of being “rapid, incremental, and reversible” <cit.>, and might be the result of the same cognitive factors that underlie the success of the direct manipulation paradigm. However, more research is needed to understand whether this is the case, and if so why, since it would appear to contradict the well-documented tendency of end-user programmers to favour the shortest path to their goal. The burden of verification Participants found that manually verifying sources was burdensome, and in some cases the work of verifying a response might be greater than the work required to conduct a web search manually (Section <ref>). The increased burden for users to check content has been observed in several studies (e.g., <cit.>). One approach to resolving this is “co-audit”, where AI tools themselves can help to check AI-generated content <cit.>. What co-audit tools might look like in the context of the diverse range of data sensemaking workflows is an open research question. Expertise and over-reliance Recall that participants varied in spreadsheet usage (1 beginner, 7 experienced and basic usage, 7 experienced and advanced usage) and generative AI usage (3 never used, 1 casually use, 6 occasionally use, 5 regularly use), as well as programming experience (7 never programmed, 3 novices, 3 moderately experienced, 2 experts). In making data analysis more accessible to a wider range of non-experts through generative AI, over-reliance may become an unintended consequence (a review of the literature on AI over-reliance is given by <cit.>). We observed multiple phenomena during our study that could contribute to over-reliance, such as AI-generated output referring to concepts unfamiliar to end-users, and verification fatigue. While mitigating over-reliance was not within the scope of our study, multiple approaches have been explored such as explanations <cit.>, cognitive forcing functions <cit.>, and encouraging critical thinking <cit.>, to create appropriate reliance <cit.>, which is important to consider in future work. Metacognitive demands of generative AI Several of the issues that participants encountered align with what has been termed the `metacognitive demands' of generative AI <cit.>. These are usability issues that reflect a need for users to have a degree of self-awareness, task decomposition, and well-adjusted confidence in their own abilities when working with generative AI systems. 
For example, participants struggled to formulate prompts because it was difficult to verbalise what was in their mind and break down their overall goal into sub-goals for the AI system to address—i.e., difficulties with self-awareness and task decomposition, as described and observed in other studies <cit.>. Moreover, some participants found it difficult to disentangle their prompting ability from the AI system performance when certain interactions went wrong, suggesting a challenge with calibrating one's self-confidence in working with the system, as also observed in prior studies <cit.>. Participants' comments touched upon the role of self-confidence in verifying outputs, particularly for domains in which they have little expertise, as also observed in previous work <cit.>. In some cases, this was magnified by the volume of information in generated responses. Conversely, some participants implicitly noted how the AI system provided them with metacognitive support, as outlined in <cit.>. For example, participants commented how the system helped them think in a “step by step” manner, reflecting support with task decomposition. They also noted how the alternatives suggested by the system acted as inspiration when they were stuck, suggesting benefits to their metacognitive flexibility, analogously to that observed in <cit.>, which used human guides to support users co-creating with generative AI. These observations suggest that there are opportunities to design systems which explicitly provide metacognitive support to users as they approach a task, formulate prompts, and evaluate system outputs. §.§ Implications for design Interaction design can support generative-AI assisted data sensemaking workflows (Section <ref>) by addressing barriers discovered in our study (Section <ref>). For query formulation: Participants had challenges in conveying their goals and context to generative AI. These led to irrelevant, unhelpful, or partially helpful responses that required iteration to improve. This might be addressed by a design that helps a user build more detailed prompts, e.g., proactive questions that the system provides for the user to respond to (i.e., a form of metacognitive support <cit.>). Ambiguous or missing context could be detected and flagged before producing a response to avoid low quality responses. Output formats relevant to the user's request could be recommended as prompt addenda. For example, if the user asks how to perform a specific data analysis workflow, “step-by-step instructions” could be suggested. This could help users improve and calibrate their confidence in their prompting ability. Designers could also explore restricted vocabularies and grammars (as opposed to unrestricted natural language queries) <cit.>, or techniques such as grounded abstraction matching <cit.> to help users develop clearer mental models of effective querying styles. For response inspection: Participants also spoke about a need to verify generative AI responses for correctness, quality, and hallucinations. To do this, they inspected references provided by Bing Chat or testing code and formula suggestions. However, user expertise plays a major role in detecting incorrect output (a similar role for user expertise was observed in <cit.>). Further, verification was effortful and time-consuming. Therefore, users need verification assistance, e.g., through co-auditing features <cit.>. 
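As a rough illustration of what such verification assistance might look like in a data analysis setting, the sketch below (in Python) wraps an AI-generated analysis function in a small check that runs it against hand-made fixtures with known answers before the user applies it to their real data. This is our own illustrative sketch, not a feature of Bing Chat or of any system discussed in this paper, and the names used here (generated_mean_by_category, co_audit) are hypothetical.

# Minimal co-audit sketch (illustrative only; hypothetical names throughout).
# `generated_mean_by_category` stands in for a function a chat assistant might
# generate; `co_audit` runs it on tiny fixtures with known answers and reports
# any mismatches or crashes before the user trusts it on their real data.

def generated_mean_by_category(rows, category_key, value_key):
    # Stand-in for AI-generated code: average `value_key` per `category_key`.
    totals, counts = {}, {}
    for row in rows:
        key = row[category_key]
        totals[key] = totals.get(key, 0.0) + row[value_key]
        counts[key] = counts.get(key, 0) + 1
    return {key: totals[key] / counts[key] for key in totals}

def co_audit(func, test_cases):
    # Run `func` on each fixture and collect discrepancies or exceptions.
    failures = []
    for args, expected in test_cases:
        try:
            actual = func(*args)
        except Exception as exc:
            failures.append((args, f"raised {exc!r}"))
            continue
        if actual != expected:
            failures.append((args, f"expected {expected}, got {actual}"))
    return failures

fixtures = [
    (([{"tale": "A", "score": 2}, {"tale": "A", "score": 4}], "tale", "score"),
     {"A": 3.0}),
    (([{"tale": "A", "score": 1}, {"tale": "B", "score": 5}], "tale", "score"),
     {"A": 1.0, "B": 5.0}),
]

print(co_audit(generated_mean_by_category, fixtures) or "all checks passed")

A production co-audit feature would need to generate or elicit such fixtures itself, and would have to handle spreadsheet formulas as well as code, but even this minimal test-first pattern mirrors the behaviour that several participants (e.g., P2, P3, P9) already improvised by hand.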
The system might share strategies with the user for identifying high quality references, or assist with specifying which types of references are suitable for supporting a particular response. To assist users in verifying AI-generated code or formulas, the system might generate and run tests to help detect failure cases. This would speed up iteration on coding tasks. In some cases, users may not be appropriately calibrated relative to their own expertise, potentially leading to over-reliance (e.g., as in <cit.>). Thus, as suggested in <cit.>, there is scope for systems to prompt users to consider their own expertise and whether additional verification assistance might be helpful. For goal formulation: Participants in our study used generative AI to help them think about their data by having Bing Chat provide potential research questions or alternative analysis strategies. However, it can go further by helping users critically think about their data-driven decisions. For example, when a user asks for AI assistance to recommend a data analysis task, the system could accompany its recommended approach with a critique of that approach outlining its potential limitations. This might prevent overreliance on the initial recommendation. The system could identify when a user's data might not be able to answer the questions they are asking, and recommend data collection strategies that would enable them to do so. To this end, <cit.> suggest that, alongside an `analysis execution' mode, AI assistance can enter a “`think' mode for specific planning suggestions, a `reflection' mode for connecting decisions and highlighting potential missed steps, and an `exploration' mode for higher-level planning suggestions”. A step further would be to help users realise that they may not yet have a clear problem or hypothesis in mind. For example, systems can surface self-evaluation notices that encourage users to reflect on their broader aims and help them in clarifying and scoping them into concrete goals <cit.>. For streamlining workflows: Previous research has noted the challenges of cross-application workflows, particularly when using feature-rich software termed praxisware <cit.>. Participants described a desire to integrate generative AI within the feature-rich applications they already use, rather than a separate experience which requires context switching between generative AI and application. This integration could help provide much of the context that our participants had trouble elaborating, as the application state already contains much of the context relevant to the task. It may also address issues with responses containing unfamiliar concepts, features, and programming languages. However, some participants were wary of this type of integration and saw it as potentially limiting the recommendations that generative AI could provide. For example, a question asked within R studio would produce methods and code suited for doing data analysis in R, but there might be more effective strategies in other applications (e.g., Excel) that might not be provided. This limitation could be circumvented if application-specific AI systems were able to delegate queries to other applications when appropriate. §.§ Implications for AI research So far we have discussed design opportunities to improve the user experience of generative AI-assisted data analysis. 
This section discusses current technical developments that could positively impact the underlying issues, describes remaining gaps, and hypothesizes how some issues might be addressed with foreseeable advances in technology. In the user journey, writing the first prompt is a significant step, and our study shows that there are several issues that make query formulation difficult. Several approaches have been investigated, such as improving user prompts automatically <cit.> (including commercial solutions[e.g., <https://www.junia.ai/tools/prompt-generator>]), methods to better select prompt templates <cit.>, prompt banks[e.g., <https://github.com/f/awesome-chatgpt-prompts>], and prompt documentation[e.g., <https://platform.openai.com/docs/guides/prompt-engineering/prompt-engineering>]. A less explored avenue relates to tuning prompts such that the output is not only correct, but aligned with the users' goals. There are secondary goals when users pose a question, such as learning or brainstorming (as identified in our study), and more research is needed on supporting users to write prompts that produce outputs aligned with personal goals. In our study, users also observed the importance and challenge of providing context. As Large Language Model (LLM) providers are continuously expanding model prompt windows (over 100,000 tokens in some cases), one might imagine that just by automatically ingesting more aspects of the user's work (e.g., the content of files on the user's filesystem, messages to collaborators, etc.) and passively relaying these to the model, we might be able to solve the context problem. Alas, several studies have shown that models struggle to identify the relevant portions in large prompts, and methods such as RAG (retrieval augmented generation) have been proposed <cit.>. The problem worsens when the context is not inherently textual; for example, when the task needs structured knowledge via (complex) tables or knowledge bases. Despite much research effort, current evaluation still shows a significant performance gap <cit.>. Users identified that generative AI can provide useful and diverse responses: new datasets, complex logic, general knowledge, and inspirational ideas. Unfortunately, reliability is an issue, and hallucinations, or even worse, inconsistent hallucinations (where similar or identical prompts sometimes resolve successfully and other times produce incorrect outputs), are a significant problem. Researchers have explored how to improve detection <cit.> and how to counteract hallucinations by grounding in verified sources <cit.>. No current approach can guarantee that the results generated by an LLM are correct, and research is moving towards building tools and agents that can support users in validating outputs <cit.>. This work is still at an early phase, but can draw from large bodies of related research such as verification, scientific reviews, and design critiques. An interesting technical challenge is to develop an approach that lets us predict whether a generation is likely to be correct. Because LLMs are typically optimised for next-token generation, this might require significant architectural changes. Nonetheless, this would open the door to better feedback integration in LLM generations. §.§ Expanding the Participatory Prompting method to other fields of interest Our research approach takes its name and inspiration from the participatory design tradition <cit.>.
That being said, the domain of data sensemaking to which we have applied it has very specific requirements that may not generalize to all use cases of the highly flexible technology of generative AI. We believe that the method can be extended to other domains, and here make five suggestions for fundamental aspects of the method that researchers should consider. Level of researcher intervention: The nature of what participatory design will find depends on the interrelationship of the maturity of the technology being investigated and the level of domain expertise of the participants, but is, crucially, mediated by the nature and level of activity and intervention of the researchers. Researcher mediation is a necessary part of participatory design because the approach is fundamentally about helping end-users find agency in a context of uncertainty around technology design. Researchers may be able to take the role of a passive conduit when the participatory design process is needed to enable access to a technology that is otherwise out of reach of end-users. However, when the technology or its application is very new or involves high levels of uncertainty, researchers may need to be active helpers for participants to articulate, enact, and reflect on their own needs. This is particularly valuable in the context of generative AI, where researcher involvement enables richer, in-the-moment collection of participant data at the level of individual prompts, rather than post-hoc recollections obtained after task completion. In Sections <ref> and <ref> we describe the nature of our participatory prompting sessions, and how we tried to stay passive – and in some cases could – but often had to be more active. The more active researchers are, the more potential there is for introducing biases, but this needs to be balanced against getting reasonable results when participant uncertainty is high, and also balanced against the spirit of enabling end-user agency that is central to participatory design. As such, it is important to plan, document, and account for how active researchers need to be and actually were, so that the results can be calibrated against others in the future. End-user agency and ascribing agency to generative AI: Related to the first point about researcher intervention is the fact that, in participatory prompting, end-user agency must be more than an issue of `just' giving users a voice in the design process. The joint agency of people and systems in participatory prompting needs to be carefully planned for, documented, and accounted for when the technology itself is generative. That is, while researchers guide participants to see how generative technology opens up pathways for tasks hitherto difficult or impossible for them, researchers also need to guide participants on unpacking their agency in the process and track where participants ascribe agency to the technology (as our participants sometimes did in discussing how Bing Chat was part of the sensemaking loop in Section <ref>, and anthropomorphising Bing Chat in Section <ref>). Ecological validity: Ecological validity is the extent to which a study mimics a real situation and its findings can be generalized outside the research setting <cit.>. While this is an issue in all research, it has a scale of relevance to participatory design depending on the domain of interest and nature of the technology.
In participatory prompting studies, beyond the researcher intervention mentioned above, two key aspects affecting ecological validity are the use of participants' own materials as resources for the generative AI system and, relatedly, the persistence of both resources and generative AI outputs across time and across technology surfaces (as noted by participants at the beginning of Section <ref>). To get meaningful results, researchers will need to decide in advance how they will represent to participants the nature of ecological validity of the participatory prompting exercise and its use and persistence of resources. Users in groups: Related to the third point, our study focused on one human using one generative AI system, such that the researcher was a facilitator of an individual participant's work. However, future participatory prompting studies will likely need to extend to participants acting in groups, and potentially even a hierarchy of groups (e.g. a team, the group the team belongs to, and the organisation that comprises the groups). This will entail decisions around whether participatory prompting will require exploration of each individual in a group having their own personal generative AI experience that they use in parallel to contribute to a wholly human group experience, or the group having one shared generative AI system that all can see and access serially, or even some combination of both. While such group action is quite common in traditional participatory design studies, it may be outside the current capabilities of generative AI systems (especially group action across time and technology surfaces), necessitating some combination of real and speculative usage (or increased design fiction or Wizard-of-Oz engagement). It may also require one or more complex meta-prompts for the generative AI system so that it can (appear to) act on behalf of groups or even whole organisations. These prompts will need to be carefully designed so as not to misrepresent either what is feasible or what is desirable in such situations. Domain of interest and expected outcomes: Our study focused on sensemaking from data, which will naturally only account for a proportion of the possible workflows for the flexible technology of generative AI. The method can clearly be extended to paradigms outside data sensemaking, such as artistic creativity, idea synthesis, personal reflection on goals, Socratic dialogue, educational testing and explanation, therapeutic discussion, team project planning, and more. While some of these (e.g. education) have empirically factual outcomes that users and researchers alike could agree on, others will have outcomes more related to personal satisfaction (e.g. therapeutic discussion) or shared satisfaction (e.g. the results of a creative output), potentially some combination of both factual and satisfaction outcomes, and personal and shared outcomes (e.g. the output of a team project plan). When extending the method, then, participants and researchers need to be clear about how the nature of the domain of interest is related to the nature of preferred and expected outcomes. This is especially important given the generative AI issues around non-deterministic outcomes and the potential for hallucinations, as participants voiced concerns about in Section <ref>. Such issues will be more relevant to some domains than others. For example, verification of sources will be crucial in some domains (e.g. information analysis), while others may have no sources to be verified (e.g.
creative expression), the source for some will be the participant themselves (e.g. articulating and synthesising rough ideas into a coherent draft), and the `sources' for others will be the stochastic patterns of apparent human behaviour output by the models, to then be treated as satisfactory or not by participants (e.g. role-based prompting, such as asking a generative AI system to act as a travel agent or car mechanic when giving advice, planning etc.). §.§ Limitations There were limitations inherent to the Bing Chat interface which constrained the kinds of behaviours we could explore. For example, some chat interfaces allow queries to be edited and re-submitted, but Bing Chat does not. If a participant wished to revise an earlier query, the best option was simply to submit the revised query as a new message, but the result might then be contaminated by the results from the previous version of the query due to the manner in which the context from the entire conversation is used in Bing Chat's responses. Nor was starting an entirely new conversation a good option, as participants often wished to continue and build on a successful conversation when revising a query. Moreover, there are features supported by other tools (e.g., ChatGPT supports plugins with varied functionality; Anthropic's Claude supports uploading and querying large documents) which we could not study. Thus, the choice of any particular tool will influence the scope of interactions which can be studied. The set of prompting strategies was developed by trial and error, guided by the experience and subjective judgments made by a particular set of researchers. There will be differences between how different groups approach the process of developing prompting strategies, and thus this aspect of the participatory prompting process is not easily reproducible. Making this process more consistent is an important avenue for research. As part of our protocol, each participant developed their own unique and personalised sensemaking task (Table <ref>). The themes emerging from a single participant engaging with a particular task may not generalise to other participants engaging with that same task. However, for our study this was an acceptable trade-off for three reasons. First, having a wider variety of tasks improves our coverage and generalisability of insights for data-driven sensemaking as a broad activity, which is more important than establishing generalisability for particular tasks. Second, having personal tasks developed by participants achieves ecological validity to a level that is very difficult to achieve using a synthetic suite of uniform tasks. Third, as previously mentioned, another aim of this study was to evaluate participatory prompting as a method, which is more holistically and rigorously achieved using a diverse range of ecologically valid tasks. As noted in Section <ref>, when encountering the researcher's mediation and pre-prepared prompting strategies, participants reflected on their own lack of awareness and perceived deficiencies in prompting strategies.
Many participants described their own unmediated prompting strategies as “too general” and reported difficulty understanding “where to start from.” To some extent this validates the utility of the participatory prompting protocol; by mediating participant requests and reformulating them according to effective prompting strategies, the protocol bypasses many potential sources of frustration and shallow experiential dead-ends that might derail a 1-hour interactive study and compromise the ability to study meaningful tasks. On the other hand, this reduces the external validity of these experiences, since participants will not have access to expert mediation during real work. The amount of mediation is therefore a balance that needs to be carefully struck, to avoid over-influencing the participants' workflow; enough intervention to enable interesting and meaningful interaction but not so much that the interaction is completely different to the kind that the participant might have had on their own. § CONCLUSION We studied how generative AI might affect the workflow of open-ended data analysis, i.e., sensemaking with data. We conducted participatory prompting sessions, in which participants worked with a researcher experienced in prompting strategies, to explore a data analysis problem of interest with the Bing Chat generative AI. Participants were asked to think aloud and reflect on the output at each turn of the conversation. The transcripts of the conversations with Bing Chat and the think-aloud data were thematically analysed. We found that generative AI was useful in both the information foraging loop (by reducing the manual effort required to search for relevant information) and in the sensemaking loop (by helping ideate hypotheses, and proposing strategies to test them). On the other hand, participants faced barriers to query formulation (such as expressing their intent in detail, and determining what context needed to be shared with the system); in the utility of the responses (such as being overwhelmed by the amount of information, the response failing to meet their needs, or being unable to understand unfamiliar concepts in the response); and to verification and trust (such as the manual effort of looking for supporting information, and detailed checking for hallucinations). The findings have design implications regarding balancing generative AI as a standalone application versus integration with other applications, helping users understand and provide context, managing the format and modality of responses, and metacognitive support. Besides viewing these as interaction design opportunities, we also highlight opportunities for technical research in machine learning to address some of these challenges. Further, we find that our data complements and extends our understanding of phenomena observed in previous research, such as the relationship of generative AI to search, creativity, common ground, folk theories, and metacognition. Finally, we reflect on the participatory prompting method as a research technique for eliciting opportunities and challenges for generative AI in knowledge workflows, consider its limitations, and how it might be applied to other domains. We thank our participants for their time, and our reviewers for their helpful feedback. ACM-Reference-Format
http://arxiv.org/abs/2407.01873v1
20240702011701
Automated Text Scoring in the Age of Generative AI for the GPU-poor
[ "Christopher Michael Ormerod", "Alexander Kwako" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.CL" ]
Spatio-Temporal Graphical Counterfactuals: An Overview Mingyu Kang, Duxin Chen, Ziyuan Pu, Jianxi Gao, and Wenwu Yu, Senior Member, IEEE This work is supported by the National Key R&D Program of China under Grant No. 2022ZD0120004, the Zhishan Youth Scholar Program, the National Natural Science Foundation of China under Grant Nos. 62233004, 62273090, 62073076, and the Jiangsu Provincial Scientific Research Center of Applied Mathematics under Grant No. BK20233002. (corresponding authors: Duxin Chen, Wenwu Yu) Mingyu Kang is with the School of Cyber Science and Engineering, Southeast University, Nanjing 210096, China. (e-mail: kangmingyu@seu.edu.cn) Duxin Chen is with the School of Mathematics, Southeast University, Nanjing 210096, China. (e-mail: chendx@seu.edu.cn) Ziyuan Pu is with the School of Transportation, Southeast University, Nanjing 210096, China. (e-mail: ziyuanpu@seu.edu.cn) Jianxi Gao is with the Department of Computer Science and Center for Network Science and Technology, Rensselaer Polytechnic Institute, Troy, New York 12180, USA. (e-mail: gaoj8@rpi.edu) Wenwu Yu is with the Frontiers Science Center for Mobile Information Communication and Security, School of Mathematics, Southeast University, Nanjing 210096, China, and also with the Purple Mountain Laboratories, Nanjing 211102, China (e-mail: wwyu@seu.edu.cn). July 8, 2024 ============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================= § ABSTRACT Current research on generative language models (GLMs) for automated text scoring (ATS) has focused almost exclusively on querying proprietary models via Application Programming Interfaces (APIs). Yet such practices raise issues around transparency and security, and these methods offer little in the way of efficiency or customizability. With the recent proliferation of smaller, open-source models, there is the option to explore GLMs with computers equipped with modest, consumer-grade hardware–—that is, for the “GPU poor.” In this study, we analyze the performance and efficiency of open-source, small-scale GLMs for ATS. Results show that GLMs can be fine-tuned to achieve adequate, though not state-of-the-art, performance. In addition to ATS, we take small steps towards analyzing models' capacity for generating feedback by prompting GLMs to explain their scores. 
Model-generated feedback shows promise, but requires more rigorous evaluation focused on targeted use cases. § INTRODUCTION Generative language models (GLMs), such as GPT-4 <cit.> and Claude <cit.>, have demonstrated powerful performance across a variety of language and reasoning tasks. In the field of education, researchers are exploring the extent to which these models can perform tasks such as automated essay scoring <cit.>, providing feedback to students <cit.>, individual tutoring <cit.>, and more <cit.>. Although GLMs show promise in automating certain educative tasks, there are critical limitations that hinder the possibility of wider implementation. For instance, researchers have shown that GLMs can be “jail-broken" to bypass safety guardrails <cit.> and can disclose personally identifiable information. Large GLMs are extremely large, requiring millions of dollars to train and deploy; as such, they are highly inefficient for specialized tasks <cit.>. These models are constantly being updated, sometimes leading to degraded performance <cit.>, and they are only accessible via Application Programming Interfaces (APIs), which lead to issues around replicability and leave little room to conduct rigorous research. It is for these reasons that we shift the focus away from large, proprietary GLMs toward smaller, open-source GLMs. In this study, we focus on two educational applications: Automated Text Scoring (ATS) and providing feedback—specifically, feedback that justifies scores based on the scoring rubric. Our study is the first to demonstrate that it is possible to efficiently fine-tune such GLMs to yield high-quality scores, and that (at least some) feedback from fine-tuned models can explain these scores. Our data is drawn from the publicly available Automated Student Assessment Prize (ASAP),[ASAP Automated Essay Scoring: ASAP Automated Essay Scoring: https://www.kaggle.com/c/asap-aeshttps://www.kaggle.com/c/asap-aes; ASAP Automated Short Answer Scoring: ASAP Automated Short Answer Scoring: https://www.kaggle.com/c/asap-sashttps://www.kaggle.com/c/asap-sas] which allows us to compare more easily our results to other approaches, and share our findings more broadly. More specifically, our research goals are as follows: * Fine-tune four recently-released, relatively small (8 GB or less) open-source GLMs for Automated Essay Scoring (AES) and Automated Short Answer Scoring (ASAS). * Compare the performance of these GLMs for AES and ASAS, relative to current state-of-the-art (SOTA) benchmarks. * Prompt GLMs to explain the scores that they provided based on item-specific rubrics, and characterize patterns of feedback via qualitative analysis. The organization of this paper is as follows: In Section <ref>, we review the theoretical and empirical context surrounding ATS, feedback, GLM architectures, and GLM training. In Section <ref>, we detail the characteristics of the data, models, prompts, and training methods used in this. We review results in Section <ref>, which is divided into (A) automated scoring and (B) feedback (of essays and short answers, respectively). Finally, we discuss some of the ramifications of our findings in Section <ref>, and suggest avenues for future research. In addition to this paper, for greater transparency, we make publicly available the scores and feedback generated by our fine-tuned GLMs. § BACKGROUND §.§ Automated text scoring AES and ASAS have been active areas of research and development since as early as 1966 <cit.>. 
There is widespread acceptance that, when carefully constructed and monitored, AES and ASAS can deliver reliable scores <cit.>. For this reason, ATS has become common in educational assessment. From a machine-learning perspective, both AES and ASAS are text classification problems, but from a measurement perspective, they assess different abilities and may require different approaches. For instance, rubrics for essay scoring are often designed to evaluate attributes such as organization, argumentation, grammar, and spelling in lengthier written responses. In contrast, rubrics for short answer questions focus on assessing specific knowledge and comprehension, often independent of grammatical and spelling considerations. For this reason, an approach that works well for AES may not always be suitable for ASAS and vice versa. There have been a plethora of approaches applied to both AES and ASAS. Perhaps the oldest of these is known as the Bag of Words (BoW), which generally combines rules based on linguistic features in addition to a set of frequency-based features <cit.>. As Natural Language Processing (NLP) began incorporating neural network-based models, these models were applied to AES and ASAS. Early implementations of neural network-based scoring <cit.> used layers of recurrent units such as the long-short-term memory (LSTM) unit <cit.> and gated recurrent units (GRU) <cit.> with attention <cit.>. The most influential change to NLP has been the rise of attention <cit.> and the transformer architecture <cit.>. The use of transformer-based Large Language Models (LLMs), such as BERT <cit.>, to perform ATS is now well-established in both AES <cit.> and ASAS <cit.>. In the past few years, generative language models (GLM)s like ChatGPT <cit.> have garnered immense excitement from both the media and academic circles. These GLMs are pretrained on a large corpus and then instruction-tuned to perform a multitude of tasks <cit.>. Attempts at ATS with GLMs have focused primarily on large, proprietary models, e.g. <cit.>, which raises several concerns in an educational setting. Firstly, given that student data can include personally identifiable information, the reliance on an externally managed API poses a security risk. Secondly, since the weights are not publicly available, there is no ability to apply tools from explainable AI (xAI) <cit.>. From the viewpoint of sustainability, closed-source models can require much more resources to run and can be much more expensive in the long run. Some researchers have explored AES and feedback using small, open-source models: In <cit.>, there is an exploration of prompting strategies and machine evaluation of feedback correlates with human evaluation of feedback; it is also clear, however, that with respect to AES, in-context GLM performance remains far below that of fine-tuned classification models. §.§ Model-generated feedback If we limit our research into GLMs merely to improve existing scoring systems, then we will have missed out on the potential to enhance educational assessment. There is a growing call from educators, students, and other stakeholders for these models to be used to provide feedback. Although model-generated feedback holds potential value for educators, there remain substantial hurdles to producing feedback that is useful. These limitations revolve around the the quality of feedback itself, as well as the difficult endeavor of validating that the feedback is indeed useful in a given context. 
With respect to feedback quality, even large GLMs produce hallucinations. In the field of text generation, hallucination refers broadly to text that, while grammatically correct, is also nonsensical, unfaithful, unreliable, inaccurate, irrelevant, etc. <cit.>. With respect to validation, there is no methodology in the field that can be used to easily validate such feedback. There are, moreover, no easy-to-implement systems to capture feedback in an on-going way from educators, which makes development of process-oriented tools extremely challenging. Beyond technological limitations, there are social implications that need to be considered in the face of novel educational technologies. The Substitution Augmentation Modification Redefinition (SAMR) model for technological innovation and adoption in educational settings, for instance, has been critiqued for justifying hierarchical approaches to product development and implementation <cit.>. Technological advances which are described or marketed as educational tools need to be developed in tandem with teachers, administrators, and other educational practitioners. Although much of the enthusiasm (as well as economic pressure) behind feedback generation is warranted, this cannot supersede the need for taking a rigorous and ethical approach towards researching and developing such tools. §.§ Architecture of Generative Language Models for the GPU Poor In contrast to the large, proprietary GLMs that have dominated public attention, there is a concomitant open-source movement that strives to make GLMs accessible to all. These relatively small, open-source models are typically released in ∼7Gb and ∼70Gb versions by researchers who are often affiliated with the same organizations that develop proprietary GLMs. For instance, Google recently released Gemma, Meta released Llama-3, and Microsoft released Phi-3. In contrast to their large, proprietary counterparts, these GLMs can run on (and can even be trained on) consumer-grade hardware, such as a single 24Gb GPU. That is, these models can be leveraged by the “GPU poor”, which includes most of us educational researchers. This open-source movement allows researchers to experiment directly with GLMs, and to explore targeted use cases in education. Researchers have just begun to explore smaller, open-source GLMs for ATS and feedback (e.g. <cit.>). Although performance generally increases with scale, smaller GLMs perform surprisingly well. GPT-4 and Claude are enormous, and it is no surprise that they dominate leaderboards, yet their smaller, open-source counterparts (which require only a fraction of the memory) are not far behind. One reason that smaller GLMs are not further behind is that, aside from small variations, they generally share the same architecture. Furthermore, within the current paradigm, there is a consensus among researchers that the primary bottleneck to increasing performance is data volume and quality, not model architecture. Current SOTA GLMs use a decoder-only architecture, sometimes combined with Mixture of Experts. The underlying design is actually simpler than the original transformer architecture advanced in Bidirectional Encoder Representations from Transformers (BERT, <cit.>). Following the advent of BERT, many researchers proposed variants of BERT that improved either the data <cit.>, architecture <cit.>, or training schemes <cit.> of the original model.
These models were predominantly encoder-only models which were made into classifiers by replacing the linear layer that predicts masked tokens with another randomly initialized linear layer (i.e. the classification head). Encoder transformer-based pretrained language models are typically given a classification head, where the loss function is cross-entropy (e.g., see <cit.>).[It is also possible, though less common, to use the single target variant with a mean-squared error loss function <cit.>.] Many previous authors have applied transformer-based language models to AES and ASAS in this way <cit.>. Indeed, this is the current paradigm in most of AES and ASAS. While this paradigm (of affixing a classification head) could also be applied to GLMs,[Indeed, this was done with the first GPT model <cit.>] this disregards the relationship learned by the model between the linear layer that predicts tokens and the transformer layers. The final output layer, however, can be left as is, and fine-tuning can focus on the intermediate layers (e.g., using QLoRA <cit.>, described below). Because this form of fine-tuning preserves the relationship learned by the model between the linear layer and the transformer layers, the models themselves retain much of their abilities as generative models when applied to more general tasks. This allows the models to be further prompted to produce feedback where the scores are at least able to be validated against known human-defined targets. The rapid growth of large language models, now reaching hundreds of billions of parameters, has introduced considerable engineering challenges for their large-scale deployment. A primary concern is training these enormous models within memory constraints. Generally, each parameter and its gradient are stored in 32-bit precision, requiring 4 bytes per trainable parameter. Advanced optimizers such as Adam with weight decay further increase memory consumption by storing additional data for each parameter. For example, fine-tuning a model with 7 billion parameters would typically need at least 28GB of video memory, excluding context length. To get around the typical memory requirements of GLMs, we employ a combination of two approaches: (1) quantization <cit.>, wherein parameters are stored at lower precision, and (2) Low-Rank Adapters (LoRA) <cit.>. The combination of these methods is commonly referred to as QLoRA <cit.>. Quantization converts the model's parameters from 32-bit floats to 4-bit NormalFloat data types <cit.>. Memory savings are further increased through double quantization, where the quantization constants themselves are also quantized. Despite using less memory, quantized models generally maintain robust performance. Additionally, memory can be further conserved by using 8-bit optimizers, which store the optimizer's moment estimates in 8-bit precision <cit.>. Low-rank adaptation (LoRA, <cit.>) is an increasingly popular method of parameter-efficient fine-tuning <cit.>. In the following section, we describe LoRA in detail. §.§ Training Generative Language Models for the GPU Poor LoRA is a powerful, parameter-efficient technique for fine-tuning GLMs. In combination with quantization, it makes it possible to fine-tune GLMs using less than 8Gb of memory, thereby making them more feasible for development and deployment. The central idea behind LoRA is that we seek to update the large feed-forward layers of the model by only considering a low-rank additive component, initially set to 0.
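To make this concrete before the formal description that follows, a minimal numpy sketch of such a low-rank additive update is given below. The dimensions, rank, and memory figures in the comments are illustrative values, not measurements from this paper, and the experiments themselves rely on the peft library rather than hand-rolled code like this.

import numpy as np

d, k, r = 4096, 4096, 32              # layer shape and LoRA rank; illustrative values only
rng = np.random.default_rng(0)

W0 = rng.normal(size=(d, k))          # frozen pretrained weight matrix
A = 0.01 * rng.normal(size=(r, k))    # trainable, small random initialization
B = np.zeros((d, r))                  # trainable, zero-initialized, so the additive component BA starts at 0

def lora_forward(x):
    # W_tilde @ x = W0 @ x + B @ (A @ x); only A and B receive gradients during fine-tuning
    return W0 @ x + B @ (A @ x)

x = rng.normal(size=k)
assert np.allclose(lora_forward(x), W0 @ x)          # identical to the base layer at initialization

print("trainable params per layer:", r * (d + k))    # r(d + k) = 262,144
print("full-layer params         :", d * k)          # d * k   = 16,777,216
# For scale: a 7B-parameter model needs roughly 7e9 * 4 bytes = 28 GB just to hold fp32 weights,
# which is why the base model is frozen and quantized to 4 bits while only A and B are trained.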
Mathematically, we suppose a linear layer is represented by L(x) = W_0 x + b where W_0 ∈ℝ^d× k is the original pretrained weight matrix and x is the input. It is known that updates to the linear transformations are sparse and, in many cases, approximated well by matrices of low rank. We seek to update the weight matrix W →W̃ in a single step by W̃ = W_0 + δ W = W_0 + BA where A ∈ℝ^r× k and B ∈ℝ^d× r. In this setting, it is expected that r ≪ min(d,k) so that the number of trainable parameters is r(k+d). Typical values of r (e.g., 2 < r < 32) are chosen such that the number of trainable parameters is far fewer than in full-parameter fine-tuning. The advantages of LoRA include reduced memory requirements for saving fine-tuned models, more efficient training, no impact on inference speed, and the capacity for combination with other parameter-efficient fine-tuning methods. The memory requirements for saving a fine-tuned large language model with LoRA are limited to the size of the pairs of update matrices, which is orders of magnitude smaller than the original model. Training is also more efficient and requires less GPU memory since gradients only need to be calculated for the update matrices. The impact on inference latency can be reduced to zero if the update matrices are added to the pretrained weights and subsequently removed from the model after loading. Finally, because the update matrices can be removed, LoRA can be combined with any other adapters <cit.>. § METHODS §.§ Data The Automated Student Assessment Prize (ASAP) AES and SAS datasets were originally made available to the public via two competitions hosted by Kaggle in 2012 <cit.>. The AES dataset encompasses a total of 12,978 essays, spanning 8 distinct stimuli.[We use the term stimuli or items instead of prompts, as the latter is easily confused with prompts used to query GLMs.] The SAS dataset consists of 17,043 total responses across 10 items that span various subjects, administered to students in grades 8 and 10 (depending on the item). Each response was scored by two human annotators. Accompanying the scored data are comprehensive scoring rubrics that include scoring guidelines and score ranges tailored to each stimulus. One of the advantages of using the AES and SAS datasets is that they are commonly used by other researchers, allowing us to compare our results with a wide range of previously established approaches. In order to maintain comparability with the extensive literature on these datasets, test-train splits were chosen to align with previous studies <cit.>. For the AES dataset, we follow the five-fold cross-validation defined by <cit.>. For the SAS dataset, we used the same splits used in previous studies (e.g., <cit.>). The (average) size of the training, development (or dev), and test sets for the AES and SAS datasets, in addition to some basic characteristics of the datasets, are presented in Table <ref>. The scoring rubric for the AES dataset emphasizes proper spelling and grammar usage, logical organization with smooth transitions between ideas, and the ability to exhibit analytical comprehension backed by supporting evidence. The rubrics for essay sets 1, 7, and 8 do this by breaking the score into several traits. The final score is the sum of each of the trait scores. While some of the essay topics depend on a particular prompt, the rubric can be generally interpreted independently of any prompt.
In contrast, the rubrics for the SAS items focus on specific pieces of information that need to be in a response in order to obtain a score. These short answer questions are designed to test knowledge and comprehension, hence grammar and spelling are not a part of the rubric. §.§ Performance Metric When evaluating the model performance, we compute Quadratic Weighted Kappa (QWK), which was the original metric specified in the Kaggle competitions <cit.>. A rough interpretation of this metric is that it measures the probability above chance that two raters agree: a QWK of 1 indicates exact agreement, 0 indicates random agreement, and -1 indicates perfect disagreement. This metric is also standard in the industry for comparing machine scoring performance <cit.>. §.§ Models In selecting models for our study, we prioritized those that could operate on standard consumer hardware while still delivering performance adequate for generating useful feedback. We identified four models that met these criteria and represented the forefront of open-source model development from major contributors in the field. These include (with affiliation in parentheses): Llama-3 (Meta), Mistral v0.2 (Mistral), Gemma-1.1 (Google), and Phi-3 (Microsoft). Table <ref> provides a brief overview of architectural characteristics, along with the total parameter count and references to their respective technical documentation. One model was trained for each item, resulting in a total of 40 trained models (4 model types x 10 items). §.§ Parameter-efficient fine-tuning Models were loaded through Huggingface-hub, quantized into smaller, 4-bit models using bitsandbytes, and trained using low-rank adaptors (LoRA). Learning rate was set to 2e-4 (except for Gemma-1.1, which was set to 1e-4 to ensure convergence), with a linear rate decay over 10 epochs. r and α, key parameters for LoRA, were each set to 32. Table <ref> lists how this r value affects trainable parameters and memory used for each of the four models. To ease GPU load, training data were not batched (i.e. batch size was 1), and context length was capped at 2,048 (note that this cap was not exceeded for any response). We used an early stopping criterion, based on best QWK performance on the development set, computed at the end of each epoch, within a span of 10 epochs. Models were trained on a 24GB A10 GPU. We calculated training and inference times of each model. Times were transformed so as to be relative to the training and inference times of a standard BERT-base classification model. Thus, for example, Mistral took 10.8 times longer to train than BERT, and 30.7 times longer to predict scores on the test set. The BERT model was trained in batches of 4, over the span of 20 epochs, and on the same hardware as the GLMs. §.§ Prompting for Score Prediction We used the following template to prompt the model for a score, given an item-specific max score, an item-specific rubric, and a student response (all indicated by curly brackets below). Note that “User” and “Assistant” role formats vary between models; roles were not entered into the prompt itself, but handled automatically via Huggingface’s apply_chat_template function. User You are a grading assistant. Assign a **Score** between 0 and {max_score} using the **Rubric** provided to a **Student Response** *Rubric** {item_rubric} *Student Response** {student_response} Assistant Score: Using the filled-out template as input, we constrained the model to generate one additional token. 
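A condensed sketch of the fine-tuning and scoring setup described in this section is shown below. The checkpoint name, target modules, and training loop are assumptions or placeholders rather than the authors' exact code; the 4-bit NF4 loading with double quantization, the LoRA setting r = α = 32, the chat-template prompting, and the single-token score generation follow the description above.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"        # assumed checkpoint name

bnb = BitsAndBytesConfig(                               # QLoRA-style 4-bit loading
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tok = AutoTokenizer.from_pretrained(model_id)
base = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb, device_map="auto")

lora = LoraConfig(r=32, lora_alpha=32, task_type="CAUSAL_LM",
                  target_modules=["q_proj", "k_proj", "v_proj", "o_proj"])  # assumed module names
model = get_peft_model(prepare_model_for_kbit_training(base), lora)
model.print_trainable_parameters()
# ... fine-tune here with the usual causal-LM loss on "<prompt> ... Score: <gold score>" sequences ...

def predict_score(rubric: str, response: str, max_score: int) -> int:
    """Fill the scoring template, generate a single token, and fall back to 0 if it is not an integer."""
    user = (f"You are a grading assistant. Assign a **Score** between 0 and {max_score} "
            f"using the **Rubric** provided to a **Student Response**\n\n"
            f"**Rubric**\n{rubric}\n\n**Student Response**\n{response}")
    text = tok.apply_chat_template([{"role": "user", "content": user}],
                                   tokenize=False, add_generation_prompt=True) + "Score:"
    inputs = tok(text, return_tensors="pt", add_special_tokens=False).to(base.device)
    out = model.generate(**inputs, max_new_tokens=1, do_sample=False)
    new_token = tok.decode(out[0, inputs["input_ids"].shape[1]:]).strip()
    try:
        return int(new_token)
    except ValueError:
        return 0

# QWK against the human scores can then be computed with, e.g.,
# sklearn.metrics.cohen_kappa_score(gold, predicted, weights="quadratic").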
If the model generated a non-integer token, then the score was given a 0. §.§ Prompting for Feedback Generation After prompting for score predictions, we incorporated the predicted scores into another template to prompt the model for feedback generation. Although much of the feedback generation template is identical to the score prediction template, the model was prompted separately. A maximum of 256 new tokens were produced for AES feedback and 128 tokens for SAS. User You are a grading assistant. Assign a **Score** between {min_score} and {max_score} using the **Rubric** provided to a **Student Response** **Rubric** {item_rubric} **Student Response** {student_response} Assistant Score: {predicted_score} User Using the rubric, specify why you gave the response a score of {predicted_score}. Assistant[This last Assistant Prompt was only included for short answer items] The response was given a score of {predicted_score} because §.§ Qualitative Analysis of Feedback To characterize the differences in feedback provided by each of the 4 models, we sampled student responses with predicted scores that matched human rater scores. For the SAS dataset, we sampled responses across all possible score points for 2 science items (Items 1 and 10) and 2 ELA items (Items 3 and 7). We analyzed 13 student responses across 4 items (and 2-3 possible score points), for a total of 52 explanations. For the AES dataset, we sampled responses across all possible score points for 2 stimuli (Items 2 and 3). We analyzed 10 student responses across 4 items (and 4-6 possible score points) for a total of 40 explanations. In analyzing responses, we took a grounded approach (Creswell and Poth, 2016). The philosophy behind grounded qualitative research is to let patterns emerge from the data, rather than approach the data with pre-defined codes or hypotheses. More specifically, analyses consisted of two phases. In the first phase, we read through responses, noted salient trends, summarized notes, and revisited notes for each response. In the second phase, we summarized these notes into general patterns and trends, and identified consistent and inconsistent examples in the data. § RESULTS Results are divided into four sections: In sections 1 and 2, we present the results of fine-tuned GLMs on AES and ASAS, respectively; in sections 3 and 4, we characterize feedback after prompting GLMs to explain their scores based on item-specific rubrics, for AES and ASAS, respectively. §.§ Automated Essay Scoring Table <ref> presents the results of fine-tuned GLMs on performing AES on the ASAP-AES dataset. We provide comparisons to several notable benchmarks pertinent to the task. These include the original human-human agreement score <cit.>, the BoW results reported in <cit.> and subsequent modifications using attention mechanisms <cit.>, the original BERT results <cit.>, the current SOTA performance <cit.>, “fine-tuned” GPT-3.5 <cit.>, and GPT-4 <cit.>. In addition to these important reference points, we also provide results from off-the-shelf, i.e. not fine-tuned, models (no asterisks) alongside fine-tuned models (indicated with asterisks). The fine-tuned generative models performed well compared to standard benchmarks. They exceeded performance of AES, BERT (base), fine-tuned GPT-3.5, and the combination of LSTM, CNN, and attention mechanisms. Although none of the models achieve the current SOTA performance (a distinction held by NPCR), each individual model surpasses many previous benchmarks.
Fine-tuned GLMs also seem comparable to, if not above, human-level performance.[Regarding comparability to human-human QWK, it should be noted that the models were trained on the resolved scores, which have different ranges than the original human scores. According to the rubric, the resolved scores are calculated as the sum of the two human scores for items 1, 7, and 8.] §.§ Automated Short Answer Scoring The performance of GLMs fine-tuned for ASAS is presented in Table <ref>. Fine-tuned models are indicated with asterisks. As with AES, there are a number of important results in the literature to compare against our own results. Firstly, there is the human agreement score <cit.>, the rule-based approach known as AutoSAS <cit.>, the current SOTA given by an ensemble of pretrained models <cit.>, “fine-tuned” GPT-3.5 <cit.>, and GPT-4 <cit.>. Results from non-fine-tuned versions of each of the 4 models (no asterisks) are also included. In contrast with AES, the results of pretraining these large models offer comparable, but not superior, performance to BERT. The GLMs do seem to outperform previous benchmarks on items 7 and 8; the results for Gemma and Mistral are above previously known models <cit.>. The performance on items 4 and 9, however, is lower than the benchmarks provided. §.§ Automated Feedback for Essay Scoring After GLMs predicted scores, we prompted them for feedback—in this case, an explanation for the score based on the scoring rubric. To illustrate the type of feedback generated by each of the four models, we present the feedback generated in response to an essay on item 1 (Table <ref>). The essay was assigned a score of 8 by all GLMs. By examining the feedback across items, responses, and models, we found that the feedback provided by fine-tuned versions of Mistral and Gemma tended to be more repetitive, as the models seemed to settle into a loop more readily than Phi-3 and Llama-3. For stimuli where the rubric relied on external information, such as the understanding of a text, the language models struggled to produce sensible feedback and often only summarized and reiterated aspects of the response, rather than detailing why the score was assigned.[It is worth noting that the stimuli were very long and including the stimuli in addition to the full rubric would have exceeded the context limits we imposed for practical considerations. Secondly, in the case that the resolved score was the addition of the trait scores for each rater, the rubric described only the rater score, not the resolved score. So we employed a language model to summarize the differences between a high- and a low-scoring essay. Perhaps managing this better could lead to more constructive feedback.] The models seem to provide much clearer feedback when the rubric could be interpreted independently of the stimuli (i.e. items 1, 2, 7, and 8). The most useful feedback overall seemed to come from fine-tuned versions of the Phi-3 and Llama-3 models. Even though they provided the most accurate explanations, they were not immune from repetition or errors. §.§ Automated Feedback for Short Answer Scoring In Table <ref>, we present feedback for a 1-point response to Item #10. We selected this particular response because model feedback was typical of what we observed for other items and score points.
For Item #10, to get full credit (2 points), the student had to (1) “describe how [a chosen color] might affect the inside of the doghouse” and (2) “use results from the experiment to support [their] description.” The student response for this particular example reads, “black. it might effect it,by using this color it can make the doghouse more warmer on summer days” (Id: 26865). The response does state that the color black would make the doghouse warmer (1 point), but fails to reference the experiment (0 points). Because it met 1 of the 2 criteria outlined in the rubric, it received a score of 1. Table <ref> provides the explanations given by each of the 4 models. Mistral did not produce an explanation for the score. Rather, it seemed to summarize part of the item stem, or perhaps it generated its own (student-like) response. It was common for Mistral to generate its own responses, which it would score, and subsequently produce another response and another score, and so on in a loop (not shown here). In the above example, Gemma seems to have produced a (student-like) response, and provides no explicit reference to the rubric. The response is separated into two, however, which may indicate some kind of pastiche, blending a response with the form of the rubric. Although not evident in this example, Gemma tended to summarize or repeat student responses in its explanations. These summaries were sometimes accompanied by relevant aspects of the rubrics. In contrast to Mistral and Gemma, Llama-3 referenced the student response in an evaluative way. It mentioned the color chosen by the student, and it quoted a phrase from the response (“it might effect it”) that could impact its score. At the end, Llama-3 summarized its explanation with a definitive, “Therefore, the response was given a score of 1,” as if it had produced a satisfying justification. Yet there are two serious flaws in Llama-3’s explanation. First, it included statements that contradict the student response, i.e., the response was not “unclear about what the color black would do to the temperature,” as Llama-3 claimed. And second, it omitted one of the criteria in the rubric (i.e. referencing the experiment), and entirely fabricated another in its place (i.e. the color does not have to be black, as implied). Although the explanation is appropriate in style, contains evaluative language, and references the student response, it misrepresents the rubric and the response. This was common of Llama-3 explanations, which were often odd combinations of the rubric and summaries of students’ responses. Lastly, Phi-3 provided a succinct and accurate explanation of why the student would receive a 1 for this response. Phi-3 was not infallible, but it often evaluated student responses with some justification of the score or explicit reference to the rubric. § DISCUSSION §.§ Summary In this paper, we have demonstrated that it is possible to fine-tune small, open-source GLMs to (1) achieve adequate performance for AES and ASAS and (2) generate appropriate rationales (at least in some cases) for predicted scores. Our method pushes beyond the paradigm of appending a classification head to a pretrained language model, yet avoids the many issues involved in querying large, proprietary GLMs via APIs. We find that parameter-efficient fine-tuning (using no more than a 24Gb GPU) for relatively small, open-source GLMs exceeds performance of proprietary GLMs that are orders of magnitude larger. 
Furthermore, due to the efficient nature of training checkpoints, the only parameters that are required to serve these models are the LoRA weights, which amount to less than 100 million parameters, fewer parameters than a BERT model. Given the widespread enthusiasm and fear around GLMs, it may come as a surprise that they did not lead to SOTA results. Ensembles of smaller LMs remain more efficient and performant than GLMs for AES and ASAS. One of the unique advantages of using GLMs is the ability to move beyond scoring alone—in this study, we prompt the fine-tuned models to provide an explanation of the score. We found that models were capable of (sometimes) generating adequate justifications, and that Phi-3 was more consistent than the other models. Yet this study does not undertake a thorough analysis of model-generated feedback. Although preliminary results are encouraging, rigorous analysis is needed. This would include carefully defined constructs of interest, collaboration with educators and trained human raters, and targeted use cases that identify whom the feedback is for, when the feedback should be provided, and what shortcomings need to be avoided. It is noteworthy, however, that fine-tuned GLMs were able to generate feedback at all, especially given that they were fine-tuned to predict scores (i.e. not feedback). It has been shown that, even with a small amount of fine-tuning, model behavior can change dramatically <cit.>. The performance of the GLMs explored in this study is promising, particularly since they avoid the critical issues of proprietary models. Firstly, these models can be run securely and efficiently with relatively low requirements. Although security is not a concern when examining performance on a publicly-available dataset, it is a concern in many educational contexts, where personally identifiable information about students may be shared with the organization hosting the GLM. Secondly, in order to interpret the output of these models, we must be able to access the weights. The lower computational requirements of smaller, open-source models allow them to be more readily used in explainable AI workflows. Thirdly, we believe that GLMs used for educative tasks should be developed by educators and educational researchers. The open-source movement in AI permits some agency in developing these tools, without relegating decisions to a few tech-focused companies. The methods prescribed in this paper can be duplicated without recourse to industrial-scale compute power. §.§ Comparison to Proprietary GLMs With respect to scoring, our fine-tuned results far exceed those of “fine-tuned” GPT-3.5 for both AES <cit.> and ASAS <cit.>. We put “fine-tuned” in quotation marks because the fine-tuning procedure(s) available to the public are undisclosed and optimization (e.g. modulating the learning rate) is not currently available. Given that GPT-3.5 is vastly larger in size (175B) and requires far more computation <cit.> compared to the models explored in our study, it is surprising that its performance is so underwhelming. Our results are also superior to (non-fine-tuned) GPT-4 with respect to both AES <cit.> and ASAS <cit.>. It should be noted that fine-tuning is not currently available for GPT-4; yet even if fine-tuning were available and results were adequate, these would be subject to the same limitations outlined above.
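To connect the earlier point about serving only the LoRA weights with code: under peft, the artifact saved per item is just the adapter, which can be re-attached to (or merged into) a shared base model at serving time. A minimal sketch follows; the checkpoint name and adapter directory are placeholders carried over from the earlier sketch, not names from this paper.

from transformers import AutoModelForCausalLM
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"     # assumed, as in the earlier sketch
adapter_dir = "adapters/aes_item1"                  # written earlier via model.save_pretrained(adapter_dir)

# Only the LoRA A/B matrices and a small config live in adapter_dir; the base checkpoint is shared.
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
scorer = PeftModel.from_pretrained(base, adapter_dir)

# Optionally fold W0 + BA back into the base weights so inference latency matches the unadapted model.
merged = scorer.merge_and_unload()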
We note that our study does not undertake a comparison of feedback between large, proprietary GLMs and smaller, open-source GLMs; it may be that large GLMs excel in this area. §.§ Limitations As noted previously, this study does not attempt to provide quantitative empirical evidence regarding the validity of model-generated feedback. Model-generated feedback, although promising, requires more rigorous evaluation that should be undertaken in collaboration with educational practitioners. Even for the relatively humble task of providing an explanation for a score, models were far from infallible. More research is needed to validate that the model is consistently connecting scores to the rubric. There are others who are exploring the more complicated task of producing model-generated feedback that is useful to educational practitioners (e.g. <cit.>). Robust feedback systems likely require on-going evaluation, and may depend on human-in-the-loop frameworks. Although there is growing pressure to develop educational tools using GLMs, there is no easy method of validating feedback. At this stage, the validation of feedback should be a primary concern for the future use of GLMs in education. This may mean the creation of datasets that are focused on feedback, or the use of existing information, such as essay trait scores, to validate existing feedback. To help facilitate such analyses, we have open-sourced the feedback provided on a single validation sample in the hopes of prompting further analyses[https://github.com/christopherormerod/kaggle_aes_asas_feedback]. One thing that is fairly clear at this stage is that these models are computationally capable of being used in such a pipeline. The question remains, however, as to whether they are valid for carefully defined, targeted use cases.
http://arxiv.org/abs/2407.02207v1
20240702121402
Global calibration of large-scale photonic integrated circuits
[ "Jin-Hao Zheng", "Qin-Qin Wang", "Lan-Tian Feng", "Yu-Yang Ding", "Xiao-Ye Xu", "Xi-Feng Ren", "Chuan-Feng Li", "Guang-Can Guo" ]
quant-ph
[ "quant-ph", "physics.app-ph", "physics.optics" ]
http://arxiv.org/abs/2407.02062v1
20240702084943
Are Data Augmentation Methods in Named Entity Recognition Applicable for Uncertainty Estimation?
[ "Wataru Hashimoto", "Hidetaka Kamigaito", "Taro Watanabe" ]
cs.CL
[ "cs.CL", "cs.AI", "cs.LG" ]
Fuzzy synthetic method for evaluating explanations in recommender systems Jinfeng Zhong1 Elsa Negre2 July 8, 2024 ========================================================================= § ABSTRACT This work investigates the impact of data augmentation on confidence calibration and uncertainty estimation in Named Entity Recognition (NER) tasks. For the future advance of NER in safety-critical fields like healthcare and finance, it is essential to achieve accurate predictions with calibrated confidence when applying Deep Neural Networks (DNNs), including Pre-trained Language Models (PLMs), as a real-world application. However, DNNs are prone to miscalibration, which limits their applicability. Moreover, existing methods for calibration and uncertainty estimation are computationally expensive. Our investigation in NER found that data augmentation improves calibration and uncertainty in cross-genre and cross-lingual settings, especially in the in-domain setting. Furthermore, we showed that the calibration for NER tends to be more effective when the perplexity of the sentences generated by data augmentation is lower, and that increasing the size of the augmentation further improves calibration and uncertainty. § INTRODUCTION Named Entity Recognition (NER) is one of the fundamental tasks in Natural Language Processing (NLP), whose goal is to find mentions of named entities and classify them into predefined categories. The information predicted by NER is essential for downstream tasks like event detection <cit.>, information retrieval <cit.>, and masking of personal user information <cit.>. Due to this demand, NER is the underlying technology for information extraction from text and documents. Based on recent advances in Deep Neural Networks (DNNs), NER performance has also improved, as in other NLP fields. In recent years, Pre-trained Language Model (PLM)-based architectures, such as BERT <cit.> and DeBERTa <cit.>, have been strong baselines in many NLP tasks, including NER. In general, however, DNNs are prone to miscalibration <cit.>, including PLMs <cit.>; calibration means the predicted confidence of the model aligns with the accuracy.[For example, a predicted confidence of 0.70 from a perfectly calibrated network should correspond to 70% accuracy for those inputs.] The problem causes DNNs to make incorrect predictions with high confidence, which limits the applicability of DNNs in a number of domains where the cost of errors is high, e.g., healthcare and finance. Therefore, DNNs need to provide high prediction performance with appropriately calibrated confidence at the same time. Confidence calibration and uncertainty estimation methods are ways to solve the miscalibration of DNNs, and have been applied in NLP tasks such as text classification <cit.>, structured prediction <cit.>, question answering <cit.>, and machine translation <cit.>. However, many methods for confidence calibration and uncertainty estimation, typically Monte-Carlo Dropout (MC Dropout) <cit.>, are computationally expensive due to multiple stochastic inferences, making them difficult for real-world application. Data augmentation has also been applied to NER <cit.>, though it has focused on generalization ability in low-resource settings. In computer vision (CV), data augmentation makes the model more robust to the input and improves confidence calibration <cit.>, because the same labels are trained on representations of the input that differ from the original data.
Based on the findings of these previous studies, there is a possibility that data augmentation in NER can improve confidence calibration without increasing inference time, in contrast to the conventional confidence calibration and uncertainty estimation methods. In this study, we conducted comprehensive experiments to analyze the impact of data augmentation methods for NER <cit.> on the confidence calibration and uncertainty in the cross-genre and cross-lingual settings on OntoNotes 5.0 <cit.> and MultiCoNER <cit.>, respectively. Our experiments yield several findings. First, some data augmentation methods in NER lead to improved confidence calibration and uncertainty estimation, especially in-domain. In particular, entity-prediction-based data augmentation <cit.> and entity replacement from the same entity type <cit.> show good performance. On the other hand, common confidence calibration methods, MC Dropout or TS <cit.> have worse confidence calibration and uncertainty estimation performance than the data augmentation methods in NER, even though the data augmentation methods do not aim to improve confidence calibration and uncertainty estimation. Moreover, increasing the augmentation size improves performance in confidence calibration and uncertainty estimation. The improvement tends to be better the lower the perplexity of the sentences generated by the data augmentation. Our code will be released after acceptance. § RELATED WORK Named Entity Recognition In the last decade, NER using DNNs has been widely successful; <cit.> reported a sequence-labeling model combining bi-directional LSTM with CRF (BiLSTM-CRF). <cit.> proposed contextualized character-level word embeddings combined with BiLSTM-CRF. In recent years, NER models based on PLMs, such as BERT <cit.>, RoBERTa <cit.>, and DeBERTa <cit.>, have achieved state-of-the-art performance. Uncertainty Estimation In general, DNNs are prone to miscalibration and overconfidence <cit.> especially without pretraining <cit.>. One way to estimate uncertainty is to run multiple stochastic predictions. Deep Ensemble <cit.> trains multiple DNN models and integrates their multiple stochastic predictions to make a final prediction. MC Dropout <cit.> applies Dropout <cit.> regularization at both training and inference time, and by taking multiple samples of the network outputs during inference. These are known to perform calibration well in many cases <cit.>, but their practical use is hampered by the fact that they make multiple probabilistic predictions. A relatively lightweight calibration method is the post-hoc approach. For example, temperature scaling <cit.> performs calibration via dividing logits by a constant, which is a simple and lightweight baseline. Data Augmentation Data augmentation methods are widely used in machine learning, CV, and NLP areas. More recent attention has focused on the provision of data augmentation methods to improve calibration and uncertainty. Test-time augmentation (TTA) <cit.> generates multiple samples during inference and integrates the predictions to estimate the prediction uncertainty. MixUp <cit.> uses linear interpolation between two samples to augment a new sample with soft labels, which has been investigated for situations where it is effective for calibration <cit.>. In NLP tasks, the impact of data augmentation on calibration in text classification has been investigated in recent study <cit.>, but only for In-domain (ID) and not for NER. 
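To illustrate why such multi-pass methods are costly at inference time, a minimal MC Dropout sketch for a token classifier is shown below. The model object is assumed to be any Hugging Face token-classification model, and the 20-pass setting mirrors the configuration used later in this paper; this is an illustrative sketch, not the authors' implementation.

import torch

@torch.no_grad()
def mc_dropout_probs(model, input_ids, attention_mask, n_passes: int = 20):
    """Average the softmax outputs of n_passes stochastic forward passes (dropout left active)."""
    model.train()                      # for transformer encoders, train() re-enables dropout at inference
    probs = []
    for _ in range(n_passes):
        logits = model(input_ids=input_ids, attention_mask=attention_mask).logits
        probs.append(torch.softmax(logits, dim=-1))
    model.eval()
    return torch.stack(probs).mean(dim=0)   # inference cost grows linearly with n_passes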
Furthermore, it has been found that predictive performance is driven by data augmentation in NER <cit.>, but these studies have focused only on the predictive performance of NER and have not evaluated for calibration and uncertainty. This is the first study to comprehensively investigate the impact of data augmentation on calibration and uncertainty in NER, both in ID and OOD (Out-of-domain) settings. § METHODS In this section, we describe the popular baseline methods for confidence calibration and data augmentation methods for NER. Details about existing calibration methods are described in Appendix <ref>. §.§ Existing Calibration Methods Baseline Baseline uses the maximum probability from the softmax layer. Temperature Scaling (TS) TS <cit.> is a post-processing technique for calibrating the confidence scores outputted by a neural network. It involves scaling the logits (i.e., the outputs of the final layer before the softmax) by a temperature parameter T before applying the softmax function to obtain the calibrated probabilities. Label Smoothing (LS) LS <cit.> is prevalent regularization technique in machine learning, introduces a controlled level of uncertainty into the training process by modifying the cross-entropy loss. Monte-Carlo Dropout (MC Dropout) MC Dropout is a regularization technique that can be used for uncertainty estimation in neural networks, which requires multiple stochastic inferences <cit.>. We perform 20 stochastic inferences and output their average. §.§ Data Augmentation Methods for NER We investigate data augmentation methods in NER <cit.> for confidence calibration and uncertainty estimation. Label-wise Token Replacement (LwTR) LwTR uses binomial distribution to determine whether a token is replaced. The chosen token is randomly replaced with another token with the same label based on label-wise token distribution on training data. Thus, LwTR keeps the original label sequence. Mention Replacement (MR) Unlike LwTR, MR replaces an entity with another entity with the same label instead of a token. Other parts are the same as LwTR. Since entities can have multiple tokens, MR does not keep the original label sequence. Synonym Replacement (SR) SR is similar to LwTR except that SR replaces a token with its synonym in WordNet <cit.>. Since the synonym can have multiple tokens, SR does not keep the original label sequence. Masked Entity Language Modeling (MELM) MELM <cit.> performs data augmentation using a language model that predicts contextually appropriate entities for sentences in which entity parts are masked by entity markers. § EVALUATION METRICS We use Expected Calibration Error (ECE), Maximum Calibration Error (MCE), and Area Under Precision-Recall Curve (AUPRC) to evaluate confidence calibration and uncertainty estimation. §.§ Expected Calibration Error (ECE) ECE <cit.> measures the difference between the accuracy and confidence of a model. Specifically, it calculates the difference between the average confidence and the actual accuracy of the model on different confidence levels. Formally, ECE is defined as: ECE = ∑_b=1^B |𝒟_b|/n|acc( 𝒟_b) - conf( 𝒟_b) | where B is the number of confidence interval bins, 𝒟_b is the set of examples whose predicted confidence scores fall in the b-th interval, n is the total number of examples, acc(𝒟_b) is the accuracy of the model on the examples in 𝒟_b, and conf(𝒟_b) is the average confidence of the model on the examples in 𝒟_b. 
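A minimal numpy sketch of this computation with equal-width confidence bins is given below. The bin count B = 10 is an assumed choice rather than a value reported here, and the MCE described in the next subsection differs only in taking the maximum per-bin gap instead of the weighted sum.

import numpy as np

def calibration_errors(confidences, correct, n_bins: int = 10):
    """Return (ECE, MCE) over equal-width confidence bins.

    ECE = sum_b |D_b|/n * |acc(D_b) - conf(D_b)|; MCE is the maximum per-bin gap.
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)       # 1 if the prediction was correct, else 0
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n, ece, mce = len(confidences), 0.0, 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.sum() / n * gap
            mce = max(mce, gap)
    return ece, mce

# Example: entity-level confidences paired with whether each predicted entity was correct.
ece, mce = calibration_errors([0.95, 0.72, 0.88, 0.51], [1, 1, 0, 0])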
§.§ Maximum Calibration Error (MCE) MCE <cit.> is the maximum difference between the accuracy and the confidence of the model on different confidence levels. Formally, MCE is defined as: MCE = max_b=1^B |acc(𝒟_b) - conf(𝒟_b) |. MCE takes the maximum calibration error over the bins, not the expectation; a smaller MCE means that the model's predictions are less likely to be far off in a given confidence region. §.§ Area Under the Precision-Recall Curve (AUPRC) AUPRC is a summary statistic of the relationship between precision and recall at different thresholds. The higher the value, the higher the overall precision at a given threshold. § EXPERIMENTAL SETTINGS §.§ Datasets We conducted experiments on two different NER datasets to evaluate the performance of confidence calibration methods in different settings. For the cross-genre evaluation, we used the OntoNotes 5.0 dataset <cit.>, which consists of six different genres: broadcast conversation (𝚋𝚌), broadcast news (𝚋𝚗), magazine (𝚖𝚣), newswire (𝚗𝚠), telephone conversation (𝚝𝚌), and web data (𝚠𝚋). This dataset is commonly used for NER evaluation in a cross-domain setting <cit.>. For the cross-lingual evaluation, we used the MultiCoNER dataset, which is a large multilingual NER dataset from Wikipedia sentences, questions, and search queries <cit.>. We selected English as the source language and English, German, Spanish, Hindi, and Bangla as the target languages. The details of the dataset statistics are provided in Table <ref>. §.§ Training Details In all experiments, we train our models on a single NVIDIA A100 GPU with 40GB of memory. We used the MIT-licensed mDeBERTaV3 <cit.>, whose model size is 278M parameters, as a multilingual transformer encoder from Hugging Face 𝚝𝚛𝚊𝚗𝚜𝚏𝚘𝚛𝚖𝚎𝚛𝚜 <cit.> pre-trained model checkpoints, and extracted entities via sequence labeling. Cross-entropy loss is minimized by AdamW <cit.> with a linear scheduler <cit.>. The batch size is 32, and gradient clipping is applied with a maximum norm of 1. The initial learning rate was set to 1e-5. To avoid overfitting, we also applied early stopping with patience=5. For the temperature parameter in TS, we used Optuna <cit.> to optimize the temperature parameter based on dev set loss with a search range of [0.001, 0.002, ..., 5.000] in 100 trials. In addition, we optimized the binomial distribution parameter for data augmentation methods using the dev set by a grid search in the range of [0.1, 0.2, ..., 0.8]. In LS, we conducted a grid search in the range of [0.01, 0.05, 0.1, 0.2, 0.3] to optimize the smoothing parameter. In the case of MELM, the mask rate η during fine-tuning and the mask parameter μ during generation are hyperparameters. We conducted a grid search for each hyperparameter in the range [0.3, 0.5, 0.7], as in <cit.>. We perform each experiment 10 times using different random seeds, collect evaluation metric values, and report their average and standard deviation. For convenience, the reported values are multiplied by 100. §.§ Evaluation Details The NER model calibration is evaluated based on the "Event of Interests" concept introduced in a previous study <cit.>. Since the full label space |𝒴| is large for structured prediction tasks such as NER, we focus instead on the event set L(x), which is the set containing the events of interest E ∈ L(x) obtained by processing the model output.
There are two main strategies for constructing L(x): The first strategy is to construct L(x) only from the events obtained by the MAP label sequence prediction of the model; The second strategy is to construct L(x) from all possible label sequences; The first strategy is easy to obtain events, but the coverage of events is low depending on the model's prediction. The second strategy provides a high coverage of events, but is computationally expensive to obtain events. <cit.> is based on the first strategy, where the entities extracted by the NER model are calibrated on the basis of forecasters (e.g., gradient boosting decision trees <cit.>), which are binary classifiers separate from the NER model. Since the training dataset for forecasters consists of entities extracted by the NER model, more entities are needed to improve the uncertainty performance of the forecasters. Therefore, for example, the top-k Viterbi decoding of the CRF is used to increase the entity coverage and the size of the forecaster's training dataset. On the other hand, <cit.> is based on the second strategy, where it introduces a method to find the probability that a span has a specific entity type for datasets with short sequences, such as WikiAnn <cit.>, with restricted token sequences and span lengths. However, this method is computationally difficult for datasets with longer token sequences and more complex label spaces, such as OntoNotes 5.0 and MultiCoNER, because the number of spans explodes. We therefore simplify the evaluation process by measuring the calibration of the entity span obtained from the NER model's MAP label sequence prediction of the model. Uncertainty performance is evaluated by taking the product of the probabilities of each token corresponding to an entity as the probability of one entity. § RESULTS AND DISCUSSION We present the performance of cross-genre and cross-lingual confidence calibration and uncertainty estimation as the main results. The cross-genre evaluations are quantified by learning on a training set in one genre and evaluating calibration and uncertainty on a test in another genre. Similarly, in the cross-lingual evaluations, we train the model in one language (in this research, we use English; 𝙴𝙽) and evaluate the calibration and uncertainty on a test set in another language. §.§ Cross-genre Evaluation The results shown in Table <ref> demonstrate ECE and MCE in OntoNotes 5.0 for NER in the ID setting, which the source domain and target domain are the same. The table results show that data augmentation methods consistently have better calibration performance than TS, LS, and MC Dropout, which have been considered to work for general classification problems, in the evaluation of calibration performance, in the ID setting. In particular, when the source genre is 𝚝𝚌, MELM and other data augmentation methods show superior calibration performance, with up to 6.01 % improvement for ECE and 5.62 % improvement for MCE compared to Baseline. As shown in Table <ref>, the 𝚝𝚌 domain is not a data-poor setting, where there is sufficient training data and data augmentation is generally effective. MR and SR also show good calibration performance following MELM. Moreover, we can see that applying data augmentation methods do not increase inference time (See Appendix <ref> Table <ref>). On the other hand, as Table <ref> shows, when the target domain is OOD, especially when the target (e.g. 
OntoNotes 5.0 𝚠𝚋) is far from the source domain, the degree of improvement in the uncertainty estimation performance of data augmentation is not large, and sometimes even decreases. We presume that the augmented data is not far from the original training set, because data augmentation methods we targeted in this study are based on the replacement of tokens or entities. Considering a recent study that indicates models tend to be more overconfident in areas with less training data <cit.>, we can consider calibration performance in OOD sets, especially far from the source domain, will not improve by data augmentation for NER, while the performance in ID sets will be better than existing methods. To illustrate this, we performed t-SNE <cit.> for the token embeddings with only entity token from trained Baseline model, shown in Figure <ref>. We can understand that the token embeddings from augmented data are near the train set or ID test set, while the OOD test sets have some poorly covered regions. Generating sentences that are distant from the training data set and semantically aligned entities from label description for uncertainty estimation is an interesting direction for future research. AUPRC scores are shown in Table <ref>. Among the existing methods, TS shows superior performance; in data augmentation methods, MELM is not as good as in the case of calibration metrics such as ECE and MCE, and MR tends to show superior uncertainty performance. §.§ Cross-lingual Evaluation The results of cross-lingual transfer in MultiCoNER are shown in Table <ref> with English as the source language. MR performs better in uncertainty performance for the ID situation. In contrast to the calibration and uncertainty performance in the cross-genre setting, both MR and SR show better calibration and uncertainty in the OOD setting. In <cit.>, the result shows that the larger the linguistic distance <cit.>, the more lenient the calibration and uncertainty estimation tends to be, and similar trends are obtained in this experiment. Unlike the discussion in Section <ref>, the uncertainty performance by data augmentation is also good for OOD in cross-lingual setting because the areas where only target set exist is limited in MultiCoNER (illustrated in Appendix <ref>). On the other hand, MELM, which tends to show excellent calibration performance in cross-genre calibration, does not show good performance in cross-lingual settings. The amount of data for each language in the CC100 <cit.> dataset used to train the base model, mDeBERTaV3, was highest for English, followed by German, Spanish, Hindi, and Bangla which correlates with the trend of the calibration results. Moreover, as mentioned in <cit.>, languages that tend to have vocabulary overlap between languages in tokenization perform better in cross-lingual transfer in NER. Similar effects may be observed in confidence calibration and uncertainty estimation. §.§ Detailed Analyzes We investigate the effects of entity overlap rates and the perplexity of the generated sentences to gain a better understanding of the confidence calibration and uncertainty estimation performance of data augmentation methods for NER. We also investigate the impact of data augmentation size in several settings. §.§.§ Impact of Augmentation Size To investigate the impact of data augmentation size on calibration and uncertainty performance, we analyze the trend of evaluation metrics in 𝚝𝚌 → 𝚖𝚣 scenario of OntoNotes 5.0 and 𝙴𝙽 → 𝙴𝚂 scenario of MultiCoNER, respectively. 
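For reference, the mention-replacement augmentation whose size is varied in this analysis can be sketched as follows. The (tokens, spans) representation, the helper names, and the toy data are assumptions of this sketch rather than the authors' implementation.

import random

def mention_replacement(tokens, spans, mentions_by_label, rng):
    # Replace every entity mention with another mention of the same label,
    # sampled from the training-set inventory (MR-style augmentation).
    new_tokens, new_spans, cursor = [], [], 0
    for start, end, label in sorted(spans):
        new_tokens.extend(tokens[cursor:start])
        repl = rng.choice(mentions_by_label[label])
        new_spans.append((len(new_tokens), len(new_tokens) + len(repl), label))
        new_tokens.extend(repl)
        cursor = end
    new_tokens.extend(tokens[cursor:])
    return new_tokens, new_spans

def augment(dataset, mentions_by_label, multiplier=1, seed=0):
    # Produce `multiplier` MR-augmented copies per training sentence.
    rng = random.Random(seed)
    out = []
    for tokens, spans in dataset:
        for _ in range(multiplier):
            out.append(mention_replacement(tokens, spans, mentions_by_label, rng))
    return out

# Toy example (illustrative data only).
dataset = [(["John", "Smith", "visited", "Berlin", "."],
            [(0, 2, "PER"), (3, 4, "LOC")])]
mentions = {"PER": [["Maria", "Lopez"], ["Ken", "Sato"]], "LOC": [["Tokyo"], ["Paris"]]}
for toks, sps in augment(dataset, mentions, multiplier=2):
    print(toks, sps)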
Figure <ref> and <ref> illustrate the results in the ID and OOD settings, respectively. In many cases, MR improves the calibration and uncertainty performance by increasing data. SR consistently improves as the dataset size doubles, whereas LwTR demonstrates only marginal improvement or even worsens as the dataset size increases. Finally, MELM improves further for OntoNotes 5.0 𝚝𝚌, which shows excellent performance, and deteriorates further for MultiCoNER 𝙴𝙽, which shows poor performance. These results show that the calibration algorithm with the best performance for cross-domain transfers is likely to have better performance as the augmentation size is increased. On the other hand, increasing the augmentation size in MR improves the calibration and uncertainty performance compared to similar other data augmentation methods. Since data augmentation by MR and MELM is performed only on the entity region, the uncertainty estimation performance is relatively less adversely affected by increasing the data augmentation size. On the other hand, in SR and LwTR, data augmentation that replaces tokens may often inject tokens with inappropriate parts of speech for that sentence, so increasing the data augmentation size often leads to a degradation of uncertainty estimation performance. §.§.§ Impact of Perplexities for Augmented Sentences To investigate the influence of replacement units on data augmentation for NER as mentioned in Section <ref>, we measured the perplexity of the augmented sentences using GPT-2 <cit.>. The average perplexities of the augmented sentences and the average perplexities of the original training set for each dataset are shown in Table <ref>. Lower perplexity from augmented sentences tends to improve calibration performance and uncertainty performance. Consistently, the average perplexity of the sentences generated by MR is the lowest. Since MR performs substitutions on an entity-by-entity basis and does not affect the structure of the sentence itself, it has the lowest perplexity among the data augmentation methods in NER. MELM has the second lowest perplexity after MR, and may be adversely affected by generated entities that are adapted to the context but not actually present. § CONCLUSION In this paper, we investigated the impact of data augmentation on the confidence calibration and uncertainty estimation in NER in terms of genre and language, using several metrics. First, we find that MELM, MR, and SR lead to better calibration and uncertainty performance in the ID setting consistently. On the other hand, in the OOD setting, uncertainty estimation by data augmentation is less effective, especially when the target domain is far from the source domain. Second, our results suggest that the lower the perplexity of the augmented data, as in MR, the further better the calibration and uncertainty performance as the augmentation size is increased. Data augmentation methods for NER do not require changes to the model structure and only require more data to improve entity-level calibration and performance without the need to change the model structure. Our findings indicate the effectiveness of uncertainty estimation through data augmentation for NER, and will be expected to stimulate future research based on their limitations. 
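As a side note to the perplexity analysis reported above, the GPT-2 measurement can be sketched with Hugging Face transformers as follows; the example sentences are illustrative, and averaging per-sentence perplexities over the augmented set is an assumption of this sketch.

import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def sentence_perplexity(text):
    # Perplexity = exp(mean next-token negative log-likelihood) under GPT-2.
    enc = tokenizer(text, return_tensors="pt")
    loss = model(**enc, labels=enc["input_ids"]).loss
    return math.exp(loss.item())

augmented = [
    "Maria Lopez visited Berlin last summer.",        # e.g. an MR-style replacement
    "John refrigerator visited Berlin last summer.",  # e.g. a LwTR-style replacement
]
scores = [sentence_perplexity(s) for s in augmented]
print(scores, sum(scores) / len(scores))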
§ LIMITATIONS While this experiment provided valuable insights into the impact of data augmentation on confidence calibration and uncertainty estimation in NER across different genres and languages, there are several limitations that should be acknowledged. First, due to resource limitations, the experiment was limited to evaluation with English as the source language. To effectively investigate the calibration and uncertainty of zero-shot cross-lingual transfer, it is important to expand the investigation to include a wider range of languages as the source language. Therefore, future research should prioritize the investigation of calibration and uncertainty performance using different languages as the source for zero-shot cross-lingual transfer. Second, as mentioned in Section <ref>, regarding the calibration and uncertainty evaluation policy, we simply evaluated an entity span as a single data instance, but a rigorous evaluation method that performs evaluation while considering multiple span candidates has been proposed <cit.>. Establishing span-level NER calibration evaluation methods that can efficiently and comprehensively evaluate calibration and uncertainty for entity types for datasets with many entity types and long sequence lengths is a topic for future research. Lastly, we broadly evaluated the calibration and uncertainty performance in both cross-genre and cross-lingual settings on data augmentation for NER, but only using sequence labeling-based methods. Recently, other paradigms in NER, such as span-based methods <cit.> and Seq2Seq (sequence-to-sequence)-based methods <cit.>, have been proposed. In the future, the calibration or uncertainty performance of these methods could be evaluated. § ETHICAL CONSIDERATIONS In this study, we used existing datasets that have cleared ethical issues. Furthermore, the data augmentation methods we used for uncertainty estimation are substitution-based methods except for MELM, and MELM generated entities from existing datasets that have no ethical issues. Therefore, it is unlikely that toxic sentences would be generated. § ACKNOWLEDGEMENTS The authors also acknowledge the Nara Institute of Science and Technology's HPC resources made available for conducting the research reported in this paper. § SELECTION OF EVALUATION METRICS We use multiple calibration and uncertainty metrics to evaluate a wide range of calibrations or uncertainties. As mentioned in <cit.>, we know which uncertainty estimation metrics favor which model in different uncertainty use cases. In accordance with <cit.>, consider a situation where, in a three-class classification task, Model A yields a confidence of 0.95 with 95 % accuracy, and Model B yields a confidence of 0.6 for correct answers and 0.4 for incorrect answers with 40% accuracy. ECE and AURC prefer Model A. However, when E-AURC (× 1e3) is evaluated with 100 data instances for each case of Model A and Model B as a toy case, the result is 69.85 for Model A and 0 for the Model B case. Therefore, E-AURC prefers Model B. Similar calculations show that AUPRC also prefers Model B. 
In situations where it is not possible to refrain from making predictions, an evaluation metric that prefers Model A is desirable, while in situations where one wants to make selective predictions <cit.>, an evaluation metric that prefers Model B is better. Figures <ref> and <ref> show Kendall's τ between the rankings of the average scores of the algorithms under each evaluation metric, for cross-genre evaluation in OntoNotes 5.0 and cross-lingual evaluation in the MultiCoNER dataset. In many cases, ECE correlates strongly with MCE, but its correlation with E-AURC is weaker. Moreover, E-AURC tends to correlate more strongly with AUPRC than with ECE or MCE, reflecting well the uncertainty or calibration settings that each evaluation metric prefers, as described earlier.

§ INFERENCE TIME

§ DETAILS OF EXISTING CALIBRATION METHODS
In this section, we describe the popular baseline methods for confidence calibration. We use the following notation: z_i denotes the logit for class i, p_i denotes the calibrated probability for class i, y_i denotes the label for class i, and K denotes the number of classes.

§.§ Temperature Scaling (TS)
TS <cit.> is a post-processing technique for calibrating the confidence scores output by a neural network. It scales the logits (i.e., the outputs of the final layer before the softmax) by a temperature parameter T before applying the softmax function to obtain the calibrated probabilities. The softmax function takes a vector of logits z and returns a distribution p: p_i = exp(z_i/T)/∑_j=1^K exp(z_j/T) .

§.§ Label Smoothing (LS)
LS <cit.> is a regularization technique used to improve the calibration and generalization performance of the model. By introducing a small degree of uncertainty in the target labels during training, label smoothing mitigates overfitting and encourages the model to learn more robust and accurate representations, ultimately contributing to improved overall performance on the task at hand. LS introduces a smoothing parameter ϵ and a smoothed label y^LS_i as follows: y^LS_i = y_i (1 - ϵ) + ϵ/K.

§.§ Monte-Carlo Dropout (MC Dropout)
MC Dropout is a regularization technique that can be used for uncertainty estimation in neural networks <cit.>. In this method, we run the model M times with different dropout masks and take the average softmax output over all the runs (we use M = 20). The procedure can be represented by the following formula: p_i = 1/M∑_t=1^M exp(z_i^(t))/∑_j=1^K exp(z_j^(t)).

§ F1 SCORES
Tables <ref> and <ref> show F1 scores. Note that in many cases data augmentation methods do not degrade predictive performance itself, but MELM significantly degrades predictive performance in some cases, especially when the source domains are 𝚗𝚠 and 𝚝𝚌. Considering Sections <ref> and <ref>, MR improves calibration and uncertainty performance in many cases without degrading predictive performance.

§ MORE RESULTS ABOUT TEST SET DUPLICATION
Table <ref> shows, for each data augmentation method except MR, the percentage of augmented entities that newly overlap with each target domain's test set, where the source domains are 𝚋𝚌, 𝚋𝚗, and 𝚗𝚠. In all cases there is only a small increase. These results, together with the fact that MR (which shows good calibration and uncertainty performance in Sections <ref> and <ref>) does not add new entities to the training data, suggest that the entity overlap rate does not affect calibration and uncertainty estimation.
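The three calibration baselines detailed in this appendix translate directly into a short PyTorch sketch; the tiny classifier and the toy inputs are placeholders, and keeping the model in train() mode is simply one way to leave dropout active for the stochastic passes.

import torch
import torch.nn.functional as F

def temperature_scale(logits, T):
    # TS: softmax(z / T).
    return F.softmax(logits / T, dim=-1)

def smooth_labels(one_hot, eps):
    # LS: y * (1 - eps) + eps / K.
    K = one_hot.size(-1)
    return one_hot * (1.0 - eps) + eps / K

@torch.no_grad()
def mc_dropout_predict(model, x, M=20):
    # MC Dropout: average softmax outputs over M stochastic forward passes.
    model.train()  # keep dropout active at inference time
    probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(M)])
    return probs.mean(dim=0)

# Toy usage with a tiny classifier (illustrative only).
model = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.ReLU(),
                            torch.nn.Dropout(0.1), torch.nn.Linear(16, 3))
x = torch.randn(4, 8)
print(temperature_scale(model(x), T=1.5).shape)
print(smooth_labels(F.one_hot(torch.tensor([0, 2, 1, 0]), 3).float(), eps=0.1))
print(mc_dropout_predict(model, x, M=20).shape)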
§ IMPACT OF NEW ENTITIES VIA DATA AUGMENTATION
To investigate the impact of the new entities added by data augmentation methods on calibration performance, we measured the percentage of new entities added to the training data and the percentage of those new entities that overlap with the test set. Table <ref> shows the percentage of new entities introduced by data augmentation with the train set as the source domain in each dataset. In all datasets, MELM introduces the largest number of new entities into the augmented data. On the other hand, MR, whose calibration performance is second only to MELM, does not increase the number of new entities because the replacement is based on entities already present in the original training data. Furthermore, the generated entities have little overlap with the target domain, as shown in Table <ref>. Therefore, the new entities introduced by data augmentation methods for NER are likely to have no effect on calibration or uncertainty performance.

§ T-SNE PLOT FOR MULTICONER DATASET
To give an overview of the ID and OOD data instances in the MultiCoNER dataset, a t-SNE plot is shown in Figure <ref>.

§ RESULTS FOR LOW-RESOURCE LANGUAGE
To investigate uncertainty estimation performance for a low-resource language, we additionally show the results on 10,000 examples of Bangla (𝙱𝙽) from the MultiCoNER dataset in Table <ref>, with 𝙴𝙽 as the source language. The results show that data augmentation is also effective for uncertainty estimation in a low-resource language.

§ LICENSES OF DATASETS
OntoNotes 5.0 can be used for research purposes as described in <https://catalog.ldc.upenn.edu/LDC2013T19>. The MultiCoNER dataset is licensed under CC BY 4.0 as described in <https://aws.amazon.com/marketplace/pp/prodview-cdhrtt7vq4hf4>.
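The new-entity and overlap-rate bookkeeping used in the appendices above reduces to simple set arithmetic; the entity inventories below are toy placeholders, and lower-cased surface-form matching is an assumption of this sketch.

# Toy entity inventories (lower-cased surface forms); in practice these would be
# collected from the gold spans of each split and of the augmented training data.
train_entities = {"john smith", "berlin", "acme corp"}
augmented_entities = {"john smith", "maria lopez", "acme corp", "paris"}
target_test_entities = {"paris", "tokyo", "maria lopez"}

new_entities = augmented_entities - train_entities
new_entity_rate = len(new_entities) / len(augmented_entities)
new_overlap_with_test = len(new_entities & target_test_entities) / max(len(new_entities), 1)

print(f"new entities introduced by augmentation: {new_entity_rate:.1%}")
print(f"new entities that also occur in the target test set: {new_overlap_with_test:.1%}")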
http://arxiv.org/abs/2407.03106v1
20240703134420
Anti-Collapse Loss for Deep Metric Learning Based on Coding Rate Metric
[ "Xiruo Jiang", "Yazhou Yao", "Xili Dai", "Fumin Shen", "Xian-Sheng Hua", "Heng-Tao Shen" ]
cs.CV
[ "cs.CV" ]
Anti-Collapse Loss for Deep Metric Learning Anti-Collapse Loss for Deep Metric Learning Based on Coding Rate Metric Xiruo Jiang, Yazhou Yao, Xili Dai, Fumin Shen, Liqiang Nie, Heng-Tao Shen X. Jiang, Y. Yao are with the School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China. X. Dai is with the Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China. F. Shen and H. Shen are with the School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China. L. Nie is with the School of Computer Science and Technology, Harbin Institute of Technology (Shenzhen), Shenzhen, China. July 8, 2024 =================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT Deep metric learning (DML) aims to learn a discriminative high-dimensional embedding space for downstream tasks like classification, clustering, and retrieval. Prior literature predominantly focuses on pair-based and proxy-based methods to maximize inter-class discrepancy and minimize intra-class diversity. However, these methods tend to suffer from the collapse of the embedding space due to their over-reliance on label information. This leads to sub-optimal feature representation and inferior model performance. To maintain the structure of embedding space and avoid feature collapse, we propose a novel loss function called Anti-Collapse Loss. Specifically, our proposed loss primarily draws inspiration from the principle of Maximal Coding Rate Reduction. It promotes the sparseness of feature clusters in the embedding space to prevent collapse by maximizing the average coding rate of sample features or class proxies. Moreover, we integrate our proposed loss with pair-based and proxy-based methods, resulting in notable performance improvement. Comprehensive experiments on benchmark datasets demonstrate that our proposed method outperforms existing state-of-the-art methods. Extensive ablation studies verify the effectiveness of our method in preventing embedding space collapse and promoting generalization performance. Our code has been available at: <https://github.com/NUST-Machine-Intelligence-Laboratory/Anti-Collapse-Loss>. Deep Metric Learning, Image Retrieval, Embedding Space, Coding Rate. § INTRODUCTION Learning compact and generalizable representations has been one of the most critical steps in various machine learning tasks, including image retrieval <cit.>, face recognition <cit.>, and image classification <cit.>, semantic segmentation <cit.>, few-shot image classification <cit.>. Focusing on this goal, deep metric learning aims to learn a discriminative embedding space in which distances between semantically similar samples are close while those between dissimilar ones are far apart <cit.>. Existing methods can be mainly divided into two groups: pair-based and proxy-based. Pair-based methods seek to learn a better embedding space through relations between sample pairs. 
For example, <cit.> proposes to construct an embedding space by employing contrastive loss to enlarge inter-class distances and shrink intra-class distances. Triplet-loss-based methods <cit.> propose to adopt sample triplets to assist in metric learning. By employing the triplet of anchor, positive, and negative data points for embedding space optimization, distances between positive pairs are strictly constrained to be smaller than those between negative pairs, thereby improving the generalization performance. However, these methods tend to generate numerous sample pairs or triplets, posing a computation challenge in practical applications. Another line of research focuses on proxy-based methods for deep metric learning <cit.>. These methods propose to optimize the embedding space by maximizing similarities between sample embeddings and their associated class proxies. Contrary to pair-based methods, proxy-based ones only require class proxy-sample units, whose number is far smaller than sample pairs or triplets, thus resulting in significantly lower computational overhead. Moreover, the elimination of redundant information enhances the generalization performance. However, similar to pair-based approaches, proxy-based works devote more attention to constructing discriminative embedding space based on label information, leading to reliance on sample annotations. This issue is prone to causing a collapse in the embedding space during the training process, thereby resulting in sub-optimal model performance. To this end, we propose a simple yet effective method, dubbed Anti-Collapse Loss, to provide guidance for maintaining the structure of the embedding space. Our method is inspired by Information Theory <cit.> and Maximal Coding Rate Reduction <cit.>, aiming to reduce reliance on sample labels to avoid embedding space collapse. As shown in Fig. <ref>, our proposed loss can act as a flexible off-the-shelf module and be seamlessly integrated with existing pair-based and proxy-based methods. Specifically, we use the Anti-Collapse Loss to handle all the actual samples involved in training, maximizing the average coding rate of sample features within the dataset to prevent the collapse of the embedding space. However, this operation also introduces a challenging drawback of Maximal Coding Rate Reduction <cit.>. That is, this method requires solving large determinants to estimate covariance, leading to significant computational costs and thereby substantially reducing training efficiency. To solve the problem of excessive computational consumption, we eliminate the compression term with the highest computational consumption in MCR^2 <cit.>. We replace all data samples with class proxies, thus significantly reducing the computation required to solve determinant matrices. In this way, our Anti-Collapse Loss achieves structural maintenance of the embedding space by maximizing the average coding rate of all class proxies. Resorting to our proposed loss, learned sample features are distributed reasonably in the embedding space, ensuring the sparseness of feature clusters and thus avoiding space collapse. Accordingly, samples of different categories are assured to be maximally disjointed in the embedding space, leading to stronger discriminability. Simultaneously, the embedded space maintained by our proposed Anti-Collapse Loss can effectively avoid overfitting issues, thereby achieving improved generalization performance. 
Our contributions can be summarized as follows: (1) We propose a novel loss function, dubbed Anti-Collapse Loss, to prevent the collapse in the process of learning discriminative feature embeddings for conventional DML algorithms. (2) Our proposed loss can be integrated with DML methods and address the computational challenge by eliminating the intra-class average coding rate term and replacing data samples with class proxies. (3) Our proposed loss can advocate the sparseness of feature clusters in embedding space, thereby preventing space collapse and enhancing model generalization performance. (4) Extensive experiments and ablation studies for DML-based image retrieval tasks on benchmark datasets demonstrate the superiority of our proposed approach. The structure of this paper is as follows: In Section <ref>, we conduct a detailed and comprehensive review of related studies; Section <ref> provides a detailed introduction to our method, while Section <ref> showcases the qualitative and quantitative retrieval results of our method on multiple image retrieval datasets, along with presenting ablation studies. Finally, in Section <ref>, we conclude our work. § RELATED WORK §.§ Manifold Learning Manifold learning aims to discover and model low-dimensional manifold structures from high-dimensional data. When manifolds are linear, subspace clustering methods <cit.> can be used to cluster the manifolds. There are various methods for handling nonlinear multi-manifold clustering problems, among which the Maximum Code Rate Reduction (MCR^2) principle <cit.> is one of the most representative works in recent years. MCR^2 estimates the overall compactness of finite samples using the distortion rate concept in information theory. It optimizes the embedding space by leveraging the difference between the overall compactness scale and the sample-averaged coding rate which describes the intra-class compactness of specific categories. Our proposed method draws inspiration from the MCR^2 principle and utilizes the global average coding rate term from this algorithm to optimize the embedding space. Our goal is to optimize the embedding space structure, reduce the influence of class labels on the embedding space, and make our method more generalizable. Since our proposed method already has a deep metric learning clustering term, to avoid duplicating the functionality of modules, we remove the compression term from MCR^2. Meanwhile, we also notice the significant computational cost associated with estimating the covariance in MCR^2. This cost arises from the need to calculate the determinant of large matrices using all samples. Our proposed Anti-Collapse Loss can effectively address this issue by leveraging proxies. §.§ Deep Metric Learning The purpose of metric learning is to accurately measure the similarity between samples in a high-dimensional data space, thereby creating a distance relationship where samples of the same class are closer while samples of different classes are farther apart. This inter-sample relationship is important for tasks like classification and clustering. Therefore, metric learning methods have practical value in tasks such as image retrieval and classification. However, early metric learning <cit.> has some drawbacks. Firstly, it often relies on manually designed features, which may be challenging to capture the advanced representations of data. When dealing with complex and high-dimensional data, manually selecting appropriate features can become difficult. 
Secondly, some traditional metric learning methods may exhibit lower efficiency when handling large-scale data, especially in scenarios requiring pairwise comparisons, leading to a significant increase in computational complexity. Additionally, some traditional metric learning methods may lack good generalization ability in new domains or tasks because they overly focus on specific data representations and metric methods. With the flourishing development of deep learning <cit.> and the rapid progress of computational acceleration hardware, metric learning is gradually transcending traditional limitations. Deep learning has significantly benefited metric learning by efficiently modeling complex data relationships and enabling end-to-end feature representation learning. Convolutional Neural Networks <cit.> have improved the ability of models to learn hierarchical features, making them better suited to handle intricate data structures. With end-to-end learning, models can automatically learn features without the need for manual feature design, thus significantly increasing their expressive capability. At the same time, the rapid development of computational acceleration hardware has greatly accelerated the training process on large datasets, thereby eliminating the computational bottlenecks. The parallel computing capabilities of Graphics Processing Units (GPUs) have notably accelerated the training of deep learning models, making it easier for these models to handle complex nonlinear data relationships. By introducing deep learning, metric learning can achieve end-to-end learning of data representations, reduce dependence on manually designed features, enhance the model's generalization ability, and better adapt to large-scale datasets. Deep metric learning aims to learn an embedding space using deep neural networks, where complex high-dimensional data is mapped into low-dimensional features <cit.>. Similar samples are closer to each other in this space, while different class features are apart. Most methods in this field can be divided into two groups: pair-based <cit.> and proxy-based <cit.>. Pair-based methods directly optimize the distance or similarity between samples. The Siamese network <cit.> is among the early works introducing deep learning into metric learning. Most deep metric learning loss functions are inspired by contrastive <cit.> and triplet loss <cit.>. MS Loss <cit.> combines multiple types of similarities, considering both intra-class and inter-class relationships more comprehensively. On the other hand, proxy-based methods utilize the metric relationship between category proxies and samples for classification and clustering. ProxyNCA <cit.> and ProxyAnchor <cit.> are representative methods in this category. The advantage of proxy-based methods lies in their computational efficiency and scalability. Compared to pair-based methods that require pair comparisons, the complexity of distance computation is significantly reduced by using proxies as representatives of samples. To address the collapse issue in existing metric learning methods caused by reliance on label information, our proposed Anti-Collapse Loss optimizes the structure of the embedding space by maximizing the coding rates of all samples or proxies. Additionally, we incorporate the concept of proxies into the coding rate function to enhance the model generalization ability while maintaining a lightweight property for the coding rate function. 
§ THE PROPOSED METHOD Existing DML methods emphasize the construction of embedding spaces. However, they pay less attention to accurately representing the inherent geometric or statistical characteristics of the sample feature distribution within the embedding space. This leads to a situation where existing methods excessively rely on labelled data without using sufficient knowledge of the underlying data distribution. Such reliance on labelled data negatively impacts model generalization ability. We design a new loss function based on the coding rate to prevent the collapse of the embedding space. Specifically, our proposed Anti-Collapse Loss significantly enhances the coding rate differences between the entire dataset and clusters of different classes while reducing the explicit dependence of the model on labels. As shown in Fig. <ref>, we present the Anti-Collapse Loss's functionality in maximizing sample and proxy coding rates during the training process. Additionally, we demonstrate the space collapse issue caused by existing pair-based and proxy-based deep metric learning algorithms. We assume that the backbone network ℬ(𝒮,θ) extracts feature x_i^* from sample s_i∈ℝ^D, where i∈{1,2,...,N}, 𝒮={s_1,s_2,...,s_N} represents the dataset with 𝒮∈ℝ^N × D, θ denotes the network parameters, and N represents the number of samples. Each sample s_i corresponds to a class label y_i ∈𝒴, where 𝒴={y_1,y_2,...,y_m}. Next, we project the feature x_i^* onto a unit hypersphere embedding space with normalization. The normalized features are represented as 𝒳={x_1,x_2,...,x_N}, where 𝒳∈ℝ^N × d and d represents the dimensionality of the features. §.§ The Rate Distortion Function The optimal coding methods for independently and identically distributed (i.i.d.) samples with a known probability distribution p(𝒳) have been extensively analyzed and explored in information theory <cit.>. However, the enrichment and complexity of data in classification tasks reduce the applicability of optimal coding methods based on the probability distribution p(𝒳). For example, most classification tasks based on deep convolutional networks currently rely on features 𝒳 from limited training samples with an unknown probability distribution p(𝒳). Nevertheless, recent works find that certain fundamental concepts used to derive the optimal coding rate can still be applied to estimate the coding rate. Ma proposed the nonasymptotic rate distortion in MCR^2 <cit.> to accurately estimate the number of bits required for coding finite samples from class-subspace distributions. For our work, in the practical training task of image retrieval, according to nonasymptotic rate distortion, we can obtain the equation for the average Gaussian coding rate of features as follows: R(𝒳_bs, ε) = 1/2log( I + d/n_bsε^2𝒳_bs^⊤𝒳_bs), 𝒳_bs^⊤𝒳_bs∈ℝ^d × d, where n_bs represents the batch size, ε represents precision and 𝒳_bs represents the features of a current batch in the iteration. The notation log represents the use of the logarithm of the determinant as a smooth approximation for rank in solving rank minimization problems. It guarantees convergence to a local minimum <cit.>. Then, according to Eq.(<ref>) and the Minimum Description Length criterion <cit.>, we can estimate the total number of bits required for features learned by a deep network with a quantity of N and an embedding dimension of d, i.e., the rate-distortion coding rate Γ(𝒳, ε): Γ(𝒳, ε) = (N+d)R(𝒳, ε) = N+d/2log( I + d/Nε^2𝒳^⊤𝒳). 
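A minimal PyTorch sketch of these rate-distortion quantities is given below; the default eps = 0.5 mirrors the precision used in the experimental settings reported later, and the random normalized batch is a placeholder for real embeddings.

import torch
import torch.nn.functional as F

def average_coding_rate(X, eps=0.5):
    # R(X, eps) = 0.5 * logdet(I + d / (n * eps^2) * X^T X) for an (n, d) feature batch.
    n, d = X.shape
    I = torch.eye(d, dtype=X.dtype)
    return 0.5 * torch.logdet(I + (d / (n * eps ** 2)) * X.t() @ X)

def total_coding_length(X, eps=0.5):
    # Gamma(X, eps) = (N + d) * R(X, eps): rate-distortion estimate of the number
    # of bits needed to code the N features up to precision eps.
    N, d = X.shape
    return (N + d) * average_coding_rate(X, eps)

X = F.normalize(torch.randn(90, 512), dim=1)  # batch of L2-normalized embeddings
print(average_coding_rate(X).item(), total_coding_length(X).item())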
According to the commutative property of the coding length function introduced in <cit.>, the matrices 𝒳_bs𝒳_bs^⊤ and 𝒳_bs^⊤𝒳_bs share the same nonzero eigenvalues, so the average coding length function can also be expressed as follows: R(𝒳_bs, ε) = 1/2log( I + d/n_bsε^2𝒳_bs𝒳_bs^⊤), 𝒳_bs𝒳_bs^⊤∈ℝ^n_bs× n_bs. R(𝒳, ε) can be used to assess the compactness of the embedding space during training. Our work focuses on utilizing this coding rate measure to avoid or mitigate the issue of collapsing volume in the embedding space during training. By referring to Eq.(<ref>), we can observe that when the sample features are normalized, 𝒳_bs𝒳_bs^⊤ can be represented as Sim (𝒳_bs,𝒳_bs) (abbreviated as Sim (𝒳_bs)). The cosine similarity Sim (𝒳_bs) is an important quantity in metric learning for representing the relationships between samples. Therefore, we can rewrite the equation as follows: R_pair = 1/2log( I + d/n_bsε^2 Sim(𝒳_bs) ), O(R_pair)= n_bs^2, where O(R_pair) represents the training complexity. This equation establishes a connection between the information-theoretic coding rate optimization problem and deep metric learning, allowing us to optimize the embedding space by controlling the coding rate. Based on this property, we design a new loss function: ℒ^pair( 𝒳_bs,ε) = -R_pair. This loss can not only be combined with supervised deep metric learning methods using pairs or sample tuples, but can also serve as an independent loss for unsupervised training on classification or clustering tasks.

§.§ Proxy-Based Anti-Collapse Loss
Eq.(<ref>) makes maximal use of sample information to compute the average coding rate of the samples. However, it also inherits one of the drawbacks of pair-based deep metric learning methods mentioned earlier: too many pairs are involved, resulting in redundant information and high computational complexity. In contrast, proxy-based methods leverage only a small number of class proxies and their relationships with samples, which leads to lower training complexity, better generalization, and faster convergence during the training process. For example, ProxyNCA <cit.> employs only one real sample feature as the anchor x_a and replaces the traditional positive sample x^+ and negative sample x^- of the triplet structure with a positive proxy p^+ and negative proxies p^-. The equation is as follows: ℒ_PNCA = - log( e^(Sim(x_a,p^+))/∑_p^-∈𝒫^- e^(Sim(x_a,p^-))). However, ProxyNCA has a significant problem in terms of tuple composition. The real sample feature x_a in the triplet (x_a, p^+, p^-) cannot establish a direct relationship between real samples, as pair- or tuple-based methods do. While this tuple structure provides better robustness against adversarial perturbations, it significantly reduces the amount of information the model can learn from real data. ProxyAnchor <cit.> addresses this issue by constructing triplet structures around proxy points in the opposite way to ProxyNCA. The equation for ProxyAnchor (PA) is: ℒ_PA = 1/|𝒫^+|∑_p ∈𝒫^+log( 1 + ∑_x ∈𝒳_bs, y_x=y_p e^-α· [Sim(x,p_a) - δ]) + 1/|𝒫|∑_p ∈𝒫log( 1 + ∑_x ∈𝒳_bs, y_x ≠ y_p e^α· [Sim(x,p_a) + δ]), where δ is a margin and α is a scaling factor. Inspired by proxy-based methods, we construct a new proxy-based loss for efficient coding rate calculation, significantly reducing the computational overhead.
The equation is as follows: R_proxy = 1/2log( I + d/n_pε^2 Sim(𝒫) ) , O(R_proxy(𝒫_bc)) =n_bc^2 or  O(R_proxy(𝒫_ac))=n_ac^2, where n_p represents the number of proxies, 𝒫_bc represents the corresponding proxy set of classes contained in the current mini-batch of data samples, and 𝒫_ac represents the proxies for all classes. n_bc and n_ac represent the number of proxies in two different proxy sets respectively. The proxy 𝒫 are artificially defined sets of features. Therefore, without establishing a connection with the feature points of the dataset's samples, they cannot directly influence the overall structure of the data. Hence, Eq.(<ref>) needs to be combined with a new function based on proxy for constructing a deep metric learning loss function. We thus define a Proxy-based Anti-Collapse Loss as follows: ℒ_AntiCo^proxy(𝒫, 𝒳,ε) = -R_proxy(𝒫,ε) + νℒ_proxy(𝒫,𝒳), where Anti-Collapse is abbreviated as AntiCo and ν represents the weight of the proxy-based loss ℒ_proxy (, ℒ_PNCA in Eq.(<ref>)). According to Eq.(<ref>), we divide ℒ_AntiCo^proxy into ℒ_AntiCo^proxy(All-Class) and ℒ_AntiCo^proxy(Mini-Batch) based on our selection of proxies. By comparing the training complexities in Eq.(<ref>) and Eq.(<ref>), we can observe that our proposed Proxy-based Anti-Collapse Loss greatly reduces the computational complexity of algorithms, especially when using only the number of sample classes in the mini-batch. In addition, the proxies processed by Eq.(<ref>) can maintain a huge inter-class distance gap in the embedding space, which allows the proxy to have better guiding ability for sample classification. Compared to the rate reduction function in MCR^2 and existing pair-based deep metric learning methods, the Anti-Collapse Loss demonstrates significant superiority. It leverages the characteristics of proxies, significantly reducing computational complexity by using sample-proxy and proxy-proxy pairs as training units. Additionally, the Anti-Collapse Loss enhances the guidance capabilities of proxies in existing proxy-based methods by boosting the average encoding rate. These proxies effectively maintain the structure of the embedding space, enhancing discriminability among samples from different classes and consequently improving image retrieval performance. § EXPERIMENTS §.§ Experimental Settings Datasets Settings: We select three commonly used datasets for image retrieval experiments in metric learning: CUB200-2011 (CUB200) <cit.>, Cars196 <cit.>, and Stanford Online Products (SOP) <cit.>. For CUB200 dataset, we use the first 100 classes, which include 5,864 images for training, while the remaining 100 classes, with 5,924 images, are reserved for testing. We divide the Cars196 dataset into two parts: 8,054 images from the first 98 classes are used for training, and 8,131 images from the remaining 98 classes are used for testing. For Stanford Online Products dataset, we allot 59,551 images derived from 11,318 classes for training, whereas the remaining 60,502 images from 11,316 classes are utilized for testing. Image Preprocessing: To ensure fairness comparison, we preprocess the training set images based on the experimental settings of the majority of existing deep metric learning research (, <cit.>). Initially, all input images are resized to dimensions of 256×256 and horizontally flipped. Subsequently, these images are randomly cropped to a size of 224×224. Similarly, following previous works, the test images are processed using a central cropping operation. 
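To make the construction concrete, a sketch of the proxy-based Anti-Collapse objective is given below. It is a simplified stand-in rather than the authors' implementation: the DML term L_proxy is approximated here by a ProxyNCA-style cross-entropy over cosine similarities (the paper pairs the anti-collapse term with ProxyAnchor), and eps = 0.5 and nu = 0.0035 follow values reported in the experimental section.

import torch
import torch.nn.functional as F

def coding_rate(Z, eps=0.5):
    # R(Z, eps) = 0.5 * logdet(I + d / (n * eps^2) * Z Z^T) for n L2-normalized rows.
    n, d = Z.shape
    I = torch.eye(n, device=Z.device, dtype=Z.dtype)
    return 0.5 * torch.logdet(I + (d / (n * eps ** 2)) * Z @ Z.t())

def anti_collapse_proxy_loss(x, labels, proxies, eps=0.5, nu=0.0035):
    # -R_proxy(P) + nu * L_proxy, with a ProxyNCA-style cross-entropy standing in
    # for the proxy-based DML term used in the paper.
    x = F.normalize(x, dim=1)
    p = F.normalize(proxies, dim=1)
    proxy_term = F.cross_entropy(x @ p.t(), labels)   # stand-in L_proxy
    return -coding_rate(p, eps) + nu * proxy_term

# Toy usage: batch of 90 embeddings (dim 512), 10 classes with one proxy each.
emb = torch.randn(90, 512, requires_grad=True)
labels = torch.randint(0, 10, (90,))
proxies = torch.nn.Parameter(torch.randn(10, 512))
loss = anti_collapse_proxy_loss(emb, labels, proxies)
loss.backward()
print(loss.item())

# The pair-based variant simply negates the coding rate of the sample batch:
pair_anti_collapse = -coding_rate(F.normalize(emb.detach(), dim=1))
print(pair_anti_collapse.item())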
Proxies Settings: Based on existing proxy-based deep metric learning methods <cit.>, we allocate one proxy per semantic class in the dataset, and initialize the proxies using a normal distribution. Hyperparameter Settings: In the experiments, the hyperparameters in Eq.(<ref>) are set to ε=0.5, ν∈[0.001,0.1]. The hyperparameters in the ℒ_proxy are set to δ=0.1, α=32, by following <cit.>. Hardware Configuration: All experiments are conducted on two 24GB NVIDIA GeForce RTX 3090 GPUs. Backbone and Parameters: ResNet50 (R50) <cit.> and BN-Inception (IBN) <cit.> have been commonly used as backbone networks in deep metric learning. This work also selects these two networks as the backbone. Both networks are loaded with pretrained model parameters trained on ImageNet <cit.>. We can obtain different embedding dimensions for sample features 𝒳 by varying the size of the last fully connected layer in the backbone network. We use Adam as the optimizer for the image retrieval experiments to train the backbone network, with a learning rate of 10^-5. We employ a large learning rate multiplication strategy for training the proxies. Except for the experiments comparing image retrieval performance under different batch sizes, the batch size for each experiment is set to 90. Evaluation Criteria: To measure the performance of Anti-Collapse Loss in image retrieval tasks, we evaluate the method using Recall@K(%) and Normalized Mutual Information (NMI) score. Recall@K(%) refers to the percentage of images that retrieve at least one correctly matched sample from the top K nearest neighbors, while the NMI score is obtained by computing the ratio of mutual information to the average entropy between clustering results and ground truth labels. §.§ Experimental Results We select a variety of mapping dimensions for testing, allowing us to conduct fair comparisons with other methods. The experimental results are present in Table <ref> and Table <ref>. It's worth mentioning that the use of dual pooling operation in the backbone network is quite common in image retrieval tasks based on deep metric learning, but some works do not explicitly state this. Hence, we could not add the ⋆ mark to some of the methods. From Table <ref> and Table <ref>, we can observe that the Anti-Collapse Loss achieves the best recall rate (Recall@1(%)) and normalized mutual information (NMI) on each dataset. We first analyze the part of the experiment with ResNet50 acting as the backbone network. When the feature dimension is 512, the recall results (%) of Anti-Collapse Loss are 71.7 VS 70.6 (+1.1) on CUB200, 90.5 VS 89.6 (+0.9) on Cars196, and 81.2 VS 80.4 (+0.8) on SOP. Under the setting where the feature dimension is 128, our method results are 68.3 VS 67.3 (+1.0) on CUB200, 87.1 VS 83.5 (+2.5) on Cars196, and 79.8 VS 77.2 (+2.6) on SOP. When the network is BN-Inception, and the feature dimension is 512, our experimental results on CUB200, Cars196, and SOP are 70.8 VS 69.6 (+1.2), 89.5 VS 90.3 (-0.8), and 80.1 VS 80.1, respectively. The above descriptions are the experimental results under the experimental settings that are more frequently used in existing deep metric learning methods. Besides, we can also notice the competitive image retrieval performance of Anti-Collapse Loss under other settings in Table <ref> and Table <ref>. §.§ Anti-Collapse and Convergence Performances In addition to demonstrating the performance of our method in image retrieval, we also verify its ability to optimize the embedding space. 
To this end, we utilize three code rate metrics to evaluate the performance of the Anti-Collapse loss, including the global Gaussian coding rate R_global <cit.>, the intra-class coding rate R_intra, and the proxy coding rate R_proxy. Additionally, we also use the embedding space density ρ_density = ρ_intra(d_pos)/ρ_inter(d_neg) <cit.> as one of our evaluation indicators. Here, ρ_intra represents the average intra-class distance, and ρ_inter represents the average inter-class distance. A larger value of ρ_density indicates a more sparse distribution of intra-class samples and a more compact distribution of inter-class samples, indicating better generalization performance and embedding space structure. It is important to note that ρ_density is a relative parameter with certain limitations. When the model is only loaded but not yet trained, dispersed samples can also result in a higher embedding space density. Therefore, we use the embedding space density ρ_density when achieving the best recall rate Recall@1 (%). From Table <ref>, we can observe that in both ProxyAnchor <cit.> and NIR <cit.> methods, the class proxies experience collapse in the embedding space, i.e., the coding rate of proxies decreases when achieving the maximum recall rate. In contrast, our Anti-Collapse method maintains a higher coding rate. To more comprehensively present the changes in the embedding space, we record the variations in the coding rate of the proxies throughout the entire training process. The experimental results are given in Fig. <ref>. By observing Fig. <ref>, we can notice that Anti-Collapse effectively prevents the collapse of proxies in the embedding space. It ensures that the coding rate of the proxy remains relatively stable throughout the entire training process, with a mean of 60.74 and a variance of 0.1. In contrast, the coding rates of the other two methods continuously decrease, indicating the collapse of the proxy set in the embedding space. Additionally, by observing the time it takes for each method to achieve optimal recall rate, we can see that Anti-Collapse achieves the best retrieval results as early as the 38th epoch, while ProxyAnchor Loss and NIR Loss achieve their best recall rates at the 83rd and 90th epochs, respectively. This experimental result demonstrates the excellent ability of Anti-Collapse to accelerate convergence. The experimental results of the distribution for similarities between samples are present in Fig. <ref>. From Fig. <ref>, we can notice that when achieving optimal retrieval performance, the Anti-Collapse loss exhibits a more compact distribution of intra-class and inter-class similarities than the other two methods. Fig. <ref> presents the similarity matrix of proxies. Specifically, Fig. <ref> (a) displays the similarity between proxies during initialization, while the other three sub-figures show the similarity between proxies for different methods when achieving maximum Recall@1. The color is darker in the similarity graphs of Fig. <ref> (b) and Fig. <ref> (c), indicating that the algorithms of NIR and PA have insufficient inter-class discrimination capability. Our Anti-Collapse Loss encourages proxies to be as orthogonal as possible in the embedding space (demonstrated in Fig. <ref> (d)), enabling our approach to achieve better classification capabilities. §.§ Transfer Performance Generalization performance is an important aspect of the ability of deep metric learning, which involves representation learning. 
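The embedding space density ρ_density used in the analysis above can be computed in a few lines of PyTorch; whether distances are Euclidean on the unit hypersphere (as assumed here) or cosine-based is not specified in the text, and the random batch is a placeholder for real embeddings and labels.

import torch
import torch.nn.functional as F

def embedding_space_density(emb, labels):
    # rho_density = (average intra-class pairwise distance) /
    #               (average inter-class pairwise distance)
    # over all sample pairs of an (n, d) batch of L2-normalized embeddings.
    emb = F.normalize(emb, dim=1)
    dist = torch.cdist(emb, emb)                        # pairwise Euclidean distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)   # same-class mask
    off_diag = ~torch.eye(len(labels), dtype=torch.bool)
    intra = dist[same & off_diag].mean()
    inter = dist[~same].mean()
    return (intra / inter).item()

emb = torch.randn(200, 512)
labels = torch.randint(0, 10, (200,))
print(embedding_space_density(emb, labels))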
One significant objective of our proposed Anti-Collapse Loss is to prevent the model from overfitting to the training set by maintaining the structure of the embedding space. To this end, we conduct generalization performance experiments on the Anti-Collapse Loss and compare it with some existing methods with good image retrieval performance. To be specific, we still choose the first 100 classes of the CUB200 dataset for this experiment as the training set. As for the test set, we select another commonly used fine-grained bird dataset called NABirds <cit.>. This dataset comprises a total of 48,562 images belonging to 555 bird species. We divide them into three test sets based on category indexes, each containing 200, 200, and 155 classes of bird images, respectively. In the experiment, we use ResNet50 as the backbone network for training, with an embedding dimension of 512 and a batch size of 90. As shown in Table <ref>, we observe that the Anti-Collapse Loss achieves the best retrieval performance on all three test sets. For the retrieval metric Recall@1(%), our proposed Anti-Collapse loss outperforms PA+NIR <cit.> by 1.2, 1.6, and 1.7, respectively. These results demonstrate that the Anti-Collapse Loss enables the model to achieve better generalization performance by continuously maintaining the structure of the embedding space during training. §.§ Ablation Studies Parameter Sensitivities: We investigate the impact of parameters ν and embedding dimension on the retrieval performance of Anti-Collapse Losses. Fig. <ref> in the left subtable demonstrates the influence of different values of ν on the image retrieval performance in the CUB200 dataset. Our algorithm achieves the highest recall rate of 71.7 when ν is set to 0.0035. Furthermore, by examining the right subtable in Fig. <ref>, we observe that as the dimension increases, the effect of retrieval performance gradually diminishes across all three datasets, with the algorithm's performance reaching a plateau at around 1024 dimensions. Different Anti-Collapse Loss Terms: For performance comparison among different Anti-Collapse Loss Terms, we use ResNet50 as the backbone and set the embedding dimension to 512. We record the results of Anti-Collapse Losses in the CUB200 and Cars196 datasets in Table <ref>. From Table <ref>, we can notice that when using only the pairwise Anti-Collapse Loss, the model's improvement in terms of R@1(%) on two datasets is as follows: 59.8 VS 53.3 (+6.5) and 46.2 VS 43.4 (+2.8). Regarding the proxy-based Anti-Collapse loss, we present the effectiveness of proxy-based loss when it is not used. Additionally, we present commonly used metrics for classification clustering analysis. By observing Table <ref>, we can find that Eq.(<ref>) possesses self-supervised training capability. Existing metric learning losses exhibit significant performance improvement when combined with it. By comparing the two proxy-based Anti-Collapse Loss approaches, we observe that the version utilizing only the labels corresponding to the samples in the mini-batch achieves the best performance among the three Anti-Collapse Loss methods. It achieves R@1(%) of 71.7 and 90.5 on CUB200 and Cars196 datasets, respectively. This also indicates that optimizing proxies in a more targeted manner can yield better gains in image retrieval performance. §.§ Integrated Vision-Language Models Experiment Recently, methods based on large pre-trained vision-language models have achieved remarkable results in various visual classification tasks. 
This part attempts to integrate large pre-trained vision-language models with our proposed Anti-Collapse Loss. Our experiments utilize pre-trained language models to convert class labels into language embeddings. By aligning visual and language similarity matrices, we guide the learning of visual embedding spaces to enhance the semantic consistency and generalization ability of deep metric learning. Specifically, we combine the class labels in the training set with a simple prompt template, “a photo of {label}" to describe each image class. These sentences are then mapped into the language embedding space using a large pre-trained language model. Consequently, each class in the training set is assigned a corresponding textual feature. We establish higher-level semantic relationships between samples in mini-batches by calculating a similarity matrix through their label text features. The size of this similarity matrix matches that of the visual similarity matrix, which is n_bs^2. We then optimize the KL-Divergence between the language and visual similarity matrices to leverage language similarities for guiding visual metric learning. We employ CLIP's <cit.> text encoder (ViT-B/32) as the backbone network for generating textual features, while ResNet50 serves as the backbone network for extracting image features. The features are normalized (L2 regularization) to the unit hypersphere after being output. We test the performance metrics of CLIP+AntiCo Loss and CLIP+ProxyAnchor combinations on CUB200 and Cars196. The results, presented in Table <ref>, show that after integrating CLIP, our proposed method's Recall@1 improves by 0.3% on CUB200 and by 0.4% on Cars196. These experimental results demonstrate that large pre-trained vision-language models can enhance the performance of deep metric learning methods. §.§ Qualitative Results Enhancing image retrieval performance is the primary objective of optimizing the embedding space with the Anti-Collapse Loss. In addition to quantitative comparative experiments, we also present qualitative retrieval results of our method on three commonly used image retrieval datasets: CUB200, Cars196, and SOP (Stanford Online Products), as illustrated in Fig. <ref>. These three datasets exhibit diverse sample poses, significant variations in sample quantities. Furthermore, there are unique image characteristics in each dataset. In CUB200, images have complex background with minor inter-class differences. In Cars196, there are noticeable color variations within sample classes. SOP features limited data per sample and substantial viewpoint changes. The qualitative results illustrated in Fig. <ref> provide compelling evidence that our Anti-Collapse Loss consistently delivers outstanding retrieval performance, even across datasets exhibiting distinct characteristics. § DISCUSSION The Anti-Collapse Loss proposed in this paper introduces a promising new direction for research in deep metric learning. It enhances the sparsity of feature clusters by maximizing the average coding rate of proxies, thereby alleviating the collapse issue in the embedding space. The method enhances inter-class differences by optimizing proxies, thereby improving classification and clustering performance. However, the relationship between proxies and samples of the same class is not the main focus of this paper, which may limit further improvement in method performance. Proxy-based methods optimize models based on the non-bijective similarity between sample-proxy pairs. 
§.§ Qualitative Results Enhancing image retrieval performance is the primary objective of optimizing the embedding space with the Anti-Collapse Loss. In addition to the quantitative comparative experiments, we also present qualitative retrieval results of our method on three commonly used image retrieval datasets: CUB200, Cars196, and SOP (Stanford Online Products), as illustrated in Fig. <ref>. These three datasets exhibit diverse sample poses and significant variations in sample quantities, and each has unique image characteristics. In CUB200, images have complex backgrounds with minor inter-class differences. In Cars196, there are noticeable color variations within sample classes. SOP features limited data per sample and substantial viewpoint changes. The qualitative results illustrated in Fig. <ref> provide compelling evidence that our Anti-Collapse Loss consistently delivers outstanding retrieval performance, even across datasets with distinct characteristics. § DISCUSSION The Anti-Collapse Loss proposed in this paper introduces a promising new direction for research in deep metric learning. It enhances the sparsity of feature clusters by maximizing the average coding rate of proxies, thereby alleviating the collapse issue in the embedding space. The method enhances inter-class differences by optimizing proxies, thereby improving classification and clustering performance. However, the relationship between proxies and samples of the same class is not the main focus of this paper, which may limit further improvement of the method's performance. Proxy-based methods optimize models based on the non-bijective similarity between sample-proxy pairs. This characteristic enhances computational efficiency but may also induce an isotropic distribution of features among samples of the same class due to the guiding role of proxies. The discriminative ability among intra-class samples significantly impacts the performance of classification and clustering methods. In future work, we will focus on addressing the challenges faced by proxy-based methods in handling the local relationships among samples of the same class, aiming to provide more comprehensive optimization for proxy-based methods. § CONCLUSION In this work, we designed a novel loss function called Anti-Collapse Loss to address the issue of embedding space collapse in existing deep metric learning methods, which is caused by insufficient attention to the global structure of the embedding space. Specifically, we developed three versions of this loss, two of which are aimed at class proxies, while one targets samples. These loss functions optimize the spatial structure of all samples or class proxies in the embedding space by maximizing their coding rate. Our proposed approach performed remarkably well on three commonly used image retrieval datasets, further highlighting the significance of optimizing the overall structure of the embedding space. § ACKNOWLEDGMENTS This work was supported by the 173 Basic Strengthening Plan Technology Field Fund Project (2022-JCJQ-JJ-0221) and National Defense Science and Technology Commission Basic Research Project (JCKY2021208B043).
http://arxiv.org/abs/2407.02341v1
20240702150909
Performance Analysis and Comparison of Full-Fledged 5G Standalone Experimental TDD Testbeds in Single & Multi-UE Scenarios
[ "Maryam Amini", "Catherine Rosenberg" ]
cs.NI
[ "cs.NI" ]
Performance Analysis and Comparison of Full-Fledged 5G Standalone Experimental TDD Testbeds in Single & Multi-UE Scenarios Maryam Amini, and Catherine Rosenberg, Fellow, IEEE, July 8, 2024 ========================================================================== § ABSTRACT Open-source software and Commercial Off-The-Shelf hardware are finally paving their way into the 5G world, resulting in a proliferation of experimental 5G testbeds. Surprisingly, very few studies have been published on the comparative analysis of testbeds with different hardware and software elements. In this paper, we first introduce a precise nomenclature to characterize a 5G-standalone single-cell testbed based on its constituent elements and main configuration parameters. We then build 30 distinct such testbeds and systematically analyze their performance with an emphasis on element interoperability (by considering different combinations of hardware and software elements from different sources), the number and type of User Equipment (UE), as well as the Radio Access Network hardware and software elements, to address the following questions: 1) How is the performance (in terms of bit rate and latency) impacted by different elements? 2) How does the number of UEs affect these results? 3) What is the impact of the user(s)' location(s) on the performance? 4) What is the impact of the UE type on these results? 5) How far does each testbed provide coverage? 6) And finally, what is the effect of the computing resources available to each open-source software? This study focuses on TDD testbeds. 5G-SA experimental testbed, 5G open-source, 5G COTS, Performance analysis. § INTRODUCTION A fundamental shift in the architecture of mobile networks has happened, spearheaded by the 3GPP, which has introduced a novel, dis-aggregated, and open architecture for the fifth generation (5G) of cellular networks to enable MNOs to source solutions tailored to their specific needs from various vendors. The main elements of a 5G system remain the RAN, the 5GC, and the UE, but all of them have gone through significant transformations. The advent of the NG-RAN <cit.>, a dis-aggregated architecture composed of several vendor-neutral elements connected by open, standardized interfaces with a major shift towards softwarization, marks a pivotal change for cellular networks. Concurrently, the core network has adapted to host a multitude of novel NFs designed to accommodate the diverse services envisioned for 5G. On the UE side, a proliferation of devices with significantly differing features creates new challenges for the network. In this context, experimental testbeds have become crucial for testing and validation, for verifying interoperability, and for pinpointing any gaps in the design of the different elements of this open, dis-aggregated architecture. These testbeds benefit from the latest developments in: * Software-defined radios (SDRs), the hardware devices that serve as the radio component of 5G testbeds. Without them, over-the-air transmissions would need to be simulated, which would defeat the whole purpose of implementing a full-fledged experimental testbed. * Several sophisticated software platforms and tools that have emerged for both the 5GC and the NG-RAN.
Specifically, open-source frameworks, such as srsRAN <cit.>, which focuses on the RAN, and OAI <cit.>, which offers both RAN and 5GC components[Both these platforms also offer 4G solutions.], have gained significant momentum. This paper focuses only on 5G-SA single-cell testbeds that use TDD. We consider and build 30 such testbeds, varying in the software/hardware elements of the RAN as well as the number and type of the connected UEs. Note that we have kept the 5GC the same in all those testbeds because we have shown in a previous paper <cit.> that performance is not significantly impacted by the core and that the different RAN elements interoperate well with the 5GC that we have tried. Most of the papers on 5G-SA open testbeds focus on studying the performance of single-UE scenarios in terms of bit rate and latency, with occasional consideration given to coverage. Very few address interoperability and, to the best of our knowledge, none addresses multi-UE scenarios conducted in multiple locations to examine the impact of location on the performance of different types of UEs. A comprehensive study of multi-UE 5G-SA testbeds is yet to be done to fully unveil the potential of these experimental platforms and the interoperability of the different elements. This paper aims at shedding light on the impact of each element on the overall performance of a testbed in the context of multi-UE scenarios. It examines how the location and the type of each UE play a role in performance. It also studies the interoperability of different types of UEs with different hardware and software elements of the RAN. This paper synthesizes and extensively expands our preliminary works reported in <cit.>. Specifically, we have: * Built and studied 28 single-cell 5G-SA TDD testbeds, each differing by the combination of RAN elements (software and SDR) and the number and types of UE(s) used. We evaluated these testbeds from an interoperability perspective as well as from a performance standpoint, using well-defined quantitative and qualitative metrics, including data rate, latency, and coverage. * Explored the multi-UE case for different locations systematically, starting from good locations (please see Sec. <ref>, where we explain what we mean by “good”) and progressively making the locations worse. * Built two additional testbeds to evaluate the computational resource consumption of each software platform as the number of connected UEs increases, by changing the PC on which the software platforms are hosted, offering a nuanced perspective on their strengths and limitations. This analysis aids researchers and practitioners in making informed decisions when selecting the appropriate software platforms and their host computing nodes for their specific use cases. The rest of the paper is structured as follows: In Section <ref>, we give the necessary background and present our nomenclature. Section <ref> provides a comprehensive review of the relevant literature. In Section <ref>, we introduce the different 5G-SA elements that we will consider. Section <ref> presents the metrics used for our assessments, followed by the description, methodology, and results for each test scenario. Section <ref> concludes the paper. An acronym table is given at the end of the paper. § BACKGROUND AND NOMENCLATURE   In this section, we present the background material as well as the nomenclature used in the paper to fully characterize a single-cell 5G-SA testbed.
As mentioned earlier, the three primary elements of any cellular network are the core network, the RAN, and the UE. UEs are devices that can differ widely in their characteristics, but they are all equipped with a SIM card and seek connection to the cellular network. The RAN provides access to the wireless medium to facilitate communication between the UEs and the core network. Finally, the core network is where all service and management aspects are handled. It also serves as the hub to connect the UEs to any external data network, including the Internet. The migration from LTE to 5G is not straightforward, as both the RAN and core have been significantly changed. Indeed, all LTE base stations and most of the core network must be replaced for a cellular network to be full-fledged 5G. Consequently, MNOs have opted to transition to 5G in two phases, first from LTE to 5G-NSA and then from 5G-NSA to 5G-SA. In 5G-NSA, the core and control plane are LTE-based, while the data plane follows 5G standards. This enables MNOs to integrate 5G base stations into their existing LTE network to handle the data plane and gradually transition their networks to a complete 5G-SA setup. A simplified illustration of these two phases is provided in Fig. <ref>. In a 5G-SA system, the RAN, which used to be a monolithic black box in LTE, now has an open architecture with well-defined sub-elements and interfaces. Apart from the RF element, which is hardware-based, all other RAN elements are software-based and can be integrated and executed on a COTS computer. Similarly, the 5GC is characterized by a set of functionalities that are software-based and can be executed on COTS computers. Likewise, a UE can be decomposed into a software and a hardware element. Thus, we can define a single-cell, 5G-SA experimental testbed containing n UEs, operating over-the-air, by the set 𝒯 of its software and hardware elements (please see (<ref>)) and a set 𝒞 containing the configuration parameters (please see (<ref>)). 𝒯 = {S_5GC, H_5GC, S_RAN, H_RAN, (S_UE_1, H_UE_1), …, (S_UE_n, H_UE_n)} 𝒞 = {b, B} S_5GC (resp. S_RAN and S_UE_i) is the collection of software sub-elements for the 5GC (resp. the RAN and the i-th UE), and H_5GC (resp. H_RAN and H_UE_i) is the collection of hardware sub-elements for the 5GC (resp. the RAN and the i-th UE). b is the band central frequency, and B is the bandwidth. Note that the value of b has a one-to-one mapping to the duplexing mode, i.e., TDD or FDD.
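To make this nomenclature concrete, the following sketch encodes the sets 𝒯 and 𝒞 as a small data structure. The field names and the example values (drawn from the elements introduced later in the paper) are purely illustrative and are not part of any of the software platforms discussed here.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Testbed:
    # T: software/hardware elements of the single-cell 5G-SA testbed
    s_5gc: str                      # e.g. "Open5GS"
    h_5gc: str                      # host PC running the core
    s_ran: str                      # e.g. "srsRAN" or "OAI-RAN"
    h_ran: str                      # SDR, e.g. "USRP B210" or "USRP X410"
    ues: List[Tuple[str, str]] = field(default_factory=list)  # (S_UE_i, H_UE_i)
    # C: configuration parameters
    band: str = "n78"               # b, fixes the duplexing mode (n78 -> TDD)
    bandwidth_mhz: float = 40.0     # B

example = Testbed(
    s_5gc="Open5GS", h_5gc="PC #1",
    s_ran="srsRAN", h_ran="USRP B210",
    ues=[("modem firmware", "Quectel RM502Q-AE + Linux host")],
)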
Fig. <ref> shows the different elements of a 5G-SA testbed. Starting with the RAN, we can decompose it into four sub-elements: two hardware ones, i) an SDR equipped with an antenna system responsible for the RF front-end and ii) a computer to host the RAN software platform; and two software ones, iii) a software platform running the remaining 5G protocol stack and iv) the operating system (OS) of the computer. Clearly, a critical part of the RAN is the SDR. In recent years, thanks to the increased availability of SDR devices, fast and cheap implementation of experimental 5G testbeds has become possible. Currently, the three major SDR vendors are Ettus Research <cit.>, Lime Microsystems <cit.>, and Nuand <cit.>. The two predominant open-source solutions for the S_RAN are from srsRAN <cit.> and OAI <cit.>. As illustrated in Fig. <ref>, the SDR is connected to the computer hosting S_RAN through a wired connection. The SDR then exchanges the radio samples with the RAN software platform using a driver installed on the host computer. In the past couple of years, 5GC solutions developed by OAI <cit.>, Open5GS <cit.>, and free5GC <cit.> have gained significant popularity among researchers. Each of these solutions supports a different set of 5GC NFs. However, they all contain the essential NFs required to implement an E2E experimental testbed with basic functionality. These necessary NFs are: Access and Mobility Management Function (AMF), Session Management Function (SMF), User Plane Function (UPF), Unified Data Management (UDM), Unified Data Repository (UDR), Authentication Server Function (AUSF), and NF Repository Function (NRF), ensuring UE registration, authentication, Packet Data Unit (PDU) session establishment and management, and Non Access Stratum (NAS) security. Also, much like the S_RAN, the S_5GC needs a host computer and its OS for execution. Lastly, on the UE front, there are three possible options. The most obvious one is to use a phone. Unfortunately, phones that are 5G-SA compatible often cannot associate with experimental testbeds. Some common reasons for such behavior include a discrepancy between the set of 5G-SA bands supported by the phone and the bands supported by the S_RAN, and the phone's failure to detect specific PLMN identifiers. We found one 5G phone that was able to work in 5G-SA mode with all our testbeds; its description is given later in the paper. The other two options use a computer to host some of the UE protocol stack. In the computer-based UE options that we have used, the computer is connected to a 5G modem that acts both as an RF front-end and as a host for the lower-layer protocols (up to Layer 3). Another computer-based UE option is to use an SDR as an RF front-end while the computer hosts S_UE (both srsRAN and OAI offer UE software platforms). As discussed in <cit.>, 5G modems have proven to be more convenient for testing than phones due to their support for multiple 5G-SA bands, their ability to associate with non-public networks, and their ease of configuration. For completeness, we note that another sub-element of the UE is the SIM card. The use of a programmable SIM card enables the modification of authentication information on the UE, based on the testbed's requirements. With respect to the testbed depicted in Fig. <ref>, note that the number of computers is often reduced by running multiple software platforms (e.g., S_RAN and S_5GC) on the same computer. In this study, we build, analyze, and compare the performance, measured on the UE side, of different single-cell 5G-SA testbeds using TDD, i.e., made of different combinations of elements and sub-elements. The elements/sub-elements under study are described in Section <ref>. We will also show how computational resources affect the performance of the testbed and how different software platforms utilize those resources. To keep this study tractable (in terms of number of testbeds) and due to page limitations, we have restricted ourselves to TDD-based testbeds. A similar study on FDD testbeds is planned for future work. § LITERATURE REVIEW   In this section, we provide a comprehensive overview of the existing literature related to experimental open 5G-SA testbeds.
While there are a number of publications on this subject, our study deliberately narrows its focus to papers featuring experimental testbeds equipped with all essential elements for an operational functionality, as opposed to simulating or emulating parts of the testbed. Next, we first review papers that have focused on a single full-fledged 5G-SA testbed and then those that dealt with comparisons of full-fledged testbeds. §.§ Targeted Studies on the performance of a single testbed Haakegaard et al. focus on a 5G-SA testbed, operating in TDD mode, employing Open5GS and srsRAN in  <cit.>. This study compares the theoretical and achieved performance of the testbed in terms of UL & DL bit rates, latency, and coverage. Additionally, the authors study the effect of several radio parameters, such as the number of SDR antennas, the bandwidth, and the Time-Division Duplexing (TDD) frame structure, on the performance of the testbed. In <cit.>, Bozis et al. present their 5G-SA testbed, operating in TDD mode over band n78 which features the 5GC, RAN, and UE solutions from the OAI project. In this testbed, two USRP N310 devices are utilized for the SDR-based UE and the RAN. The RAN and UE are connected through RF cables instead of wirelessly, over-the-air. This study reports the latency and DL/UL throughput of a single-UE for two bandwidth scenarios. In the evaluation done by Chepkoech et al. <cit.>, the performance of six testbeds operating in LTE, 5G-NSA, and 5G-SA modes are studied with a focus on metrics such as throughput, latency, and signal strength. Notably, the only testbed operating in 5G-SA mode is implemented using srsRAN_4G, and Open5GS, and over band n7, which is an FDD 5G-NR band. Sahbafard et al. provide a comprehensive assessment of a 5G-SA testbed, operating on TDD mode and utilizing OAI for both the RAN and the 5GC platforms in <cit.>. This testbed, uses Quectel 5G modems as UEs. The authors compare the modem's achieved performance while using USRP B210 or N310 as the SDR of choice. They also conduct an analysis of the signal strength, in both single-user and multi-user scenarios, to evaluate the testbed's coverage. The authors of <cit.> provide a tutorial on establishing a slice-aware 5G-SA testbed utilizing srsRAN_Project and Open5GS for the RAN and 5GC software platforms. The paper provides insights into the challenges of integration of different elements to the testbed. Furthermore, it offers valuable information on potential issues faced during the implementation phase, along with troubleshooting strategies for these scenarios. Note that, the authors have not mentioned what specific band they are using for their tests. They are using 30 KHz for subcarrier spacing which is mostly used for TDD bands. §.§ Comparative studies on the performance of multiple testbeds The literature addressing comparative analysis of 5G-SA testbeds with different combinations of elements is quite scarce. While the inherent design of open-source software platforms aims to facilitate interoperability, it is crucial to verify its feasibility, simplicity, and performance. To the best of our knowledge, the only existing studies on this subject are <cit.> as well as our conference papers <cit.>. The authors of <cit.> have provided a comparison between the performance of srsRAN and OAI in three aspects, namely, UE's DL bit rate, latency, and a qualitative comparison of the quality of a video call made by the UE. More so, they assessed the interoperability of the employed open-source RAN and 5GC software. 
The authors configured all their testbeds to operate over band n78, in TDD mode. Additionally, the study highlights the differences in performance between SDR-based UEs and COTS UEs. This study links the differences between the rates achieved by OAI and those achieved by srsRAN to the differences in their Quadrature Amplitude Modulation (QAM) implementation. Later, in Section <ref>, we will show that another explanation might be that the computational resources available to S_RAN have a significant impact on the UE's achieved rate. <cit.> presents a comprehensive assessment of the performance of three 5GC software platforms, Open5GS, Open5GCore, and Amarisoft 5G Core, for three types of 5G modems, in terms of both throughput and latency, when the same RAN (Amarisoft 5G RAN) is used. In <cit.>, Mubasier et al. have implemented two distinct testbeds, both operating on band n77, which is a TDD band. The first testbed features OAI 5GC and RAN, along with a USRP B210 and a host laptop. The second is a testbed utilizing a USRP X300 in conjunction with srsRAN and Open5GS. This study then evaluates network connectivity, the performance of the testbed seen by the UE, and the computing resource utilization of the open-source software. <cit.> is another comparative study on the performance of testbeds comprising different elements. In this study, OAI and srsRAN were the focus, and a comparative analysis of their features, as well as quantitative results in terms of throughput, signal strength, and latency, was presented for two 5GC software platforms, namely Open5GS and free5GC, in a single-UE scenario. All the tests were conducted over band n78. In our previous study <cit.>, we focused on evaluating the interoperability of 5G open-source software by examining the UE's achieved performance for various combinations of software platforms in a 5G-SA testbed. Our tests were all done over band n78. We showed that the choice of 5GC does not affect the performance observed by the UE. Earlier, in <cit.>, we compared two 5G-SA testbeds that differed only in S_RAN, for two different SDR devices, namely USRP B210 and X410. We also studied the effect of the connectivity mode between the SDR and the UE, i.e., wired or wireless. In that study, we selected band n3, an FDD 5G-NR band, for our tests and comparisons. This paper is a comparative study of several 5G-SA experimental testbeds. We have used Open5GS as the fixed 5GC for all the testbeds. Moreover, we employed the same set of configuration parameters, i.e., the same frequency band and bandwidth, for all the testbeds to ease the comparison of the results. The analysis in this study is conducted from two perspectives: * The performance achieved by the UEs. This is an extension of what we did in  <cit.>. The extension is on several fronts: we consider coverage as a new metric, different types of UEs, and different locations, as well as multi-UE scenarios. * The computational resource consumption of the different software elements, and the impact of their host PC on the performance. § ELEMENTS & TESTBEDS UNDER-STUDY   In this section, we provide a detailed description of the various elements and sub-elements that have been utilized for the 5G-SA testbeds of our comparative study. The list of those elements/sub-elements is given in Table <ref>. The hardware elements of our testbeds are shown in Fig. <ref>. We then introduce the testbeds that we have considered and built for this study.
We have considered all possible combinations of 5GC, RAN and UE sub-elements. §.§ The RAN §.§.§ RAN Software Platforms * Platform #1- srsRAN: It comprises open-source 4G and 5G software radio suites developed by the Software Radio Systems team. The project includes two main repositories, namely, srsRAN_4G and srsRAN_Project, both available under the GNU Affero General Public License version 3 (AGPLv3). While srsRAN_4G provides a prototype for 5G-SA, the supported features are minimal, and there will be no further updates. srsRAN_Project though, offers a full 5G-SA solution based on a complete codebase. In this study we have used srsRAN_Project, and we will refer to it as srsRAN in the following. It supports all TDD and FDD bands on Frequency Range 1 (FR1). The latest release of the software (srsRAN_Project 23.10.1) offers the flexibility to configure over 400 parameters in a user-friendly way, which has made working with srsRAN particularly convenient. * Platform #2- OAI-RAN: Developed by the Eurecom team, the OAI software platform provides LTE, and 5G solutions for the RAN, and unlike srsRAN, the 5G core. For clarity, we will refer to the RAN solution from OAI, as OAI-RAN, and the 5GC as OAI-5GC. This project is distributed under the OAI 5G Public License. Compared to srsRAN, OAI-RAN provides more features such as more subcarrier spacing options and support for Frequency Range 2 (FR2). However, configuring OAI-RAN is more complex than configuring srsRAN since it requires modifying code blocks in the configuration file. In this regard, we highly recommend reading <cit.>, where the authors have described how to work with the OAI-RAN configuration file. We have used the “2024.w09” version of the develop branch of OAI's GitLab. §.§.§ RAN Hardware Platforms (SDR) * RAN SDR #1- USRP X410: it is a high-end, all-in-one SDR. It comes with advanced features like four independent Tx and Rx channels, each capable of 400 MHz of bandwidth. The X410 model is equipped with a built-in GPS Disciplined Oscillator (GPSDO) for improved timing synchronization. Additionally, it offers multiple networking interfaces for data and control offloading, such as two Quad Small Form-factor Pluggable 28 (QSFP28) ports supporting data transfer rates of up to 100 Gigabit Ethernet (GbE), along with standard interfaces like Ethernet and USB-C. In our experiments, we utilize one USRP X410 connected to the RAN host computer via two QSFP28-10GB connections and one Ethernet connection to the network. * RAN SDR #2- USRP B210: This single-board, low-cost USRP is a dual-channel transceiver, providing up to 56 MHz bandwidth. B210 comes with a USB 3.0 connector to enable a connection to the RAN host PC. Since this USRP lacks a built-in GPSDO, maintaining synchronization might become a challenge. We have chosen to use a USRP B210 in our tests, as it is arguably the most popular SDR in the research community as of now. Hence, we can gain a clear understanding of what this USRP model offers compared to the high-end X410. §.§.§ RAN Host Computers To investigate the influence of computing resources on the testbed's performance, we utilize the two PCs listed in Table <ref>, featuring different levels of computational power to host the RAN software. * PC #1: The first host computer utilized in our tests is equipped with an 11^th Gen Intel^(R) Core^TM i9-11900K processor, running at the base frequency of 3.50GHz. This system operates on Ubuntu 20.04.6 LTS, featuring kernel version 5.15.0-60-low-latency. 
* PC #2: The second host is a mini PC featuring an Intel^(R) Core^(TM) i7-10700 CPU @ 2.90GHz. This PC also runs Ubuntu 20.04.6 LTS, with kernel version 5.15.0-84-low-latency. Note that the SDR requires a driver installed on the RAN host computer so that the two can communicate. All USRP products from Ettus use the same hardware driver, called the USRP Hardware Driver (UHD). In this study, we have installed UHD_4.5.0.0 on both RAN host computers. §.§ The Core §.§.§ 5GC Software Platforms As discussed earlier, we are restricting the study to a single 5GC software platform, namely Open5GS. It is a popular core network solution that offers not only a 5GC but also an Evolved Packet Core (EPC) solution, enabling the implementation of 5G-SA, 5G-NSA, and LTE networks. The 5GC solution is based on 3GPP-Rel.17 and contains the following network functions: NRF, Service Communication Proxy (SCP), Security Edge Protection Proxy (SEPP), AMF, SMF, UPF, AUSF, UDM, UDR, Policy and Charging Function (PCF), Network Slice Selection Function (NSSF), and Binding Support Function (BSF). It is open-source and available under AGPLv3. For our test scenarios, we have used Open5GS v2.7.0. §.§.§ 5GC Hardware Platforms The core hardware platform is one of the two PCs described above, since we execute both S_5GC and S_RAN on the same host computer. §.§ The UEs We consider three different UEs. * UE_1: Our first UE is a OnePlus Nord CE 2 5G COTS phone, which is 5G-SA compatible. This phone runs Android 11 and supports 11 5G-SA bands. In order to force this phone to operate in 5G-SA mode only, we installed an Android application called 5G Switch - Force 5G Only <cit.>. This application is free and does not require the phone to be rooted. * UE_2: The second UE comprises a Quectel 5G modem, the RM502Q-AE, connected to a host PC. The PC (Intel^(R) Core^TM i7-3770 CPU @ 3.4GHz) runs Ubuntu 20.04 with kernel version v5.14.0. For further details on the challenges encountered during the setup of this UE, please refer to <cit.>, where we have described the necessary configurations for this type of UE. Note that this UE is not easily movable. * UE_3: The third UE is composed of another Quectel 5G modem (RM502Q-AE) connected to a Dell laptop equipped with a 12^th Generation Intel Core i7-1255U processor running at 1.70 GHz and Windows 11. This choice allows us to investigate potential performance differences between the second and third UEs and determine if these differences can be attributed to their respective host computer operating systems (Ubuntu vs. Windows). §.§ Miscellaneous Last but not least, we utilized sysmoISIM-SJA2 programmable SIM cards from sysmocom <cit.> in this study. These SIM cards are 3GPP-Rel.16 compliant and come with the credentials required for modifying them. Additionally, for configuring our testbed's PLMN, we assigned the Mobile Country Code (MCC) and Mobile Network Code (MNC) values of 001 and 01, respectively. §.§ Testbeds Under-Study Now that we have introduced all the elements under study, we can present the testbeds that we have built. With respect to the definitions of the sets 𝒯 and 𝒞 in (<ref>) and (<ref>), respectively, we have configured all the tests with b = n78 and B = 40 MHz bandwidth, with a sub-carrier spacing (SCS) of 30 kHz. Note that n78 operates in TDD mode. Consequently, it is imperative to configure the same TDD slots and symbols in both S_RAN for a meaningful comparison. We set the frame structure to be “DDDDDDFUUU”, accounting for 6 DL slots, 3 UL slots, and 1 Flexible slot. Additionally, we picked PC #1 as the host computer running S_5GC and S_RAN. We have built and analyzed the performance of 28 testbeds that consist of all the possible combinations of the other elements described above with either (any) one, (any) two, or the three UEs. Specifically, considering the three UE devices that we have described above, we created seven different UE combinations based on their number and types, i.e., {UE_1}, {UE_2}, {UE_3}, {UE_1, UE_2}, {UE_1, UE_3}, {UE_2, UE_3}, {UE_1, UE_2, UE_3}. Recall that we consider two SDR devices, i.e., {USRP B210, USRP X410}, and two S_RAN platforms, i.e., {srsRAN, OAI-RAN}. Thus, using all possible combinations of these three groups (7 × 2 × 2), we built 28 testbeds, as enumerated in the snippet below. Additionally, to assess the computational resource consumption of the two 5G open-source software platforms, we have built two additional testbeds using a less powerful PC, PC #2, and the two RAN platforms. This setup allows us to evaluate how the performance of each open-source software platform is impacted by the host PC and the number of connected UEs. Hence, in total, we have built and studied 30 testbeds.
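The 7 × 2 × 2 combination count can be reproduced with a few lines of Python; the snippet below is only a bookkeeping aid for the reader and does not correspond to any tooling used in the experiments.

from itertools import combinations, product

ues = ["UE1 (phone)", "UE2 (modem + Linux PC)", "UE3 (modem + Windows laptop)"]
sdrs = ["USRP B210", "USRP X410"]
ran_sw = ["srsRAN", "OAI-RAN"]

# All non-empty subsets of the three UEs: 2^3 - 1 = 7 combinations.
ue_combos = [c for r in (1, 2, 3) for c in combinations(ues, r)]

testbeds = list(product(ue_combos, sdrs, ran_sw))
print(len(ue_combos), len(testbeds))   # 7, 28
# Plus the two extra testbeds hosted on PC #2 (one per RAN platform) -> 30 in total.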
§ TEST SCENARIOS, METHODOLOGY AND RESULTS   In this section, we first define the metrics we use to assess the performance of the 5G-SA testbeds. We then introduce the test scenarios and the corresponding methodology, followed by a presentation of the results. §.§ Performance Metrics §.§.§ Data Rate We measure the average UL and DL data rates in Mbps using iperf3 <cit.>. Each experiment runs for three minutes, and we report the average of the achieved UL and DL data rates. Note that, to run iperf3 on UE_1 (the Android phone), we installed the he.net - Network Tools application <cit.>. §.§.§ Latency The E2E latency is measured in milliseconds (ms) using the ping command at the UE side. We conducted each test for three minutes and report the average latency between the UE and the 5GC. §.§.§ Coverage We consider the RSRP in dBm to be the measure of coverage. The coverage tests were conducted for UE_1. We used the Android application 5G Switch - Force 5G Only <cit.> on UE_1, the mobile phone, to report the RSRP. §.§.§ Computational resource consumption Finally, to monitor how each RAN software platform consumes the computational resources of its host computer, we use the top command. This allows us to observe the running processes and the overall host computer resource utilization (CPU and memory). We run this command on the host PC that executes both the 5GC and the RAN, for three minutes, and report the maximum percentage of CPU and memory utilization for each software platform. §.§ Methodology & Results for Data Rate Assessment §.§.§ Methodology For the assessment of the data rate of each UE within each testbed, we had to carefully take the location of each UE into account. The first tests were done when all the UEs of each testbed were located in “good” positions, i.e., at positions where the downlink data rate of a single UE was consistently at its peak (characterized by the highest available MCS). The comparison of the performance of the different testbeds when all UEs are in those positions gives us valuable information on the best performance the testbeds can achieve. Fig. <ref> illustrates the map of the fourth floor of the Centre for Environmental and Information Technology building on the main campus of the University of Waterloo, where we conducted our tests.
Our lab is in room 4148, and we have indicated the location of the SDR by a star sign on the map. We first identified three good positions in our lab, all in the vicinity of the SDR device. We have marked the three selected positions for the UEs in Fig. <ref> as A1, A2, A3. UE_1 was placed at A1, UE_2 at A2, and UE_3 at A3. Recall that UE_2 is not easily movable and hence was kept at A2 for all tests. Hence, for all other locations, we only checked the rates seen by UE_1 and UE_3. The results of this initial round of tests where the UEs are in good positions, with either one, two or three UEs are presented in Table <ref> (please refer to tests {T1,T2,T3,T8,T9,T10,T15}). Next, to study the impact of locations on the data rate observed by the UE(s), we selected two additional positions where the drop in the data rate was significant enough to categorize the positions as “fair”, and “bad”. In this regard, after multiple preliminary tests conducted using UE_1, and UE_3, position D on the map was selected as the position which would yield a “fair” rate, with MCS values observed between 15 to 17 and position E as the “bad” position, with MCS values observed between 9 to 11. The single and multi-UE results corresponding to these tests are presented in Table <ref>, with test ids {T4,T5,T6,T7,T11,T12,T13,T14,T16,T17}. Note that the other positions in Fig. <ref> are used for our coverage study. In order to keep the number of tests reasonable and the size of the tables manageable, we only used 14 of the 28 testbeds to study the impact of location on performance, by fixing the SDR to USRP B210 in this first round of tests. A second round of tests to compare the SDR, is described later in the paper. §.§.§ Results on tests conducted on “good” locations * Impact of the RAN software: Throughout our tests we observed that srsRAN delivers much higher UL rates, while OAI-RAN performs better on the DL, regardless of the type of the SDR and the number or the type of connected UEs. * Impact of the type of UE: Our tests indicate that for almost all of the single/multi-UE scenarios, irrespective of the RAN software, the two modem-based UEs (UE_2, and UE_3), outperform the phone in terms of DL rates. Focusing on srsRAN, we observe that in single-UE scenarios (T1 vs. T2 & T3), UE_1 is receiving 93% of the DL rate of UE_2 and UE_3. Similarly, UE_1 receives 92% and 90% of the DL rate of UE_2 and UE_3, when two UEs are connected at the same time, in T8 & T9. When it comes to T15 (corresponding to the three UEs case), UE_1 is receiving only 84% of the DL rate achieved by the other two modem-based UEs. This pattern is also evident in the results achieved by OAI-RAN even if in the single-UE scenarios, the difference between the DL rates of UE_1 and UE_3 is negligible. Indeed, there is an 8% gap between the DL rates of UE_1 and UE_2. Moving to the multi-UE scenarios with OAI-RAN, we see that the gap between the DL rate of UE_1 and the other two modem-based UEs increases. In two-UE scenarios, T8 and T9, UE_1 achieved 90% and 94% of the DL rate of UE_2 and UE_3, respectively. When all UEs are connected in T15, UE_1 is only able to receive 79% and 77% of the DL rate achieved by UE_2 and UE_3, respectively. The results on the UL are difficult to interpret since on single-UE scenarios, UE_3 does better than the other UEs for srsRAN and worse for OAI-RAN. 
* Cases with multiple UEs: Throughout our multi-UE tests in “good” locations, we observed that both S_RAN do seem to share the resources roughly equally among the UEs in both UL, and DL directions. For instance, comparing T3, T8, and T15 in the UL direction, we see that if srsRAN is used, the maximum UL rate achieved in a single-UE scenario is 41.3 Mbps. In the two-UE scenario (T8), UE_1 and UE_2 receive 45% and 43% of this rate, respectively. Moreover, in T15, when all UEs are connected, UE_1, UE_2, and UE_3, they can send 24%, 26%, and 34% of the maximum achieved UL rate in the single-UE scenario. If OAI-RAN is used, the maximum UL rate achieved in single-UE scenarios is 24.5 Mbps. We see that in T8, UE_1 and UE_2 each can send 56% and 48% of this maximum rate, respectively. In T15, UE_1, UE_2, and UE_3 each can transmit 46%, 34%, and 39% of the maximum UL rate. While OAI-RAN performs poorly in the UL, it shows better resource sharing capabilities in multi-UE scenarios. §.§.§ Observations on tests conducted on “good” locations * Impact of the type of UE: Overall, we found it easier to work with modem-based UEs. While srsRAN did not exhibit any apparent differences in the attachment process of the UEs, when working with OAI-RAN, we observed that the phone, UE_1, had a harder time attaching to the testbed and maintaining its connection for three minutes during the tests. We did not observe such a behaviour with the two modem-based UEs, while connected to OAI-RAN. * Effect of multiple UEs: Throughout our multi-UE tests in “good” locations, we noticed that OAI-RAN crashed several times, specifically when the tests were done on the UL direction. §.§.§ Results on tests conducted on “fair” and “bad” locations * Impact of location: The overall observation is that there is a significant difference between the two RAN software platforms: while OAI-RAN adjusts the power automatically, srsRAN provides a static power setting mechanism, which is through setting the Tx/Rx gain values. This is difficult to adjust in scenarios with multiple UEs. We carefully selected the Tx and Rx gains that resulted in the best achieved rates in “good” positions, and maintained those values for the gNB throughout all our tests for all testbeds, and in that case, srsRAN loses its superior performance, in the presence of multi-UEs, each located on a different type of position. As for OAI-RAN, the automatic power adjustment feature provides some consistency in the results based on the locations of the UEs. We see from T1, T4, and T6 that moving UE_1 from A1 to D, and to E causes a 52% and 76% drop in the DL data rate, and 35%, and 36% in the UL rate, respectively. Looking at UE_3 (T3, T5, T7), we see that the drop in the DL (UL respectively) rates from point A3 to D is 40% (20% respectively), and from A3 to E is 67% (40% respectively). OAI-RAN's performance is thus not impacted by the location of the connected UE in the UL direction as much as it is in the DL. The same trend is also seen in the multi-UE scenarios with OAI-RAN. * Impact of the type of UE: The most unexpected observation for us was the fact that srsRAN seems to provide very limited coverage for UE_1. Comparing T4 and T5, as well as comparing T6 and T7, you can see that while UE_3 is able to achieve 51 and 23 Mbps in DL, the phone is only getting around 7 and 3 Mbps respectively. This trend is also evident in multi-UE scenarios with srsRAN. On the OAI-RAN side, we did observe this trend, but it was much less pronounced. 
For instance, comparing T4 and T5, we see that the phone is getting 48 Mbps in the DL, while the modem is achieving around 66 Mbps. There is certainly a difference, but it is nowhere near as drastic as the gap observed with srsRAN. * Effect of multiple UEs: As mentioned in the previous points, since srsRAN does not support automatic power adjustment, the results in “fair” and “bad” locations are very poor. Additionally, the unforeseeable discrepancy between the performance seen by UE_1 and UE_3 in those locations has made this category of results for srsRAN haphazard and inconsistent, not revealing any kind of pattern. However, OAI-RAN was more reliable. OAI-RAN seems to share the Physical Resource Blocks (PRBs) equally among users, and the final rate achieved by each UE is then determined by its MCS value. For instance, taking a look at T3, T4, and T11, we see that in the case of T11, the two-UE scenario, each UE receives roughly half of what it used to get at the same location in the single-UE scenario: e.g., UE_1, which was receiving about 48 Mbps at D in the single-UE scenario, now receives 19.2 Mbps when UE_3 is also connected from location A3, and UE_3, which was receiving about 102 Mbps at A3 in the single-UE scenario, now receives 52.9 Mbps in the two-UE scenario with UE_1 at location D. §.§.§ Observations on tests conducted on “fair” and “bad” locations * Effect of multiple UEs: We have to mention that for both software platforms, we were unable to conduct T17. Despite attempting the test more than seven times, UE_1 was unable to maintain its connection for the duration of the test (three minutes) for each data rate test at location E. §.§.§ Second round of tests to compare the SDRs For the second round of tests, we configured all the tests to be done at “good” locations. The results are presented in Table <ref>. In our tests, we observed that between the two SDRs, the USRP X410 yielded better UL rates. Based on the results presented, OAI-RAN shows the largest improvement in UL rates when the USRP X410 is utilized (up to 48% improvement). For instance, in a single-UE scenario with UE_3, using {USRP B210, OAI-RAN} results in 20.3 Mbps in the UL, whereas using {USRP X410, OAI-RAN} results in a 45% improvement, reaching 29.5 Mbps. In the same scenario, if srsRAN is used, changing the SDR from USRP B210 to USRP X410 results in no improvement. Additionally, in single-UE scenarios, we observe the best DL performance with the USRP X410. As an example, when UE_1 is the only connected UE, the combination of {USRP X410, OAI-RAN} results in a 13% improvement in DL rate compared to {USRP B210, OAI-RAN}. For the same scenario and UE, the combination of {USRP X410, srsRAN} improves the DL rate by 8% compared to {USRP B210, srsRAN}. §.§ Methodology & Results for Latency Assessment §.§.§ Methodology To analyze the E2E latency, we followed the same methodology as for the data rate assessments, i.e., we used the same testbeds, elements, and configurations described in the data rate assessments. We also used the same single/multi-UE scenarios with different types of locations. The latency tests were done using the ping command. For each assessment, a ping command from the measuring UE to the IP address of the AMF in the core network was conducted. Based on our observations, having multiple connected UEs did not affect the E2E latency experienced by each UE. Furthermore, the positions of the connected UEs also did not appear to influence the final result.
Therefore, to simplify comparisons, we only report the latencies achieved by different types of UEs in single-UE scenarios, where each UE is located in a “good” position. Table <ref> presents our results. §.§.§ Impact of the RAN software Similar to the data rate tests, we observe the impact of the choice of S_RAN on the E2E latency. Although recent updates for both software platforms have brought the data rates closer together, a significant difference in the latency results persists. OAI-RAN outperforms srsRAN in this regard. Comparing the shortest E2E latency achieved by UE_1 when connected to the USRP B210 under each S_RAN, the result achieved by OAI-RAN is 70% shorter than that of srsRAN. §.§.§ Impact of the type of UE Table <ref> shows that the type of UE plays some role in the final E2E latency. Notably, UE_1 consistently achieved lower latency compared to the modem-based UEs. Additionally, within the modem-based UEs, UE_2, which is a modem connected to a Linux system, shows slightly better performance than UE_3, the same modem connected to a Windows laptop. §.§.§ Comparison between the SDRs In our tests, we found that, other than for two scenarios, the measured latencies were relatively consistent regardless of the SDR device used. The two exceptions are: the latency achieved by UE_1 with {USRP B210, srsRAN} is 6.9% lower than with {USRP X410, srsRAN}, and the latency achieved by UE_3 with {USRP B210, OAI-RAN} is 13% lower than with {USRP X410, OAI-RAN}. §.§ Methodology & Results for Coverage Assessment §.§.§ Methodology We assess the coverage of four testbeds using UE_1, specifically focusing on the impact of the SDR and S_RAN. These are the four combinations of {srsRAN, OAI-RAN} for the S_RAN with {USRP B210, USRP X410} for the SDR. All the measurement locations are indicated on the map, as shown in Fig. <ref>, by purple circles. At each location, we utilized the 5G Switch - Force 5G Only application on UE_1 and recorded the RSRP values in Table <ref> (note that any cell with an X in the table indicates a disconnection). §.§.§ Impact of the software Due to its automatic power adjustment feature, OAI-RAN outperforms srsRAN in terms of coverage support. A notable observation was that in areas with very poor signal strength, such as points L, G, or M, OAI-RAN exhibited a tendency to crash frequently. In contrast, the srsRAN gNB at such points continues to run, but the UE is unable to connect. §.§.§ Comparison between the SDRs Comparing the coverage maps of the two SDR devices, it is evident that the USRP X410 exhibits superior performance. However, the impact of the S_RAN is more influential. Specifically, the testbed utilizing {OAI-RAN, USRP B210} performs better than {srsRAN, USRP X410}. §.§.§ Duplexing Mode Assessment * Methodology: To study the impact of duplexing modes on the performance seen by the UE, we implemented two testbeds. In these setups, we fixed S_5GC as Open5GS and selected the USRP B210 as the preferred SDR. Additionally, we used PC #1 as the host PC executing both S_RAN and S_5GC. For the RAN software, we utilized srsRAN and OAI-RAN in the respective testbeds. Furthermore, we selected UE_2 as the testing UE. The testbeds were configured to operate on two frequency bands, n1 and n78, each with a 20 MHz bandwidth. Specifically, n1 operates at a central frequency of 2.1 GHz in FDD mode, while n78 operates at a central frequency of 3.5 GHz in TDD mode.
Table <ref> presents our results. * Observations: * Impact of the duplexing mode on data rate: When it comes to the achieved rates, srsRAN performs considerably better in FDD mode. These observations align with our previous results in <cit.>, where our testbed operated on n3, which is an FDD band at 1800 MHz. With OAI-RAN, we observe a noticeable improvement in the UL rate in FDD mode as well. * Impact of the duplexing mode on latency: Within the latency domain, there is not much difference between the E2E latency achieved by srsRAN in FDD or TDD mode. However, we notice that OAI-RAN's average latency almost doubles when operating in FDD mode. §.§ Methodology & Results for Computational Resource Consumption of the Open-source RAN Software §.§.§ Methodology There are two aspects to the analysis of resource consumption for the open-source software platforms: first, how each RAN software consumes the available computational resources, and second, how the choice of the host PC affects the performance achieved by the UEs. To answer the first question, we set up testbeds using {srsRAN, OAI-RAN} as their S_RAN, Open5GS as S_5GC, and PC #1 as the host PC for the software platforms. Note that PC #1 is our more powerful computer, with 16 CPU cores running at a base frequency of 3.5 GHz. We are specifically interested in determining the maximum resource consumption of each S_RAN in a full-buffer scenario in both UL and DL transmissions, for single- and multi-UE scenarios. This allows us to gain insight into the worst-case computational consumption scenario for each software platform. §.§.§ Observations on CPU utilization Fig. <ref> shows the CPU utilization as a function of the number of connected UEs. * Traffic direction effect, UL vs. DL: At first glance, we observe a trend with OAI-RAN: the DL traffic appears to be less CPU-hungry than the UL. This trend is reversed for srsRAN, i.e., the UL traffic seems to require fewer CPU cores. Another important observation is the significant difference in computational resource consumption between srsRAN and OAI in the DL direction. With a single UE, OAI consumes 73% of one CPU core, whereas srsRAN consumes 1.2 CPU cores, indicating a drastic gap in resource utilization (note that neither S_RAN utilized more than 2 cores of PC #1). * Effect of multiple UEs: There is a gradual increase in the resource consumption of both S_RAN as the number of connected UEs increases. §.§.§ Observations on memory utilization Our results indicate that the CPU is the primary bottleneck resource, while memory usage remains relatively consistent. In all scenarios, OAI-RAN consumed a maximum of 3% of the memory, whereas srsRAN never exceeded 10.2% of the memory.
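The utilization figures above were read from top; a script along the following lines could automate the same three-minute peak sampling. The use of psutil and the example process-name filters are our assumptions, not the authors' tooling.

import time
import psutil

def peak_usage(process_name: str, duration_s: int = 180, step_s: float = 1.0):
    # Track the peak CPU (in % of one core) and memory share of all processes
    # whose name contains `process_name` (e.g. "gnb" for srsRAN or
    # "nr-softmodem" for OAI-RAN; names are illustrative).
    procs = [p for p in psutil.process_iter(["name"])
             if process_name in (p.info["name"] or "")]
    for p in procs:
        p.cpu_percent(None)          # prime the per-process CPU counters
    peak_cpu = peak_mem = 0.0
    end = time.time() + duration_s
    while time.time() < end:
        time.sleep(step_s)
        peak_cpu = max(peak_cpu, sum(p.cpu_percent(None) for p in procs))
        peak_mem = max(peak_mem, sum(p.memory_percent() for p in procs))
    return peak_cpu, peak_mem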
§.§.§ Observations on the effect of the host PC For this round of tests, utilizing the less powerful PC, PC #2, we established two additional testbeds with the two open-source RAN platforms and UE_1. This setup aims to compare the data rate and latency results with those obtained previously using PC #1 as the host PC. Our primary interest lies in observing the performance of each S_RAN under varying CPU budgets, as highlighted in the previous section, where it was noted that not all CPU cores were fully utilized by any of the S_RAN platforms. Note that PC #1 operates at a base frequency of 3.5 GHz, while PC #2 runs at 2.9 GHz. The results are presented in Table <ref>. * Impact of the host PC on srsRAN: There is a significant difference in the achieved performance of srsRAN when changing the host PC. This effect is mainly evident in the DL rate and the E2E latency experienced by the UE. When srsRAN is running on PC #2, the DL rate drops to almost half of what is achievable when running srsRAN on PC #1 (from 92.1 Mbps to 46.9 Mbps). Additionally, the average latency increases by 37%. These results are consistent with the findings reported in <cit.>, where the authors report achieved DL and UL rates of 123 Mbps and 39 Mbps, respectively, using a host PC more powerful than our PC #1. * Impact of the host PC on OAI-RAN: OAI-RAN appears to be less sensitive to the host PC, as the change in achieved performance when switching the host PC is not as significant as it is with srsRAN. Although there is about a 9% increase in the DL rate when using the more powerful PC, the UL rate and average latency remain almost the same. §.§ Open Core Projects The authors of <cit.> have introduced a 5GC tester, called My5G Tester[<https://github.com/my5G/my5G-RANTester>], that helps to evaluate the performance and conformance of multiple 5GC software platforms, including OAI 5GC, Open5GS, and free5GC, in different scenarios. The main goal of this tester is to study the performance and conformance of the NGAP and NAS protocols of 5G. §.§ Open RIC Projects Notable open RAN Intelligent Controller (RIC) projects include ONOS, FlexRIC, and the OSC RIC. §.§ Open RAN Projects The work in <cit.> presents the information necessary to configure an OAI RAN according to specific frequency requirements; we highly recommend it for understanding the PHY configuration of OAI. Note that if a time synchronization package is needed, options include chrony, ntp, and openntpd. Also recall that whenever we refer to the latest version of srsRAN, a.k.a. srsRAN_Project, we simply write srsRAN. § CHALLENGES & CONCLUSION   In this paper, we presented one of the most comprehensive studies to date on the performance achieved by 5G open-source software and COTS hardware across 30 single-cell 5G-SA testbeds. We provided a precise nomenclature to characterize a 5G standalone testbed and a comprehensive set of metrics to assess performance. Our discussion of the performance of each testbed in both single- and multi-UE scenarios highlighted how the type and location of each connected UE impact performance. Additionally, we explored the interoperability of different UE types with various hardware and software elements of the RAN. Finally, we evaluated the computational resource consumption of each software platform in both single- and multi-UE scenarios. By defining three groups of locations, “good, fair, bad”, for the connected UEs, we first analyzed how each 5G open-source RAN platform performs given good UE positions. We then scattered the UEs across different locations and observed the performance achieved under adverse conditions. Our findings indicate that if srsRAN is executed on a powerful host PC, its performance can be superior given good UE positions. However, the tables turn when UEs at farther locations seek a connection. In these scenarios, srsRAN not only lacks the automatic power adjustment feature of OAI-RAN, resulting in lower data rates for UEs connected from distant locations, but also exhibits a discrepancy in the achieved performance based on the type of connected UE. In this regard, OAI-RAN, being UE-type agnostic and more robust, wins. By analyzing the coverage support of four different testbeds, we showed that the choice of S_RAN has more influence on coverage than the choice of the SDR device.
Our results also showed that OAI-RAN outperforms srsRAN in the E2E latency. One of the most critical aspects of this paper, as with any experimental study, is the clear definition of the test methodology. This ensures that other researchers can reproduce our testbeds and results using the elements we have selected and following our steps. §.§ Upcoming Challenges O-RAN, an industry-standard alliance and a significant player in the NG-RAN domain, has introduced a RAN architecture that extends the one proposed by 3GPP with additional elements and interfaces. In their proposed architecture, they introduce three elements to split the 5G protocol stack: the Radio Unit (RU), the Distributed Unit (DU), and the Central Unit (CU), each responsible for running part of the protocol stack. Currently, there is growing interest in the research community not only in building 5G-SA experimental testbeds but also in developing testbeds that comply with the O-RAN standard. At the time of this writing, very few O-RAN Radio Unit devices are available (e.g., the FlexFi O-RU from LITE-ON Technology and the Foxconn RPQN). The main challenge in deploying an O-RAN-compliant 5G-SA testbed is maintaining synchronization between the RU and DU, a process that requires precise timing and extensive communication. These challenges should be thoroughly addressed in future studies on O-RAN-compliant 5G-SA testbeds. § ACKNOWLEDGMENT The authors would like to thank the kind volunteers who helped us during our measurement campaigns. Maryam Amini received her B.Sc. and M.Sc. degrees from the Department of Computer Engineering at Iran University of Science and Technology in 2015 and 2017, respectively. Currently, she is pursuing her Ph.D. in the Department of Electrical and Computer Engineering at the University of Waterloo, Canada. Her research interests include Wireless Communications, Open RAN, and Experimental Testbeds. Catherine Rosenberg (Fellow, IEEE) is currently a Professor with the Department of Electrical and Computer Engineering, University of Waterloo, ON, Canada. She is also the Canada Research Chair in the Future Internet and the Cisco Research Chair in 5G Systems. Her research interests include networking and wireless. She is a Fellow of the Canadian Academy of Engineering. More information is available at <https://uwaterloo.ca/scholar/cath>
http://arxiv.org/abs/2407.03201v1
20240703153334
Wideband Coherent Microwave Conversion via Magnon Nonlinearity in Hybrid Quantum System
[ "Jiahao Wu", "Jiacheng Liu", "Zheyu Ren", "Man Yin Leung", "Wai Kuen Leung", "Kin On Ho", "Xiangrong Wang", "Qiming Shao", "Sen Yang" ]
quant-ph
[ "quant-ph", "cond-mat.mes-hall", "physics.app-ph" ]
§ INTRODUCTION Frequency conversion between optical photons has been extensively studied in nonlinear optics, yielding significant applications, including the fabrication of lasers covering the full spectral range <cit.> and the coupling of multiple quantum systems <cit.>. Much research focuses on achieving conversion between microwave and optical photons to facilitate long-distance quantum communication <cit.>. For the coupling of different solid-state qubit systems, such as superconducting qubits and spin qubits, a typical requirement is frequency matching <cit.>, which is not a common property of solid-state qubits <cit.>. Solid-state qubits typically resonate with microwaves, which can range from several GHz to hundreds of GHz depending on the type <cit.>. These characteristics of solid-state qubits pose a challenge for coupling different quantum systems and building hybrid quantum networks <cit.>. To couple solid-state qubits with different resonant frequencies, coherent conversion between microwave photons in the near-field region is especially crucial. Moreover, wideband frequency conversion is also significant for quantum sensing, which can be understood as coupling qubits with the environment. The main approach to microwave sensing involves measuring the spin relaxation time, T_1, of spin qubits <cit.>. This requires a large tunable bias field to shift the resonant frequency and align it with the detected signal <cit.>. Under magnetic fields of the Tesla scale, their resonance frequencies can shift by tens of GHz owing to the Zeeman effect <cit.>. Such intense tuning fields risk modifying the intrinsic dynamics of the samples under study, obscuring the phenomena of interest <cit.>. Moreover, the stringent requirement of an aligned magnetic field greatly hinders the miniaturization of quantum sensing. To solve these problems, we need a new method of microwave conversion that is available over a wide bandwidth to facilitate the coupling of different quantum systems. The conversion should be passive to minimize thermal noise from traditional active components. Additionally, it needs to be easily integrated on-chip with solid-state qubits to achieve the integration and miniaturization of hybrid quantum systems. Here, we explore a frequency conversion method in a spintronic device that is capable of fulfilling the above requirements. In the field of nonlinear optics, the nonlinear response to the electric field (P=χ^ ( 1 ) E+χ^ ( 2 ) E^2+⋯) is often the subject of considerable interest <cit.>. The nonlinear electric response is usually weak in nonlinear optical crystals, manifesting as second-order nonlinearities <cit.>. In contrast, its counterpart, the nonlinear response to the magnetic field (M=χ_M^ ( 1 ) H+χ_M^ ( 2 ) H^2+⋯), is relatively underexplored. Some research indicates that strong interaction can easily occur between magnons in ferromagnetic media and microwave photons <cit.>. Recently, Carmiggelt et al. observed nonlinear four-wave mixing based on ferromagnetic resonance (FMR) in a YIG film <cit.>. Koerner et al. observed nonlinear harmonic signals up to the 50th order, originating from switching effects in the magnetic film, beyond the FMR region in a NiFe thin film <cit.>. The presence of a higher-order nonlinear magnetic response provides more degrees of freedom for frequency conversion.
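As a purely numerical illustration of how a nonlinear magnetic response generates harmonics, one can truncate M(H) to a few orders and inspect the spectrum of the response to a single-tone drive. The susceptibility coefficients below are arbitrary toy values, not measured properties of any magnetic film.

import numpy as np

f_drive = 0.41e9                 # ~410 MHz drive tone
fs = 64e9                        # sampling rate, well above the harmonics of interest
t = np.arange(0, 2e-6, 1 / fs)
h = np.cos(2 * np.pi * f_drive * t)

# Toy nonlinear response M = chi1*H + chi2*H^2 + chi3*H^3 + ...
chi = [1.0, 0.3, 0.1, 0.03, 0.01]
m = sum(c * h ** (n + 1) for n, c in enumerate(chi))

spectrum = np.abs(np.fft.rfft(m))
freqs = np.fft.rfftfreq(len(m), 1 / fs)
peaks = freqs[spectrum > 1e-4 * spectrum.max()]
print(np.unique(np.round(peaks / f_drive)))   # components at 0, 1x, 2x, ..., 5x the drive

The spectrum contains components at integer multiples of the drive frequency, one multiple per retained order of the expansion; a stronger or higher-order nonlinearity correspondingly populates higher harmonics.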
We propose a coherent frequency conversion method with much wider bandwidth, based on the strong nonlinear response arising from symmetry breaking in domain walls rather than from magnon scattering in the FMR region. We demonstrate this method on a hybrid system integrating solid-state qubits with a spintronic device, taking nitrogen-vacancy (NV) centers in diamond <cit.> as an example of a solid-state qubit and a CoFeB thin film on a waveguide as an example of a spintronic device. First, the input microwaves generate corresponding magnons through the linear magnetic dipole interaction. Then, the strong nonlinear response of the richly textured magnetic film results in multi-wave mixing of magnons, and the converted magnons couple with the NV centers. We measure a wideband microwave frequency conversion spectrum spanning from 100 MHz to 12 GHz. This range is limited by our instrumentation and has the potential to reach tens of GHz. The spectrum shows that the spintronic microwave converter can flexibly combine two microwaves. Our experiments and simulations illustrate that the frequency conversion mechanism relies on the nonlinearity χ ^ ( 2,3,⋯ ), which originates from symmetry breaking in the magnetic domain walls of the magnetic film. We show that our hybrid system can couple environmental signals with solid-state qubits, realizing wideband microwave sensing under a fixed magnetic field. This application dramatically enhances the quantum sensing bandwidth of solid-state qubits, constituting a major advance toward the precise characterization and miniaturization of microwave quantum sensing applications. Furthermore, we achieve coherent quantum control of the solid-state spins by performing up-conversion. The pump microwave photons are detuned from the electron spin resonance (ESR) frequency by a few GHz. This process reveals that the converted magnons retain good coherence. It shows that the frequency conversion in the hybrid system can be utilized to couple spin qubit systems. Subsequently, we obtain a competitive conversion efficiency (5.9% for the third-order conversion) by analyzing Rabi frequencies. This solution not only addresses challenges in quantum information but also opens up a promising avenue for nonlinear spintronic devices. § RESULTS §.§ Hybrid system Our hybrid system integrates NV centers with a 15 nm CoFeB thin film deposited on a coplanar waveguide (CPW) in two configurations: nanodiamonds (ND) containing NV centers randomly dispersed on the CPW surface, and a bulk diamond with implanted nitrogen-vacancy centers placed close to the CPW. Both coupling schemes yield similar results, illustrating the general applicability of the hybrid system to various solid-state spin systems. The NV centers are situated close to the magnetic film, serving as sensors of the stray fields generated by the spin waves (magnons). The spin waves in the magnetic thin film are excited by microwaves propagating through the gold waveguide beneath the film. Additionally, the applied static magnetic field is oriented along the direction of microwave propagation, resulting in a perpendicular alignment between the microwave field and the static magnetic field. The configuration of our hybrid system is depicted in figure <ref>a. 
§.§ Strong nonlinear effects and the accompanying frequency conversion in magnetic films Our first experiment focuses on detecting the high-order harmonic frequencies of microwaves induced by strong nonlinear responses in the CoFeB magnetic thin film, using optically detected magnetic resonance (ODMR) <cit.>. The detection principle is that resonant microwave fields can drive ESR transitions (f_ESR∼ 2.87 GHz @ 0mT) in the nitrogen-vacancy centers, leading to reductions in their photoluminescence (PL) intensity (see figure <ref> b&c). When sweeping the microwave frequency, resonant microwaves elicit a clear decrease in PL intensity, while non-resonant microwaves do not alter it. To analyze the conditions facilitating magnetic responses within the ferromagnetic layer, we adjust the static bias field magnitude B_bias and the pump microwave frequency f_pump, and display the PL intensity as a function of both in figure <ref> b. We initially saturate the magnetization of the magnetic thin film by applying a +10 mT field; the field is then swept toward the negative direction. Normally, only microwaves satisfying the resonance conditions affect the PL intensity, and we should observe a set of slowly splitting peaks starting from f_ESR. However, in this experiment we find that, unusually, when the driving microwave's harmonic frequencies (n· f_pump) align with the resonance conditions, a noticeable decrease in PL counts can also be observed. Specifically, whenever we apply a driving microwave with a frequency of f_mw = f_ESR/7 ∼ 410 MHz to the waveguide, the NV center detects a resonant signal at f_ESR∼ 2870 MHz, corresponding to the 7^th harmonic of the driving microwave. This implies that the magnetic texture in the spintronic device serves as a frequency multiplier, converting a non-resonant microwave signal to a harmonic signal resonant with the NV center. All microwave components utilized in our experiment have undergone meticulous filtering procedures to ensure that the observed results stem solely from the nonlinear effects of the magnetic film, while eliminating any potential nonlinearities originating from the microwave devices. Given the NV center's role as a near-field signal sensor, we have conducted a comprehensive analysis by comparing not only the results obtained from a coplanar waveguide (CPW) with and without a coated ferromagnetic (FM) thin film but also NV centers positioned above the waveguide and in the gap region. These comparative experiments show that only the NV centers on the CPW coated with an FM thin film can detect multiple harmonic frequencies, confirming that this multiple-harmonic phenomenon originates from the CoFeB magnetic thin film. We observe harmonics up to at least the 25^th order on CoFeB samples, while previous work on soft magnetic NiFe thin films has observed up to the 50^th harmonic <cit.>. This implies that the spin-wave harmonic effect is a common feature of various ferromagnetic thin films. The microwave photons emitted from the CPW stimulate dynamic magnetization oscillations in the interfaced ferromagnetic layer. From the perspective of nonlinear optics, the domain walls and the edges/interfaces of the magnetic film represent symmetry breaking, resulting in a nonlinear susceptibility and a nonlinear magnetic response. 
In magnetic systems, the nonlinear magnetic response can be expressed in a universal formulation: 𝐌(t) = χ ^ ( 1 ) 𝐇(t)+ χ ^ ( 2 ) 𝐇^2(t) + χ ^ ( 3 ) 𝐇^3(t)+⋯ These nonlinear magnetic susceptibilities, such as χ ^ ( 2 ) and χ ^ ( 3 ), arise from symmetry breaking at domain boundaries. The intensity of the nonlinear signal depends on the value of the nonlinear coefficients. We infer that an increase in domain-wall length leads to larger symmetry-breaking regions, enhancing the nonlinear signal; a narrower domain wall should likewise enhance the nonlinear response. We verified this hypothesis through micromagnetic simulations (see M4 in the supplementary information). The lower panels of Fig. <ref>c show the spatial distribution of the longitudinal magnetization M_z at the 10^th harmonic (2870 MHz) under 287 MHz microwave excitation. Each sub-figure corresponds to a different static structure, where “one-step” is a single domain wall, “two-step” is a double domain wall, “low density multi-step” is a lower-density multi-domain state, and “high density multi-step” is a higher-density multi-domain state. Although there is a slight spatial non-uniformity in M_z across each model area, the spatial average of M_z for each model, as shown in the upper panel of Figure 2c, clearly indicates a positive correlation between the length of domain walls and the intensity of the harmonic response. We also altered the domain-wall width by adjusting the simulation parameters and observed that a narrower domain wall corresponds to a stronger nonlinear effect (see Fig. S9 in the supplementary information). Longer and narrower domain walls lead to larger nonlinear terms χ ^ ( 2 ), χ ^ ( 3 ), ⋯, in agreement with our phenomenological theoretical analysis. Experimental manipulation allows the strength of the nonlinearities to be controlled: we observe a magnetic-field dependence of the microwave harmonic orders during the experiment, see Figure <ref> b. We find that the nonlinear harmonic signals are most pronounced under magnetic fields from -0.5 to -2.5 mT, revealing that the intensity of the nonlinear effects is related to the magnitude of the bias field. To clarify what happens in this bias-field range, we use magneto-optic Kerr effect (MOKE) microscopy to map the magnetic texture of the ferromagnetic thin film, finding that the domain walls are zigzag-shaped and abundant within this region. This observation aligns with our simulation results. We provide a qualitative theory showing that, in regions with abundant domain walls, the nonlinear responses decay slowly with order (see M5 in the Supplementary Information). The basic reason is that the spin waves have a small or vanishing gap along domain walls; therefore, the dynamic field can always resonate with the domain walls <cit.>. Furthermore, nonlinear harmonic signals were observed in the saturated magnetization region (bias field = 9 mT), albeit with diminished intensity, as shown in Figure <ref> a. At this point, the magnetic film is theoretically fully magnetized by the bias field, and the domain walls disappear accordingly. Our results demonstrate the existence of nonlinear sources beyond domain walls, but further experiments are needed to confirm the specific roles of interfaces and edges and to expand the operational range of nonlinear devices <cit.>. 
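For illustration, the harmonic-generation mechanism implied by the expansion above can be reproduced with a minimal numerical sketch (not the measurement or micromagnetic code used in this work; the susceptibility values are arbitrary placeholders): a truncated nonlinear magnetization driven at f_pump develops spectral weight at integer multiples n · f_pump, which is what the ODMR measurement picks up whenever n · f_pump = f_ESR. A third-order truncation only produces harmonics up to n = 3; the far higher orders observed experimentally reflect the strongly nonlinear domain-wall dynamics.

```python
import numpy as np

# Illustrative sketch: a lumped nonlinear magnetization
# M(t) = chi1*H + chi2*H^2 + chi3*H^3 driven at f_pump produces spectral
# components at n*f_pump. Susceptibilities below are placeholders.
f_pump = 287e6                 # Hz, pump frequency used in the simulation example
fs = 64 * 2.87e9               # sampling rate, well above the harmonics of interest
t = np.arange(0, 2000 / f_pump, 1 / fs)   # many pump periods
H = np.cos(2 * np.pi * f_pump * t)        # normalized drive field

chi1, chi2, chi3 = 1.0, 0.3, 0.1          # placeholder nonlinear coefficients
M = chi1 * H + chi2 * H**2 + chi3 * H**3  # truncated nonlinear response

spectrum = np.abs(np.fft.rfft(M)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)

for n in range(1, 6):
    idx = np.argmin(np.abs(freqs - n * f_pump))
    # harmonics above n = 3 stay near zero for this third-order truncation
    print(f"harmonic n={n} ({n * f_pump / 1e6:.0f} MHz): amplitude {spectrum[idx]:.3e}")
```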
The observed magnetic nonlinear response, drawing parallels with the principles of nonlinear optics, is anticipated to generate multi-wave mixing phenomena, extending beyond the production of mere harmonics <cit.>. We consider that the microwave field exciting the second-order nonlinearity consists of two different frequency components, which we denote as: 𝐇(t) =H_1 e^-iω_1 t + H_2 e^-iω_2 t+ c.c. The second-order contribution to the nonlinear magnetization is: 𝐌^ ( 2 ) (t) = χ ^ ( 2 ) 𝐇^2(t) =χ ^ ( 2 ) [ H_1^2e^-2iω_1 t + H_2^2e^-2iω_2 t+ 2 H_1 H_2 e^-i ( ω_1 + ω_2 ) t + 2 H_1 H^*_2 e^-i ( ω_1 - ω_2 ) t + c.c. ] +2χ ^ ( 2 ) ( H_1 H_1^* + H_2 H_2^* ) The equation shows that χ ^ ( 2 ) contributes to three-wave mixing, while χ ^ ( 3 ) and other higher-order nonlinearities should contribute to more complex four-wave and multi-wave mixing <cit.>. The second experiment demonstrates our implementation of multi-wave mixing using two different microwave sources through our hybrid system. We keep the bias field at 1.5 mT and concurrently apply two microwave sources, f_1 and f_2, to the hybrid system, sweeping the frequencies of both sources. Only when the sum or difference frequencies of the two sources, or of their harmonics, align with the ESR frequency is a remarkable decrease in PL intensity observed. More precisely, the NV centers can detect the mixing signals whenever the two microwave signals satisfy the following equation: a · f_1 + b · f_2 = f_± ESR where a and b are integers. In figure <ref> a, we measure a two-dimensional spectrum of the nonlinear spin-wave response; the indicated notation denotes (a,b) in equation <ref>. For this system, the two microwaves are interchangeable; hence, the measurement points are symmetric about the line f_1=f_2. The measured spectrum demonstrates a rich set of frequency conversion paths, enabling a wide range of up- and down-conversion by flexibly combining harmonic generation and sum/difference frequency generation of different orders. In figure <ref> b, we present the theoretical frequency spectrum of the hybrid system, together with some specific experimental data points, demonstrating that our frequency conversion can span a range from 100 MHz to 12 GHz. The capabilities of our experimental instrumentation constrain the demonstrated frequency range. In principle, a wider bandwidth can be achieved. We even observed 6^th-order mixing (3 · f_1 - 3 · f_2 = f_ESR) between the third harmonic of f_1 and the third harmonic of f_2, which is rare in other nonlinear systems and highlights the advantage of frequency conversion in spintronic systems. §.§ Wideband microwave sensing by frequency conversion Quantum sensing has demonstrated important applications such as nanoscale scanning magnetometers<cit.>, sensing under high pressure <cit.>, and nanoscale nuclear magnetic resonance (NMR)<cit.>. Within the frequency range of 0-10 MHz, full-range detection has been achieved through pulsed control methods, eliminating the necessity for tuning ESR frequencies<cit.>. To measure weak alternating current (AC) signals above 10 MHz, longitudinal spin relaxation (T_1) sensing is typically employed, as it exhibits greater sensitivity to high-frequency signals. The conventional microwave sensing method involves tuning the bias magnetic field to alter the ESR frequency<cit.>. The signal can be detected only when the ESR frequency aligns with it; therefore, the detection bandwidth depends on the tuning range of the bias field. 
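In our scheme, the mixing condition a · f_1 + b · f_2 = f_± ESR doubles as a frequency-planning rule for such measurements: for a given pump and target, one simply enumerates the low-order integer pairs (a,b) that close the equation. A minimal sketch (illustrative only; the frequencies, order cutoff, and tolerance are arbitrary choices, not experimental settings):

```python
# Illustrative helper: enumerate low-order mixing paths a*f1 + b*f2 = f_ESR,
# i.e. the condition underlying the two-dimensional mixing spectrum.
# Frequencies in GHz; order cutoff and tolerance are arbitrary example values.
F_ESR = 2.87            # NV zero-field ESR frequency (GHz)
f1, f2 = 7.13, 10.0     # example pump and target (cf. the down-conversion case below)
MAX_ORDER = 3
TOL = 1e-6

paths = []
for a in range(-MAX_ORDER, MAX_ORDER + 1):
    for b in range(-MAX_ORDER, MAX_ORDER + 1):
        if (a, b) == (0, 0):
            continue
        if abs(a * f1 + b * f2 - F_ESR) < TOL:
            paths.append((a, b))

print(paths)   # [(-1, 1)] -> f2 - f1 = f_ESR, first-order down-conversion
```

In the same way, higher-order pairs such as (3, -3) correspond to the sixth-order mixing path mentioned above.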
We use a weak microwave to simulate target signals coming from the environment, showing that our hybrid system allows the target frequency for T_1 relaxometry to be varied without a tunable magnetic field (see Fig. S2 in the supplementary information). We eliminate the need for complex externally applied bias magnetic fields through this hybrid system. By using sum- and difference-frequency generation, we convert the target signals into microwaves resonating with the solid-state qubits, demonstrating wideband microwave sensing under a fixed magnetic field. We perform the up-conversion microwave sensing in Figure <ref> a. We apply a continuous microwave f_2 = 0.4 GHz to simulate an environmental target signal. Then we apply a tunable pump microwave source f_1 to drive the nonlinear response of the magnetic device. With the magnetic sum-frequency generation, we can detect the resonant peaks of the pump microwave and derive the target signal using the up-conversion protocol f_1 + f_2 = f_ESR with the NV center. An even more noteworthy direction is microwave sensing conducted by down-conversion. The detection of a high-frequency signal requires not only a strong tunable field but also a high-frequency pump source<cit.>. Due to the multi-wave mixing properties of magnetic devices, we can achieve a variety of conversion protocols to further reduce the requirements for high-frequency signal detection. To detect an f_2 = 10.0 GHz signal through first-order down-conversion, we can use a scanning microwave source around f_1 = f_2 - f_ESR = 7.13 GHz to read out the target signal. We can also realize second-order or higher-order down-conversion to further relax the requirements on the pump source. Using the second-order down-conversion protocol f_2 = 2 · f_1 ∓ f_± ESR, we can use more common, general-purpose microwave sources to accomplish the same task. It is noteworthy that this approach also eliminates the need for a tunable 0.25 T bias field, which is typically required in traditional detection methods. As our hybrid system can perform strong multi-wave mixing, every target signal f_2 corresponds to a spectral fingerprint, which is related to the constant f_ESR and the resonant peaks f_1± in the pump-frequency sweep measurement. So far, we have successfully illustrated a methodology that allows spin qubits to interact with other systems or environmental signals by frequency conversion. The crux of this approach lies in the utilization of nonlinear magnon mixing for microwave frequency conversion. This, in turn, significantly broadens the detection spectrum of spin qubits, thereby enhancing the bandwidth of quantum sensing. What makes this method particularly intriguing is that the expansion of bandwidth is not reliant on an adjustable external magnetic field, which further enhances its applicability in various environments. Keeping the magnetic field fixed ensures consistent and reliable sensing, allowing accurate characterization of the amplitude and frequency of the detected microwave signal. The bandwidth of frequency conversion is contingent on the frequency of the magnons that can be excited in the material, with the upper limit potentially reaching tens of GHz. §.§ Quantum coherent manipulation by frequency conversion To achieve coupling between different solid-state qubit systems, we further substantiate that microwave photons, after frequency conversion, retain the coherence of the source and are capable of executing quantum coherent control or coherently coupling quantum systems. 
This suggests that the microwave photons, derived from the conversion of nonlinear magnons, exhibit a high degree of coherence. Here we conduct a Rabi oscillation measurement by applying a microwave source at f_ESR/3, as shown in figure <ref>a. The Rabi oscillation frequency driven by the harmonic microwave exhibits a linear relationship with the amplitude of the driving microwave, see figure <ref>b. This indicates that, within this power range, the amplitude of the converted harmonic signal also scales linearly with the amplitude of the driving microwave. Our experimental results appear to deviate from our general estimation, as we expected the third-order harmonic signal to grow as the cube of the driving microwave amplitude, as described by equation <ref>. The observed linear growth indicates that the efficiency of our third-order harmonic conversion remains nearly constant. There are two possible reasons for this: first, a significant portion of the energy may have been converted into thermal magnons and dissipated; second, the system may be approaching a saturated conversion efficiency, similar to the saturation effect in nonlinear optics. However, further analysis is required to determine the factors limiting this upper bound. We further compare the Rabi oscillations driven by microwaves at f_ESR, f_ESR/2 and f_ESR/3 under identical power conditions, as depicted in figure <ref>c. From this, we obtain the distribution of harmonic microwaves produced by the magnetic thin film after frequency conversion under a 20 dBm microwave pump. Taking the remaining energy of the microwave at the pump frequency as 1, the energy ratio of the converted second-order harmonic microwave is 9.1%, and that of the third-order harmonic microwave is 5.9%. These results suggest that the frequency conversion efficiency gradually decreases with increasing harmonic order, which is consistent with our simulation results. The Rabi frequency precisely reflects the amplitude of the microwave field, providing a quantitative tool for analyzing the nonlinear spin-wave generation effect. The coherence is present not only in harmonic signals but also in frequency-mixing signals. By utilizing microwave sources with different frequencies, the frequency-mixing signals can also drive Rabi oscillations (see Fig. S1). The realization of quantum coherent manipulation through frequency conversion demonstrates that the converted microwave exhibits strong phase preservation. Beyond enabling ultra-wideband signal detection, this hybrid system presents a viable pathway to enacting quantum coherent manipulations at non-resonant microwave frequencies, demonstrating the potential for coupling different solid-state qubit systems. For example, two long-lived quantum storage systems with different resonant frequencies can be connected to the hybrid system, which is critical for quantum network applications. § DISCUSSION The nonlinear effects exhibited in the magnetic films we study are manifested in the strong high-order nonlinear susceptibilities (χ ^ ( 2 ), χ ^ ( 3 ), ⋯) in equation <ref>. These nonlinear coefficients are related to the symmetry of magnetism, such as the second-order nonlinearity resulting from time-reversal symmetry breaking <cit.>. Compared to nonlinear crystals, our system can be very compact and integrated into various systems. 
This near-field frequency conversion also allows us to avoid the issue of phase matching, as the scale of our system is much smaller than the microwave wavelength. We realize the coupling of spin qubits with the classical systems or environment, which is regarded as quantum sensing. To optimize the detection sensitivity of the solid-state quantum spin sensors, we can utilize weak microwave signal sensing protocols developed in the NV center, such as heterodyne detection scheme<cit.>, to improve the sensitivity. We can even convert the target signal to the range of 0-10MHz, and use more precise quantum sensing tools such as Ramsey interference<cit.> and dynamic decoupling<cit.> to optimize measurement sensitivity. Furthermore, the microwave photons after frequency conversion exhibit surprisingly good coherence. This is sufficient for us to achieve quantum coherent control of various solid-state qubits using a non-resonant microwave source, and even realize the on-chip coupling between different quantum systems. For example, through our frequency conversion, the microwave photons emitted from the silicon vacancy(V_Si) in Silicon Carbide<cit.> can be transformed into microwave photons resonant with NV centers in diamond, thereby realizing the coupling between spin bits with different resonant frequencies. Due to the fact that the nonlinear response is not solely coming from the symmetry breaking in domain walls, we can effectively engineer the nonlinear effects by meticulously modulating their magnetic texture and physical shapes to alter the nonlinear response in edges or interfaces of films<cit.>. For instance, based on simulation results (see Figure S8), we can modify the nonlinearity by changing the proportion of the material's magnetic parameters, such as the exchange stiffness coefficient A_ex and uniaxial magnetic anisotropy K_u. Furthermore, fabricating the devices with serrated edges and the utilization of thinner magnetic films, such as atomically thin van der Waals (vdW) magnetic materials <cit.>, are expected to significantly enhance the nonlinear response of this hybrid system. Moreover, benefiting from the low power consumption and facile integration of spintronic devices, this hybrid system holds promise for integration with bio-sensing and cryogenic systems. These robust frequency conversions will significantly stimulate research on nonlinear spintronic devices, such as magnon IQ mixers and magnon frequency multipliers, opening up new avenues for the development of alternatives to traditional semiconductor devices. § METHODS The Au(100nm)/CoFeB(15nm)/TaO_x(2nm) microscale coplanar waveguide (CPW), with 50 μ m width microstrip and 20 μ m width gap, was fabricated on a SiO_2/Si substrate utilizing magnetron sputtering technology, with TaO_x serving as a capping layer to inhibit oxidation. The CPW is connected to the circuit board through wire bonding, allowing for microwave propagation within the gold layer. §.§ Micromagnetic simulation The simulations were performed using the Mumax3 software. The magnetic parameters adopted in our model align with the physical properties of realistic materials. During each simulation, the magnetization was stimulated by an external radio frequency (rf) field with a peak amplitude of 800 μ T and a frequency of 287 MHz, oriented in the y-direction. Concurrently, a static bias field of 1.0 mT was applied in the x-direction. 
This configuration of the bias and rf fields was specifically chosen to replicate the orientation used in the actual experimental setup. To analyze the simulation results, a Fast Fourier Transform (FFT) was employed to extract the magnetization component M_z at 2870 MHz from the simulated time-evolution magnetization data. We then visualized the amplitude distribution of this component across the simulation area, as illustrated in Figure 2c. Further details on the simulation parameters and settings can be found in the supplementary information. §.§ Experimental setup In our study, we use NV centers in bulk diamond and in nanodiamonds as sensors. The diamond is implanted with 9.8 keV ^15 N ions at a dose of 2· 10^12 N/cm^2, resulting in an NV center concentration of around 200 ppm at a depth of around 10 nm. The maximum distance between the NV centers and the CoFeB is around 5 μm, as estimated from confocal imaging; it is constrained by particulate contamination (e.g., dust particles) at the interface between the diamond and CoFeB surfaces. Figures <ref>, <ref>, and <ref> are measured with NV centers in bulk diamond, and figure <ref> is measured with NV centers in nanodiamonds. The detection of the stray field at the ESR frequency was achieved by measuring the spin-dependent PL intensity under green laser excitation (520 nm), as depicted in Fig. <ref>a. The PL signal was collected by a confocal microscopy system and detected by an avalanche photodetector (Excelitas SPCM-AQRH-10-FC). Due to the in-plane magnetic anisotropy (IMA) of the magnetic film, the bias field is applied parallel to the thin film; the current of the Helmholtz coil is controlled by a sourcemeter (Keithley 2400). § DATA AVAILABILITY The data that support the findings of this work are provided within the main text and Supplementary Information. Data related to the work can also be made available from the corresponding author upon request. Preliminary results from this study were reported in the conference proceedings of the 2023 IEEE International Electron Devices Meeting (IEDM)<cit.>. § ACKNOWLEDGEMENTS J. Wu, M. Leung, W. Leung, K. Ho, and S. Yang were supported by RGC-AOE (AoE/P-701/20) and RGC-GRF (Grant No.16305422). J. Liu, Z. Ren, and Q. Shao were supported by the National Key R&D Program of China (Grants No.2021YFA1401500), RGC-GRF (Grant No.16303322), and the Research Fund of Guangdong-Hong Kong-Macao Joint Laboratory for Intelligent Micro-Nano Optoelectronic Technology (Grant No. 2020B1212030010). § AUTHOR CONTRIBUTIONS Q. S. and S. Y. conceived the experiment. J. L., Z. R., and J. W. designed the sample. J. L. and Z. R. fabricated the sample. J. W. and J. L. built the measurement setup and performed the measurements with the help of M. L., W. L., and K. H. J. W. and S. Y. analyzed the experimental data. X. W., S. Y., and Q. S. developed the theoretical framework. J. L. performed the simulations in discussion with Z. R. and J. W. J. W. and S. Y. wrote the article with help from all co-authors. § COMPETING INTERESTS The authors declare no competing interest.
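Referring back to the micromagnetic post-processing described in the Methods, the harmonic-amplitude maps are obtained by Fourier-transforming the simulated time traces and selecting the 2870 MHz component. A minimal sketch of this step (illustrative only; the sampling interval and the synthetic trace below are placeholders standing in for Mumax3 output, not the actual simulation data):

```python
import numpy as np

# Sketch of the FFT post-processing: pick out the amplitude of M_z at 2870 MHz
# per simulation cell. The synthetic trace is a placeholder for Mumax3 output.
f_target = 2870e6
dt = 1e-11                        # assumed output interval (placeholder)
n_steps = 2 ** 16
t = np.arange(n_steps) * dt

n_cells = 4                       # placeholder "cells" of the simulation area
rng = np.random.default_rng(0)
mz = 0.01 * np.cos(2 * np.pi * f_target * t)[None, :] \
     + 1e-3 * rng.standard_normal((n_cells, n_steps))

spec = np.fft.rfft(mz, axis=1) / n_steps
freqs = np.fft.rfftfreq(n_steps, dt)
idx = np.argmin(np.abs(freqs - f_target))
amplitude_map = np.abs(spec[:, idx])      # per-cell amplitude near 2870 MHz
print(freqs[idx] / 1e6, amplitude_map)
```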
http://arxiv.org/abs/2407.02900v1
20240703082027
Self-supervised Vision Transformer are Scalable Generative Models for Domain Generalization
[ "Sebastian Doerrich", "Francesco Di Salvo", "Christian Ledig" ]
eess.IV
[ "eess.IV", "cs.CV", "cs.LG" ]
Self-supervised Vision Transformer are Scalable Generative Models S. Doerrich et al. xAILab Bamberg, University of Bamberg, Germany sebastian.doerrich@uni-bamberg.de Self-supervised Vision Transformer are Scalable Generative Models for Domain Generalization Sebastian Doerrich Francesco Di Salvo Christian Ledig =========================================================================================== § ABSTRACT Despite notable advancements, the integration of deep learning (DL) techniques into impactful clinical applications, particularly in the realm of digital histopathology, has been hindered by challenges associated with achieving robust generalization across diverse imaging domains and characteristics. Traditional mitigation strategies in this field such as data augmentation and stain color normalization have proven insufficient in addressing this limitation, necessitating the exploration of alternative methodologies. To this end, we propose a novel generative method for domain generalization in histopathology images. Our method employs a generative, self-supervised Vision Transformer to dynamically extract characteristics of image patches and seamlessly infuse them into the original images, thereby creating novel, synthetic images with diverse attributes. By enriching the dataset with such synthesized images, we aim to enhance its holistic nature, facilitating improved generalization of DL models to unseen domains. Extensive experiments conducted on two distinct histopathology datasets demonstrate the effectiveness of our proposed approach, outperforming the state of the art substantially, on the Camelyon17-wilds challenge dataset (+2%) and on a second epithelium-stroma dataset (+26%). Furthermore, we emphasize our method's ability to readily scale with increasingly available unlabeled data samples and more complex, higher parametric architectures. Source code is available at https://github.com/sdoerrich97/vits-are-generative-modelsgithub.com/sdoerrich97/vits-are-generative-models . § INTRODUCTION Deep learning (DL) has had a significant impact on a broad range of domains ranging from image classification to natural language processing <cit.>. Nevertheless, its incorporation into routinely used medical image analysis has progressed comparatively slow <cit.>, mainly due to difficulties in achieving robust generalization across diverse imaging domains. This challenge is particularly pronounced in digital histopathology, where variations in coloring agents and staining protocols for histological specimens exacerbate domain disparity <cit.>. Traditional approaches to address these generalizability challenges in digital histopathology typically involve data augmentation or stain color normalization <cit.>. Data augmentation techniques manipulate aspects of color <cit.>, apply stain-specific channel-wise augmentation <cit.>, or incorporate stain colors of unseen domains into the training data <cit.>. Alternatively, stain color normalization aligns images' color patterns using target domain information <cit.>. However, these methods often require access to target samples during training or struggle with adapting to new domains and unseen stain colors. To overcome these limitations, Lafarge et al. <cit.> investigate the use of Domain Adversarial Neural Networks (DANNs) to enhance cross-domain performance. Conversely, Nguyen et al. <cit.> propose ContriMix, which aims to improve domain generalization by augmenting the diversity of the source domain with synthetic images. 
This is achieved by initially separating biological content from technical variations and subsequently combining them to form new anatomy-characteristic combinations. However, ContriMix's dependence on convolutional encoders restricts the diversity of its synthetic images, as it allows for the extraction of only a single characteristic tensor per image. In this work, we focus on these limitations and present a novel generative domain generalization (DG) method for histopathology images. Employing a self-supervised Vision Transformer (ViT), we generate synthetic images with diverse combinations of anatomy and image characteristics, enriching the holistic nature of the dataset without requiring any domain information. This allows DL models trained on the extended dataset to adapt to unseen domains more effectively. To demonstrate this, we evaluate our method in extensive experiments against the current state of the art on two distinct benchmark datasets for domain generalization in histopathology. Our main contributions are: * We present a novel self-supervised generative domain generalization method for histopathology. * We generate synthetic images with unseen combinations of anatomy and image characteristics. * We extensively evaluate our method on two histopathology benchmark datasets and outperform the state of the art by a large margin. * We assess our method's ability to scale effectively with growing availability of unlabeled data samples and the adoption of deeper architectures. § METHOD Our method is a self-supervised generative approach that employs feature orthogonalization to generate synthetic images. Using a single ViT encoder (E), we encode an image patch-wise and split the resulting embeddings, with one half preserving anatomy and the other half storing characteristic features for each patch. These feature vectors are then mixed across different input images and fed into an image synthesizer (IS) to create synthetic images representing new anatomy-characteristic pairs. See  <ref> for an illustration of this process. §.§ Feature Orthogonalization and Image Synthesis Taking inspiration from ViT principles <cit.>, we first partition images x_i with x_i ∈R^C × H × W, where C, H, and W are the number of channels, height, and width of the image, respectively, into non-overlapping patches. This results in x̃_i ∈R^P × C × PS × PS, where P denotes the number of patches and PS the patch size. These patches are processed by the encoder E to extract feature embeddings z_i for each image. Given z_i = E(x̃_i) ∈R^P × L, where L denotes the encoder's latent dimension, we extract the anatomical (z_i^a∈R^P × L / 2) and characteristic (z_i^c∈R^P × L / 2) feature vectors by splitting z_i along L. To reconstruct the original images x_i, the image synthesizer IS reshapes the feature vectors into matrices Z_i^a∈R^P × C × PS × V and Z_i^c∈R^P × C × V × PS, where V is the hidden dimension, before applying the dot-product of both feature matrices along V to produce the reconstructions x̂_i. x̂_i = IS(z_i^a, z_i^c) = Z_i^a· Z_i^c, with x̂_i ∈R^P × C × PS × PS⟷R^C × H × W Conversely, to generate synthetic images s_i with diverse anatomy-characteristic combinations, we combine the anatomical feature embeddings z_i^a of each sample x_i in batch b with M characteristic feature embeddings. These are each extracted from a single patch of another sample x_m within the same batch (m ∈ 1, …, M). This patch, and thereby its corresponding characteristic embedding z_m,p^c, is chosen uniformly at random from each sample x_m. 
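A rough, self-contained sketch of the encode-split-synthesize step described above may help fix the tensor bookkeeping (the toy shapes, the random stand-in for the encoder output, and the donor-patch selection are illustrative assumptions, not the authors' implementation):

```python
import torch

# Toy sketch of the split + image-synthesizer (IS) step.
# B: batch, P: patches, L: latent dim, C: channels, PS: patch size,
# V chosen so that L/2 = C * PS * V and the reshapes below are valid.
B, P, L, C, PS = 2, 196, 768, 3, 16
V = L // (2 * C * PS)
z = torch.randn(B, P, L)                    # stand-in for E(x_patches)

z_a, z_c = z[..., : L // 2], z[..., L // 2:]   # anatomy / characteristic halves

def synthesize(z_a, z_c):
    """IS: reshape both halves and contract over V to get P patches of C x PS x PS."""
    Z_a = z_a.reshape(B, P, C, PS, V)
    Z_c = z_c.reshape(B, P, C, V, PS)
    return torch.matmul(Z_a, Z_c)           # (B, P, C, PS, PS)

x_rec = synthesize(z_a, z_c)                # self-reconstruction

# Mixing: give every sample the characteristic of one random patch of another sample.
m = torch.randint(0, B, (B,))               # donor sample per target (toy choice)
p = torch.randint(0, P, (B,))               # random donor patch
z_c_mix = z_c[m, p].unsqueeze(1).expand(-1, P, -1)   # broadcast one patch's style
s = synthesize(z_a, z_c_mix)                # same anatomy, swapped characteristics
print(x_rec.shape, s.shape)
```

The same synthesize routine serves both self-reconstruction and mixing; only the characteristic half that is fed in changes.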
Note that we do not use the entire z_m^c since using the characteristics of a single patch yields substantially more diverse synthetic images. These combinations (z_i^a, z_m,p^c) are then passed through IS to create the synthetic images s_i, preserving the original anatomy but with severely altered characteristics. This process enables the extraction of fine-grained characteristics, resulting in a diverse range of synthetic images s_i. §.§ Feature Consistency and Self-Reconstruction To guide the feature orthogonalization and synthetic image generation, we employ three distinct mean squared error (MSE) loss terms, namely anatomical consistency L_C^a, characteristic consistency L_C^c and self-reconstruction L_R. The anatomical consistency L_C^a for batch b with N training samples and M number of anatomy-characteristic mixes: L_C^a = 1/NM∑_i = 1^N∑_m = 1^M|| z_i^a - z_s^a||_2^2 with z_i^a = E(x_i)^P × [1 : L/2] and z_s^a = E(IS(z_i^a, z_m, p^c))^P × [1 : L/2] where z_m, p^c being the characteristic embedding of a randomly chosen patch p of sample x_m, promotes consistency between the anatomy extracted from the original images x_i and the corresponding synthetic images s_i. In addition, the characteristic consistency L_C^c for batch b with N training samples and M number of anatomy-characteristic mixes: L_C^c = 1/NMP∑_i = 1^N∑_m = 1^M∑_q = 1^P|| z_m,p^c - z_s,q^c||_2^2 with z_s,q^c = E(IS(z_i^a, z_m, p^c)) at patch q ∈ P and z_s,q^c∈R^1 × L / 2 aligns the characteristics of the synthetic images s_i with the characteristic z_m, p^c used to create these synthetic images. Lastly, the self-reconstruction loss L_R: L_R = 1/N|| x_i - IS(z_i^a, z_i^c) ||_2^2 aims to ensure that the self-reconstructed images closely resemble the original ones. Thereby, the combined loss across a set of mini-batches with b ∈ 1, …, B can be written as: L = 1/B∑_b = 1^Bλ_a L_C^a + λ_c L_C^c + λ_r L_R with λ_a, λ_c, λ_r being weights to adjust the influence of each loss during training. §.§ Training The encoder is trained independently for each dataset adhering to the objective described above. This fully self-supervised approach allows us to incorporate labeled or unlabeled samples for the anatomical area of interest and facilitates dynamic transfer to additional tasks without retraining. For the ViT encoder E, we opt for the ViT-B/16 backbone, which operates on 224 × 224 pixel images, splitting them into 16 × 16 pixel patches and encoding each patch into a 768-dimensional vector. Following <cit.>, we use 4 mixes (number of combinations M of anatomy and characteristics to get synthetic images) per batch. We set λ_a = λ_c = λ_r = 1 and train the encoder for 50 epochs with a batch size of 64, utilizing the AdamW optimizer <cit.> with a learning rate of 0.001, and a cosine annealing learning rate scheduler <cit.> with a single cycle. § EXPERIMENTS AND RESULTS We assess the domain generalization ability of our method on two histopathology datasets. The first is the Camelyon17-wilds challenge dataset <cit.>, focusing on tumor identification across various hospitals. It comprises 96 × 96 image patches from lymph node whole-slide images, with labels indicating tumor presence in the central 32 × 32 region. We use the same training (302,436 samples), validation (34,904), and test (85,054) splits as the original publication <cit.>. For the second dataset, we aggregate three public histopathology datasets: NKI <cit.>, VGH <cit.>, and IHC <cit.>, focusing on epithelium-stroma classification. 
The NKI (8,337 samples) and VGH (5,920) datasets comprise H&E stained breast cancer tissue images, while the IHC dataset (1,376) consists of IHC-stained colorectal cancer tissue images. Following <cit.>, we alternate between NKI and VGH as the train/validation set, but maintain IHC as the fixed test set due to its distinct coloration. This allows us to mimic a similar generalization challenge as presented in Camelyon17-wilds, where both the validation and test set comprise out-of-distribution (OOD) samples. In order to fully utilize our ViT encoder's abilities, both benchmark datasets are standardized to 224 × 224 images using bicubic interpolation. Examples for each dataset are illustrated in  <ref>. §.§ Qualitative Evaluation We qualitatively evaluate our method by training it on the Camelyon17-wilds dataset and assessing the image quality of the image synthesizer's reconstructions (no mixing). For the training set, we achieve an average Peak Signal-to-Noise Ratio (PSNR) of 46, for the OOD validation set of 46 and for the OOD test set of 40. These results demonstrate the model's capability to successfully encode image information while retaining a holistic understanding in order to generalize to unseen domains.  <ref> illustrates this qualitatively for 5 distinct samples from each hospital and dataset split. We also assess the image quality of synthetic images, which exhibit the same anatomy but varied characteristics, generated by our image synthesizer.  <ref> demonstrates this process, utilizing randomly extracted patch characteristics for each row. Although our method's patch-wise image reconstruction may produce slight grid artifacts, the synthetic images accurately preserve the original anatomy while displaying uniformly the applied characteristics from the extracted patch. This approach facilitates the generation of a diverse array of samples by altering colorization while maintaining diagnostically relevant anatomy. §.§ Disease Classification To evaluate our method's suitability for improving domain generalization, we employ our stand-alone encoder to generate additional synthetic images with mixed anatomy and characteristics, augmenting the training set diversity on the fly. These synthetic images, alongside the originals, are afterward fed into a subsequent classifier allowing it to learn from a more diverse set of samples, thereby generalizing better to unseen images. For the classifier, we use the same DenseNet-121 architecture <cit.> used by the baseline methods in WILDS <cit.>. We evaluate our method on the class-balanced Camelyon17-wilds validation and test sets against the top-performing methods from the WILDS leaderboard[], which utilize the same classifier. The results shown in  <ref> reveal our method's superior accuracy on both sets, setting a new state-of-the-art standard. We further evaluate our method for the binary classification task of the adapted epithelium-stroma dataset. For this, we train it once on NKI and evaluate it for VGH (val) and IHC (test), as well as train it on VGH and evaluate it for NKI (val) and IHC (test), respectively. We compare the performance against the three domain adaptation methods referenced in <cit.>. The consistent performance of our method across these evaluations, as presented in  <ref>, confirms its strong generalizability potential, clearly outperforming the state of the art. §.§ Scalability Potential Finally, we investigate the scalability potential of our method to enhance its reconstruction and image synthesis capabilities. 
First, we exploit the label-free nature of our encoder (E), enabling the inclusion of unlabeled samples alongside labeled ones during training. This approach allows E to learn from a larger more diverse dataset. To evaluate this, we augment our training data with an additional 302,436 (same amount as labeled training samples) randomly selected samples from the 1,799,247 unlabeled samples available in the Camelyon17-wilds dataset <cit.>. Through this augmentation, our encoder achieves improved reconstruction performance compared to the base model: 49 versus 46 for the training set, 49 versus 46 for the validation set, and 44 versus 40 for the test set. Furthermore, leveraging a Vision Transformer (ViT) backbone allows us to readily increase model capacity by replacing the ViT-B/16 backbone (86M parameters) with the deeper and more sophisticated ViT-L/16 (322M parameters). Notably, we extend the embedding dimension from 768 to 1,056 to accommodate the requirements of our image synthesizer's matrix multiplication. Training the adapted ViT-L/16 backbone for 10 epochs on Camelyon17-wilds already yields enhanced results, with a reconstruction performance of 49 versus 46 for the training set, 49 versus 46 for the validation set, and 42 versus 40 for the test set. These findings demonstrate that both scaling approaches result in superior performance compared to the base method, underscoring the method's scalability potential in terms of utilizing unlabeled samples and adopting more sophisticated network architectures. § DISCUSSION AND CONCLUSION In this work, we introduce a novel self-supervised, generative method for domain generalization. By employing the power of a Vision Transformer encoder, we successfully generate synthetic images featuring diverse combinations of anatomy and image characteristics in a self-supervised fashion. This approach enriches the representativeness of the dataset without necessitating any domain-specific information, thereby enabling more effective adaptation to previously unseen domains. Through quantitative experimentation on two distinct histopathology datasets, we demonstrate the efficacy of our method. Our qualitative assessment emphasizes the model's proficiency in encoding image data and its capacity to generalize across domains. Moreover, the synthetic images generated by our method faithfully preserve original anatomical details while augmenting dataset diversity. Furthermore, by enabling the utilization of unlabeled samples or the adoption of more sophisticated ViT backbone architectures, our method demonstrates scalability potential, exhibiting improved reconstruction performance and adaptability. We believe that our method's flexibility should allow its application across various modalities for addressing generalization challenges not only in histopathology but also in other applications. §.§.§ This study was funded through the Hightech Agenda Bayern (HTA) of the Free State of Bavaria, Germany. splncs04
http://arxiv.org/abs/2407.02026v1
20240702074922
Programming higher-order interactions of Rydberg atoms
[ "Andrew Byun", "Seokho Jeong", "Jaewook Ahn" ]
quant-ph
[ "quant-ph", "physics.atom-ph" ]
jwahn@kaist.ac.kr Department of Physics, KAIST, Daejeon 34141, Korea § ABSTRACT Higher-order interactions in spin-based Hamiltonians are crucial in addressing numerous fundamentally significant physical problems. In this work, Rydberg-atom graph gadgets are introduced to effectively program K-th order interactions within a Rydberg atom system. This approach facilitates the determination of the ground states of an Ising-type Hamiltonian, encoded to solve higher-order unconstrained optimization problems. A favorable scaling behavior, O(N^K), is expected in terms of the number of atoms required for N-vertex hypergraph optimization problems. Programming higher-order interactions of Rydberg atoms Andrew Byun, Seokho Jeong, and Jaewook Ahn ====================================================== § INTRODUCTION Higher-order interactions in spin-based Hamiltonians play an important role in many fundamental physics problems. Exotic quantum phenomena such as the Efimov trimer <cit.>, fractional quantum Hall states <cit.>, and topologically ordered states <cit.> are attributed to non-binary spin interactions. For example, adding a three-body interaction term to the Hubbard model can lead to exotic phases with unique filling factors <cit.>. Moreover, higher-order interactions are also essential for describing molecular interactions involving more than four bodies, owing to their electronic structures <cit.>, and for various high-energy physics models <cit.>. In quantum information science, creating multi-qubit entangled states often requires many-body interactions beyond simple two-body interactions <cit.>. Currently, topological phases are gaining attention for their potential in quantum error correction <cit.>, necessitating the incorporation of higher-order interactions in artificial quantum matter as well as in quantum information and computation <cit.>. In the Rydberg atom system <cit.>, two-body correlations naturally arise from the Rydberg blockade effect <cit.>. However, implementing controllable many-body correlations presents experimental challenges, requiring complex energy level structures and precise electromagnetic field driving <cit.>. Previous research has developed non-local two-body interactions using Rydberg quantum wires <cit.>, which incorporate an additional chain of atoms. By integrating Rydberg quantum wires with a three-dimensional (3D) configuration of atoms <cit.>, all-to-all interactions between arbitrary pairs of atoms have been achieved <cit.>. Building upon the concept of the Rydberg quantum wire, which corresponds to a linear qubit graph, we aim to demonstrate the implementation of higher-order interactions. We introduce the design of new atomic qubit graphs representing the hyperedges of a hypergraph, thereby facilitating the representation of K-body correlations. Just as a system incorporating Rydberg quantum wires forms a Rydberg atom graph, representing both unweighted <cit.> and weighted graph structures <cit.>, a system incorporating Rydberg hyperedges could represent a hypergraph. The generation of hypergraphs is a scientific and technological challenge. Even for classical Ising models, conventional computational methods often prove inefficient <cit.>. When higher-order interactions are involved, optimizing hypergraphs—which represent hyperedges corresponding to these interactions in the spin system—requires solving numerous NP (nondeterministic polynomial)-hard problems, such as the Max-K SAT (satisfiability) problem for K ≥ 3. 
Consequently, this method of generating higher-order interactions within the Rydberg atom graph could provide a viable solution for addressing both classical and quantum problems. As a conceptual overview of this paper, Fig. <ref> illustrates an example of the Rydberg-atom hypergraph representation of higher-order unconstrained binary optimization (HUBO). In the given example, the hypergraph G_H(V, E) consists of four vertices, V = {x_1, x_2, x_3, x_4}, and three edges, E = {(x_2, x_4), (x_3, x_4); (x_1, x_2, x_3)}, where (x_1, x_2, x_3) is an order-three (K=3) hyperedge, as shown in Fig. <ref>(a). This hypergraph G_H represents the hypergraph optimization problem with the cost function f = -x_1-x_3-x_2x_4 + x_3x_4 + x_1x_2x_3, where x_1, x_2, x_3, x_4 are binary variables. As detailed in subsequent sections, the hypergraph G_H is programmable into a Rydberg atom graph using a proper set of auxiliary atoms, referred to as Rydberg hyperedges, as shown in Fig. <ref>(b), where the triangular subgraph between the atoms representing x_1, x_2, and x_3 corresponds to the hyperedge (x_1, x_2, x_3). The HUBO solution 𝐱 = (1, 0, 1, 0) is obtainable through a quantum adiabatic process that evolves the atom system to its many-body ground state, as depicted in Fig. <ref>(c). In the rest of the paper, the new method of using Rydberg atoms to implement HUBO problems is introduced in Sec. <ref>. This Rydberg HUBO implementation develops two types of hyperedges: the positive-weight hyperedge and the negative-weight hyperedge, both based on the properties of Rydberg superatoms <cit.>. In Sec. <ref>, applications of the Rydberg HUBO implementation are considered, including quantum simulations and quantum computing with higher-order interactions. In Sec. <ref>, the scaling properties of the Rydberg-atom HUBO implementation are analyzed, showing that the number of atoms required is O(N^K), where N is the number of vertices in the hypergraph and K is the maximum order of the interaction. The conclusion is presented in Sec. <ref>. § HIGHER-ORDER ISING SPIN INTERACTION  We consider an extended Ising model which incorporates higher-order interactions, also known as the p-spin model <cit.>, defined as follows: Ĥ=∑_j J^(1)_j n̂_j+∑_j<k J^(2)_jkn̂_j n̂_k +∑_j<k<l J^(3)_jkln̂_j n̂_k n̂_l +⋯, where n̂=|1><1| is the number operator, taking a value of 0 or 1 for the spin basis states |0> and |1>, respectively, and J^(K)_jkl⋯ is the interaction strength of the K-th order (K-spin) terms. An Ising spin system with higher-order interactions can be naturally mapped to a hypergraph G_H=(V,E={E^(2), E^(3),⋯}), where spins correspond to vertices in V and K-th order interactions are represented by hyperedges E^(K). Thus, the Hamiltonian Ĥ can be expressed as: Ĥ = ∑_j∈ V J^(1)_j n̂_j+∑_(j,k)∈ E^(2) J^(2)_jkn̂_j n̂_k +∑_(j,k,l)∈ E^(3) J^(3)_jkln̂_j n̂_k n̂_l +⋯. We aim to implement Ĥ in Eq. (<ref>) using a new kind of Rydberg-atom quantum wire that effectively aggregates K-body interactions of Rydberg atoms. In the qubit system of Rydberg atoms, where the ground and Rydberg states are respectively represented by |0> and |1>, the Hamiltonian governing the dynamics of a Rydberg atom graph is given by (in units of ħ=1): Ĥ_ Ryd = Ω/2∑_j σ̂_j^x -Δ∑_j n̂_j + ∑_(j,k) U n̂_j n̂_k, where Ω and Δ denote the Rabi frequency and detuning of the Rydberg-atom excitation process, and the Pauli operator σ̂^x=|0><1|+|1><0| acts as a bit-flip operator. In the Hamiltonian Ĥ_ Ryd, excitation of a single atom to the Rydberg state contributes an energy of -Δ. 
When Ω→ 0 and 0 < Δ < U, the Hamiltonian of the Rydberg atom graph becomes equivalent to the cost function of the maximum independent set (MIS) problem, which aims to maximize the occupation number (n=1) under the constraint of Rydberg blockade, given by n_jn_k=0 for (j,k)∈ E <cit.>. The first two terms (K=1 and K=2) of Ĥ in Eq. (<ref>) are effectively representable in the quadratic unconstrained binary optimization (QUBO) form as: Ĥ_ QUBO g= ∑_j ∈ V J^(1)_j n̂_j + ∑_(j, k) ∈ E J^(2)_jkn̂_j n̂_k, where the symbol g= denotes ground-state equivalence under the MIS conditions, Ω→ 0 and 0 < Δ < U. The coefficients J^(1)_j and J^(2)_jk are encodable with QUBO building blocks, including auxiliary atom sets and Rydberg quantum wires <cit.>. In the QUBO representation, there are two kinds of quantum wires, the “even-atom quantum wire” and the “odd-atom quantum wire”, which represent the cost functions Ĥ^ even_jk/Δ g= n̂_j n̂_k and Ĥ^ odd_jk/Δ g= -n̂_j n̂_k + n̂_j + n̂_k, respectively. The higher-order (K>2) terms of Ĥ in Eq. (<ref>) can be viewed as hyperedges that correspond to the aggregate units of their elements, in the sense that n_j n_k n_l ⋯ = 1 if and only if n_j = n_k = n_l = ⋯ = 1. These higher-order terms are encodable by introducing a new kind of quantum wire for hyperedges, as detailed below, in such a way that a Rydberg atom graph can represent the K-th order hyperedge, which corresponds to the K-th order term in Ĥ, i.e., Ĥ^(K) g= ∑_(j, k, l, ⋯) ∈ E^(K) J_jkl ⋯n̂_j n̂_k n̂_l ⋯. §.§ Higher-order unconstrained binary optimization (HUBO) The HUBO problem is the extended version of the QUBO problem. It includes higher-order terms in addition to the linear and quadratic terms of QUBO, with the goal of obtaining the solution x = (x_1, x_2, ⋯, x_N) ∈{0,1}^N that minimizes the cost function f( x) defined as: f( x) = ∑_j J^(1)_j x_j + ∑_j<k J^(2)_jk x_j x_k + ∑_j<k<l J^(3)_jkl x_j x_k x_l + ⋯, where J^(K)_jkl⋯ is a real-valued K-th order coefficient. Our approach to the implementation of K-th order interactions is to extend the previous QUBO implementation of Rydberg atom graphs <cit.>. The Rydberg QUBO implementation is illustrated in Figs. <ref>(a,b). The two kinds of quantum wires, the “even-atom" quantum wire and the “odd-atom" quantum wire, are used to encode positive and negative quadratic terms, respectively. An even-atom quantum wire configuration is shown in Fig. <ref>(a). This configuration connects two vertices representing variables x_1 and x_2, where x_1(2)∈{0, 1}, with an atom chain consisting of two atoms W_1 and W_2. In this configuration, an excitation occurs in either W_1 or W_2 under the MIS condition, resulting in an additional energy of -Δ when x_1 x_2 = 0. Conversely, this implies an effective energy of +Δ when x_1 x_2 = 1. Therefore, the even-atom quantum wire has an effective energy term Ĥ^ even_12/Δg=n̂_1n̂_2. Likewise, Fig. <ref>(b) depicts the simplest odd-atom quantum wire connecting x_1 and x_2, with a single atom W_1. In this case, an excitation occurs in W_1 under the MIS condition only when x_1 = x_2 = 0, which results in an effective energy of +Δ whenever -x_1 x_2 + x_1 + x_2 = 1. Therefore, the odd-atom quantum wire introduces an effective energy term Ĥ^ odd_12/Δg= -n̂_1n̂_2+n̂_1+n̂_2. For the implementation of higher-order terms, in the subsequent subsections, we introduce two types of hyperedges, the “positive-weight hyperedge" and the “negative-weight hyperedge," which correspond to the positive and negative higher-order terms, respectively. 
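As a sanity check of this cost-function formulation, the introductory example f = -x_1-x_3-x_2x_4 + x_3x_4 + x_1x_2x_3 can be minimized by brute force (an illustrative script, independent of the Rydberg implementation):

```python
from itertools import product

# Brute-force HUBO check (illustrative only): evaluate the introductory example
# cost function over all binary assignments and report the minimizers.
def f(x1, x2, x3, x4):
    return -x1 - x3 - x2 * x4 + x3 * x4 + x1 * x2 * x3

costs = {x: f(*x) for x in product((0, 1), repeat=4)}
f_min = min(costs.values())
minimizers = [x for x, c in costs.items() if c == f_min]
print(f_min, minimizers)
```

The assignment 𝐱 = (1, 0, 1, 0) highlighted in Fig. <ref> is among the minimizers, with f_min = -2.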
§.§ Positive-weight hyperedge A K-th order positive hyperedge is a “positive”-weighted aggregation of K vertices, meaning J_123⋯ > 0 for (x_1,x_2,x_3, ⋯) ∈ E^(K). Similar to the even-atom quantum wire in QUBO, we construct a subgraph that satisfies Ĥ^ pos_123⋯/Δg=∏_j=1^Kn̂_j, in which the set of auxiliary atoms acts as a positive-weight hyperedge. This condition can be realized by a K-atom Rydberg superatom <cit.>, a cluster of atoms that share the Rydberg blockade regime. Owing to the Rydberg blockade, a Rydberg superatom permits only a single-atom excitation under the MIS condition. Figure <ref>(c) showcases a positive-weight hyperedge with the maximum order of K=3. The K=3-atom Rydberg superatom, forming a triangle with atoms W_1, W_2, and W_3, is connected to the three vertices x_1, x_2, and x_3 and acts as a hyperedge (x_1,x_2,x_3) with a positive energy contribution. Under the MIS condition, only a single excitation in the K=3 Rydberg superatom is permitted, contributing an energy of -Δ when x_1x_2x_3=0. Conversely, no excitation occurs in the Rydberg superatom when x_1x_2x_3=1, thereby effectively assigning an energy of +Δ to the spin configuration with x_1x_2x_3=1. Thus, the energy function incorporating the additional cost from the Rydberg superatom is represented as Ĥ^ pos, (3)_123 /Δ = n̂_1 n̂_2 n̂_3, following the form in Eq. (<ref>). Similarly, in Fig. <ref>(d), the hyperedge with the maximum order of K=4 is illustrated. The K=4 Rydberg superatom in Fig. <ref>(d) generates the term Ĥ^ pos, (4)_1234 /Δ = n̂_1 n̂_2 n̂_3 n̂_4. The many-body ground states of the connected Rydberg superatom under the MIS condition are listed in Table <ref>. It is noted that the even-atom quantum wire, one of the QUBO building blocks, also follows the positive-weight hyperedge implementation. QUBO is the special case in which K = 2 is the maximum order. The K = 2-atom Rydberg superatom is a dimer, one of the simplest even-atom chains, as shown in Fig. <ref>(a). The energy corresponding to the even-atom quantum wire in Eq. (<ref>) also satisfies Eq. (<ref>), i.e., Ĥ^ even_jk=Ĥ^ pos, (K=2)_jk. 
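The effective energy of the positive-weight hyperedge gadget can also be checked by direct enumeration: fixing the data-qubit assignment, one lists all wire configurations compatible with the blockade constraints and keeps the lowest-energy one. A minimal sketch for K = 3 (illustrative only, with Δ set to 1):

```python
from itertools import product

# Illustrative check of the K = 3 positive-weight hyperedge gadget: three data
# vertices x1..x3, each blockaded with its own superatom atom W1..W3, and
# W1, W2, W3 mutually blockaded. In the MIS limit every Rydberg excitation
# contributes -Delta (Delta = 1 here); for each data assignment we take the
# blockade-compatible wire configuration with the lowest energy.
K = 3
wire_pairs = [(i, j) for i in range(K) for j in range(i + 1, K)]  # superatom blockade

def gadget_energy(x):
    best = 0.0
    for w in product((0, 1), repeat=K):
        if any(w[i] and w[j] for i, j in wire_pairs):      # W-W blockade violated
            continue
        if any(w[i] and x[i] for i in range(K)):           # W_i - x_i blockade violated
            continue
        best = min(best, -sum(w))                          # -Delta per wire excitation
    return best

for x in product((0, 1), repeat=K):
    # Up to the constant -1, the gadget energy equals Delta * x1*x2*x3.
    print(x, gadget_energy(x), gadget_energy(x) + 1 == x[0] * x[1] * x[2])
```

Up to a constant offset of -Δ, the gadget energy indeed reduces to Δ n̂_1 n̂_2 n̂_3, as stated above.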
The corresponding Hamiltonian is given by Ĥ^ neg_12⋯(K-2);(K-1)K/Δg=[-n̂_K-1n̂_K + n̂_K-1 + n̂_K] ∏_j=1^K-2n̂_j, which contains the negative K-th order term and two additional positive (K-1)-th order terms. Figure <ref>(e) illustrates a negative-weight hyperedge with the maximum order of K=3. The K-1=2-atom Rydberg superatom, forming a dimer with atoms W_1 and W_2 is connected to the vertices x_1, x_2, and x_3, acting as a hyperedge (x_1, x_2, x_3) with a negative energy contribution. Under the MIS condition, only a single excitation in the K-1=2 Rydberg superatom is allowed, adding an energy penalty of -Δ, when x_1(-x_2x_3 + x_2 + x_3) = 0. Conversely, no excitation occurs in the Rydberg superatom when x_1(-x_2x_3 + x_2 + x_3) = 1, thereby assigning a positive energy of +Δ to the spin configuration corresponding to x_1(-x_2x_3 + x_2 + x_3) = 1. Thus, the energy function incorporating the additional cost from the Rydberg superatom is given by Ĥ^ neg, (3)_1;23 /Δ = n̂_1 (-n̂_2 n̂_3 + n̂_2 + n̂_3), following the form in Eq. (<ref>). Similarly, the K-1=3 Rydberg superatom in Fig. <ref>(f) generates Ĥ^ neg, (4)_12;34 /Δ = n̂_1 n̂_2 (-n̂_3 n̂_4 + n̂_3 + n̂_4) term. The many-body ground states of the connected Rydberg superatom under the MIS condition are listed in Table <ref>. It is noted that if K=2, the ∏_j=1^K-2n̂_j term in Eq. (<ref>) can be omitted, leaving only the terms -n̂_1n̂_2 + n̂_1 + n̂_2, which match Eq. (<ref>). This indicates that the negative-weight hyperedge in Rydberg HUBO implementation includes the odd-atom quantum wire in Rydberg QUBO implementation, such that Ĥ^ odd_jk=Ĥ^ neg,(K=2)_jk. § PROGRAMMING RYDBERG ATOM GRAPHS FOR HUBO PROBLEMS HUBO problems can be transformed into QUBO problems <cit.> without utilizing the hypergraph implementation. However, converting the higher-order terms in a HUBO problem into quadratic terms for a QUBO problem necessitates additional variables, thereby increasing the number of required atoms. While this increase can be polynomially bounded in specific cases <cit.>, the number of auxiliary variables generally grows exponentially, significantly increasing the resources needed to solve the HUBO problem <cit.>. This resource increase must be considered when transforming HUBO problems to QUBO, as exponential growth in resources can make the problem significantly more challenging to solve. A direct HUBO implementation is thus crucial to avoid the additional atom resources required by the transformation from HUBO to QUBO. Now, the HUBO problem can be encoded into a Rydberg atom graph by utilizing the positive-weight hyperedge defined in Eq.(<ref>) and the negative-weight hyperedge defined in Eq.(<ref>), with appropriately tuned weights: Ĥ_ HUBO = ∑_(j) w^ data_j Ĥ^ data_j + w^ offset_j Ĥ^ offset_j + ∑_E^(K), K w^ pos, (K)_(j,k,l,⋯)Ĥ^ pos, (K)_(j,k,l,⋯) + w^ neg, (K)_(j,k,l,⋯)Ĥ^ neg, (K)_(j,k,l,⋯), where Ĥ^ data and Ĥ^ offset are Hamiltonians corresponding to data and offset qubits, respectively, which are components of the QUBO building blocks <cit.> encoding linear terms. The weights w determine the coupling strength of each term and can be set using local laser beam addressing <cit.> or through duplication <cit.> with 3D stacking <cit.>. To implement HUBO with locally focused light, a weighted detuned beam should be applied to all the atoms in the Rydberg superatom, which serves as the hyperedge gadget. In the following, we consider two experimentally feasible candidates that necessitate higher-order interactions. 
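Before turning to these candidates, the effective energies contributed by the two gadget types can be cross-checked classically. The short Python sketch below encodes Ĥ^pos/Δ and Ĥ^neg/Δ as functions of the vertex occupations and verifies that, for K = 2, they reduce to the even- and odd-atom quantum-wire terms; it is a bookkeeping aid for the energy expressions only, not a simulation of the Rydberg dynamics, and the function names are our own.

from itertools import product
from math import prod

def pos_hyperedge(n):
    """H_pos / Delta for occupations n = (n_1, ..., n_K): prod_j n_j."""
    return prod(n)

def neg_hyperedge(n):
    """H_neg / Delta: (-n_{K-1} n_K + n_{K-1} + n_K) * prod_{j <= K-2} n_j."""
    head, a, b = n[:-2], n[-2], n[-1]
    return (-a * b + a + b) * prod(head)  # prod(()) == 1 handles K = 2

# K = 2 reduction: positive gadget == even-atom wire, negative gadget == odd-atom wire.
for n1, n2 in product((0, 1), repeat=2):
    assert pos_hyperedge((n1, n2)) == n1 * n2
    assert neg_hyperedge((n1, n2)) == -n1 * n2 + n1 + n2

# K = 4: the negative gadget equals -n1 n2 n3 n4 plus two positive cubic corrections.
for n in product((0, 1), repeat=4):
    n1, n2, n3, n4 = n
    assert neg_hyperedge(n) == -n1 * n2 * n3 * n4 + n1 * n2 * n3 + n1 * n2 * n4
print("gadget energy checks passed")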
The first involves the quantum simulation of complex spin systems, and the second relates to the application of HUBO-based adiabatic quantum computing. §.§ Quantum Sierpinski triangle When the downward-facing triangles in a triangular lattice follow the Hamiltonian Ĥ_ ST, Ĥ_ ST=-J∑_▿_jklσ̂^z_jσ̂^z_kσ̂^z_l, the ground state contains an odd number of up-spins |1> in each downward-facing triangle, where σ_z is the pauli z operator. This configuration satisfies σ̂^z_jσ̂^z_kσ̂^z_l=+1 under the ferromagnetic condition J>0, where σ̂_z=-|0><0|+|1><1| is the Pauli operator. The many-body ground state of the spin system forms the shape of a Sierpinski triangle, shown in Fig. <ref>(a), which is a characteristic fractal structure <cit.>. Then, the Hamiltonian Ĥ_ ST can be expressed using a Rydberg atom graph as follows: Ĥ_ ST∝∑_▿_jkl [-4n̂_jn̂_kn̂_l + 2n̂_jn̂_k + 2n̂_kn̂_l + 2n̂_jn̂_l . . -n̂_j - n̂_k - n̂_l ]. Figure <ref>(b) depicts the illustration of the Rydberg atom graph <cit.> representing a unit downward-facing triangle, which is highlighted in Fig. <ref>(a). The black lines represent antiferro (AF)-ordered quantum wires <cit.> that facilitate the establishment of non-local interactions. The Hamiltonian for the unit downward-facing triangle in Eq. (<ref>) can be formulated in the format of Eq. (<ref>) as Ĥ^ ST_▿_jkl = w^ data_j Ĥ^ data_j + w^ data_k Ĥ^ data_k + w^ data_l Ĥ^ data_l + w^ neg,(2)_jlĤ^ neg,(2)_jl + w^ neg,(2)_klĤ^ neg,(2)_kl + w^ pos,(2)_jkĤ^ pos,(2)_jk + w^ neg,(3)_jk;lĤ^ neg,(3)_jk;l, where the weights are w^ dataj = w^ data_k = 3, w^ data_l = 5, w^ pos,(2)_jk = w^ neg,(2)_jl = w^ neg,(2)_kl = 2, and w^ neg,(3)_jkl = 4. In Fig. <ref>(c), a skeleton Rydberg atom graph is depicted where all weights are w = 1, corresponding to the sketch in Fig. <ref>(b). The highlighted region in Fig. <ref>(c) contains the third-order negative hyperedge, utilizing the same configuration as in Fig. <ref>(e). As illustrated in Fig. <ref>(c), weights can be assigned through local beam addressing or by duplicating the hyperedge subgraph. §.§ Factorization problems Using HUBO enables the solution of the factorization problem. The objective of prime factorization is to identify integers P and Q for a given integer N, such that N = P × Q. For example, to factor N = 6 = (110)_2, where (N_m⋯ N_1 N_0)_2 denotes the binary notation and N = N_m 2^m + ⋯ + N_1 2^1 + N_0 2^0, the cost function of the factorization problem is expressed as f_ Fact6 (𝐱) = [6 - (2^1 P_1 + 2^0 P_0)(2^1 Q_1 + 2^0 Q_0) ]^2. To simplify the problem, let P_0 = 1. Subsequently, the cost function transforms to: f_ Fact6 (𝐱) = -20 Q_1 - 11 Q_0 - 16 P_1 Q_1 - 16 P_1 Q_0 + 4 Q_1 Q_0 + 32 P_1 Q_1 Q_0. This constitutes a three-variable optimization problem involving P_1, Q_1, and Q_0. Similar to the previous example, we express the Hamiltonian in the form of Eq. (<ref>): Ĥ_ Fact 6 = w^ data_P_1Ĥ^ data_P_1 + w^ data_Q_1Ĥ^ data_Q_1 + w^ data_Q_0Ĥ^ data_Q_0 + w^ neg,(2)_P_1Q_1Ĥ^ neg,(2)_P_1Q_1 + w^ neg,(2)_P_1Q_0Ĥ^ neg,(2)_P_1Q_0 + w^ pos,(2)_Q_1Q_0Ĥ^ pos,(2)_Q_1Q_0 + w^ pos,(3)_P_1Q_1Q_0Ĥ^ pos,(3)_P_1Q_1Q_0, which involves four different quantum wires and hyperedges: a K=3 order positive hyperedge for 32 P_1 Q_1 Q_0, odd-atom quantum wires for 16(-P_1 Q_1 + P_1 + Q_1) and 16(-P_1 Q_0 + P_1 + Q_0), and an even-atom quantum wire for 4 Q_1 Q_0. The weight factors are w^ data_P_1=32, w^ data_Q_1=36, w^ data_Q_0=27, w^ neg,(2)_P_1Q_1 = w^ neg,(2)_P_1Q_0=16, w^ pos,(2)_Q_1Q_0=4, and w^ pos,(3)_P_1Q_1Q_0=32. 
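As a classical cross-check of this construction, the expanded cubic cost function can be compared term by term with [6 − (2P_1 + 1)(2Q_1 + Q_0)]^2 and minimized by enumeration; a minimal Python sketch (with the constant offset of 36 dropped, as above) is:

from itertools import product

def f_expanded(P1, Q1, Q0):
    """Expanded HUBO cost of the Fact6 problem, constant term omitted."""
    return (-20 * Q1 - 11 * Q0 - 16 * P1 * Q1 - 16 * P1 * Q0
            + 4 * Q1 * Q0 + 32 * P1 * Q1 * Q0)

def f_direct(P1, Q1, Q0):
    """Squared residual [6 - (2 P1 + 1)(2 Q1 + Q0)]^2 with P0 = 1."""
    return (6 - (2 * P1 + 1) * (2 * Q1 + Q0)) ** 2

assignments = list(product((0, 1), repeat=3))
# The two forms agree up to the constant 36 on every binary assignment.
assert all(f_direct(*a) == f_expanded(*a) + 36 for a in assignments)
# Enumerate the unique minimizer over (P1, Q1, Q0).
print(min(assignments, key=lambda a: f_expanded(*a)))

Enumeration returns (1, 1, 0), consistent with the solution stated next.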
The solution to the HUBO is (P_1; Q_1, Q_0) = (1; 1, 0), corresponding to P = (11)_2 = 3 and Q = (10)_2 = 2, satisfying 6 = 2 × 3. This solution can be obtained via quantum adiabatic passage to the MIS condition. § DISCUSSION   To discuss the scaling behavior, we employ a 3D quantum wire lattice structure <cit.>, akin to Figs. <ref>(b)-(c). A line of N data qubits is organized, with quantum wires effectively duplicating this line of data qubits. Hence, the scalability is determined by the number of duplications. In Fig. <ref>, the Rydberg atom graph is illustrated, where black lines represent antiferro (AF)-ordered quantum wires <cit.>, connecting atoms such that vertices effectively link together. Branches extend from AF-ordered quantum wires, running parallel to the vertex line and crossing over other AF-ordered quantum wires to form edges of the graph. In Fig. <ref>, a branch originates from x_1, passes over x_2, ⋯, x_N, and connects to all others, ensuring x_1-to-all connectivity. Similarly, branches are extended from x_2 to x_N-1, and so forth, achieving an all-to-all connected graph. The number of branches scales as O(N) while the height scales as O(1) <cit.>. Therefore, to generate an all-to-all connected unweighted graph, O(N^2) atoms are required<cit.>. In the case of a weighted graph, local addressing and duplication methods necessitate O(N^2) atoms and O(w_V N+w_E N^2), respectively, where w_V and w_E denote the maximum weight of vertices and edges <cit.>. To implement hyperedges of the highest order K on a cubic lattice structure, branches are utilized to represent combinations of vertices. For instance, Fig. <ref> illustrates a hypergraph with the maximum degree K=3, where the colored circles indicate hyperedges. To implement a K=3 order hyperedge, branches are formed by selecting K-1 vertices from N-1. The number of branches scales as O(N^K-1) scaling. Ultimately, constructing a hypergraph of order K requires O(N^K) atoms. Similar to QUBO, in the case of implementing a weighted graph, the required number of atoms is O(∑_K=1 w_E^(K) N^K), where w_E^(K) denotes the maximum weight of the K-th hyperedges, with w_E^(K=1)=w_V and w_E^(K=2)=w_E. If K, the order of the interaction, becomes larger, implementing a Rydberg superatom becomes increasingly challenging. However, using an AF-ordered quantum wire and the vertex splitting method <cit.>, it is possible to implement a superatom-equivalent graph. Figure <ref>(a) shows the target graph corresponding to the positive-weight hyperedge with K=5. Figure <ref>(b) displays the Rydberg atom graph used for hyperedge implementation. A Rydberg superatom is a type of Rydberg atom graph, which can be programmed using Rydberg atom graph QUBO implementation. For a K-atom Rydberg atom graph, the number of vertices is K, so each hyperedge requires O(K^2) atoms. The number of hyperedges in a K-th order hypergraph is O(N^K), thus requiring O(K^2 N^K) atoms. If K is a finite number, the scaling remains O(N^K). § CONCLUSION   Rydberg-atom graph gadgets are introduced to efficiently program K-th order interactions within a Rydberg atom system under the MIS condition. This methodology facilitates the determination of many-body ground states for Ising-type Hamiltonians, which are encoded to tackle HUBO, the higher-order unconstrained optimization problem. This Rydberg-atom approach extends beyond solving classical optimization problems to quantum simulations of spin models. 
The polynomial scaling of O(N^K), in terms of the number of atoms required for N-vertex hypergraph optimization problems underscores the experimental feasibility of Rydberg atom-based higher-order graph optimization using current and near-term devices. We thank Jinhyung Lee for fruitful discussions. 1 Efimov1970 V. Efimov, “Energy levels arising from resonant two-body forces in a three-body system,” Phys. Lett. B 33, 563-564 (1970). Naidon2017_Efimov P. Naidon, and S. Endo, “Efimov physics: a review,” Rep. Prog. Phys. 80, 056001 (2017). Cooper2004_FQH N. R. Cooper, “Exact Ground States of Rotating Bose Gases Close to a Feshbach Resonance,” Phys. Rev. Lett. 92, 220405 (2004). Levin2005_stringnet M. A. Levin and X.-G. Wen, “String-net condensation: A physical mechanism for topological phases,” Phys. Rev. B 71, 045110 (2005). Levin2005_stringnet2 M. A. Levin and X.-G. Wen, “Colloquium: Photons and electrons as emergent phenomena,” Rev. Mod. Phys. 77, 871 (2005). Buchler2007_3polar H. P. Büchler, A. Micheli and P. Zoller, “Three-body interactions with cold polar molecules," Nat. Phys 3, 726-731 (2007). Schmidt2008_3B2Dlattice K. P. Schmidt, J. Dorier, and A. M. Läuchli, “Solids and Supersolids of Three-Body Interacting Polar Molecules on an Optical Lattice," Phys. Rev. Lett. 101, 150405 (2008). BCS2009_3B1Dlattice B. Capogrosso-Sansone, S. Wessel, H. P. Büchler, P. Zoller, and G. Pupillo, “Phase diagram of one-dimensional hard-core bosons with three-body interactions," Phys. Rev. B. 79, 020503(R) (2009). Bonnes2009_3B1Dlattice L. Bonnes, H. Büchler, and S. Wessel, “Polar molecules with three-body interactions on the honeycomb lattice," New J. Phys. 12, 053027 (2010). Seeley2012_H2 J. T. Seeley, M. J. Richard, and P. J. Love, “The Bravyi-Kitaev transformation for quantum computation of electronic structure,” J. Chem. Phys. 137, 224109 (2012) Hauke2013_Schwinger P. Hauke, D. Marcos, M. Dalmonte, and P. Zoller, “Quantum Simulation of a Lattice Schwinger Model in a Chain of Trapped Ions,” Phys. Rev. X 3, 041018 (2013). Pedersen2021_LGT S. P. Pedersen and N. T. Zinner “Lattice gauge theory and dynamical quantum phase transitions using noisy intermediate-scale quantum devices,” Phys. Rev. B 103, 235103 (2021). Farrell2023_QCD R. C. Farrell, I. A. Chernyshev, S. J. M. Powell, N. A. Zemlevskiy, M. Illa, and M. J. Savage, “Preparations for quantum simulations of quantum chromodynamics in 1+1 dimensions. I. Axial gauge,” Phys. Rev. D 107, 054512 (2023). Rossi2013_hyper M. Rossi, M. Huber, D. Bruß, and C. Macchiavello, Quantum hypergraph states, New J. Phys. 15, 113022 (2013). Liu2022_hyper Z.-W. Liu and A. Winter, Many-Body Quantum Magic, PRX Quantum 3, 020333 (2022). Kitaev2003 A. Y. Kitaev, “Fault-tolerant quantum computation by anyons.” Ann. Phys. 303, 2-30 (2003). Paetznick2013 A. Paetznick and B. W. Reichardt, “Universal Fault-Tolerant Quantum Computation with Only Transversal Gates and Error Correction,” Phys. Rev. Lett. 111, 090505 (2013). Bluvstein2022_Toric D. Bluvstein, H. Levine, G. Semeghini, T. T. Wang, S. Ebadi, M. Kalinowski, A. Keesling, N. Maskara, H. Pichler, M. Greiner, V. Vuletić, and M. D. Lukin, “A quantum processor based on coherent transport of entangled atom arrays,” Nature 604, 451–456 (2022). Google2023 Google Quantum AI, “Suppressing quantum errors by scaling a surface code logical qubit,” Nature 614, 676-681 (2023). Bluvstein2024_logical D. Bluvstein, S. J. Evered, A. A. Geim, S. H. Li, H. Zhou, T. Manovitz, S. Ebadi, M. Cain, M. Kalinowski, D. Hangleiter, J. P. B. 
Ataides, N. Maskara, I. Cong, X. Gao, P. S. Rodriguez, T. Karolyshyn, G. Semeghini, M. J. Gullans, M. Greiner, V. Vuletić, and M. D. Lukin, “Logical quantum processor based on reconfigurable atom arrays,” Nature 626, 58-65 (2024). Self2024 C. N. Self, M. Benedetti, and D. Amaro “Protecting expressive circuits with a quantum error detection code,” Nat. Phys 20, 219-224 (2024). Iqbal2024 M. Iqbal, N. Tantivasadakarn, R. Verresen, S. L. Campbell, J. M. Dreiling, C. Figgatt, J. P. Gaebler, J. Johansen, M. Mills, S. A. Moses, J. M. Pino, A. Ransford, M. Rowe, P. Siegfried, R. P. Stutz, M. Foss-Feig, A. Vishwanath and H. Dreyer “Non-Abelian topological order and anyons on a trapped-ion processor,” Nature 626, 505-511 (2024). Browaeys2020 A. Browaeys and T. Lahaye, “Many-body physics with individually controlled Rydberg atoms,” Nat. Phys. 16, 132-142 (2020). Jaksch2000_blockade D. Jaksch, J. I. Cirac, P. Zoller, S. L. Rolston, R. Côté, and M. D. Lukin, “Fast Quantum Gates for Neutral Atoms”, Phys. Rev. Lett. 85, 2208 (2000). Lukin2001_blockade M. D. Lukin, M. Fleischhauer, R. Cote, L. M. Duan, D. Jaksch, J. I. Cirac and P. Zoller, “Dipole Blockade and Quantum Information Processing in Mesoscopic Atomic Ensembles,” Phys. Rev. Lett. 87, 037901 (2001). UrbanNatPhys2009_blockade E. Urban, T. A. Johnson, T. Henage, L. Isenhower, D. D. Yavuz, T. G. Walker and M. Saffman, “Observation of Rydberg blockade between two atoms,” Nat. Phys. 5, 110-114 (2009). GaetanNatPhys2009_blockade A. Gaëtan, Y. Miroshnychenko, T. Wilk, A. Chotia, M. Viteau, D. Comparat, P. Pillet, A. Browaeys and P. Grangier, “Observation of Collective Excitation of Two Individual Atoms in the Rydberg Blockade Regime,” Nat. Phys. 5, 115-118 (2009). Glaetzle2017_LHZ A.W. Glaetzle, R. M. W. van Bijnen, P. Zoller and W. Lechner, “A coherent quantum annealer with Rydberg atoms,” Nat. Commun. 8, 15813 (2017). Gambetta2020_3Ryd F. M. Gambetta, W. Li, F. Schmidt-Kaler, and I. Lesanovsky, “Engineering NonBinary Rydberg Interactions via Phonons in an Optical Lattice," Phys. Rev. Lett. 124, 043402 (2020). Pohl2009_AB T. Pohl, and P. R. Berman, “Breaking the dipole blockade: nearly resonant dipole interactions in few-atom systems,” Phys. Rev. Lett. 102, 013004 (2009). Faoro2015_Forster R. Faoro, B. Pelle, A. Zuliani, P. Cheinet, E. Arimondo, and P. Pillet “Borromean three-body FRET in frozen Rydberg gases,” Nat. Commun. 6, 8173 (2015). Ryabtsev2018_Forster I. I. Ryabtsev, I. I. Beterov, D. B. Tretyakov, E. A. Yakshina, V. M. Entin, P. Cheinet, and P. Pillet “Coherence of three-body Förster resonances in Rydberg atoms,” Phys. Rev. A 98, 052703 (2018). Gurian2012_Forster J. H. Gurian, P. Cheinet, P. Huillery, A. Fioretti, J. Zhao, P. L. Gould, D. Comparat, and P. Pillet, “Observation of a Resonant Four-Body Interaction in Cold Cesium Rydberg Atoms,” Phys. Rev. Lett 108, 023005 (2012). NEMJ2022_fractal N. E. Myerson-Jain, S. Yan , D. Weld, and C. Xu “Construction of Fractal Order and Phase Transition with Rydberg Atoms,” Phys. Rev. Lett 128, 017601 (2022). Kim2022_wire M. Kim, K. Kim, J. Hwang, E.-G. Moon, and J. Ahn, “Rydberg quantum wires for maximum independent set problems,” Nat. Phys 18, 755-759 (2022). Byun2022PRXQ_PlatonicSolid A. Byun, M. Kim, and J. Ahn, “Finding the maximum independent sets of Platonic graphs using Rydberg atoms,” PRX Quantum 3, 030305 (2022). Qiu2020 X. Qiu, P. Zoller, and X. Li , “Programmable Quantum Annealing Architectures with Ising Quantum Wires,” PRX Quantum 1, 020311 (2020) Lee2016_3Drearrange W. Lee, H. Kim, and J. 
Ahn, “Three-dimensional rearrangement of single atoms using actively controlled optical microtraps,” Opt. Express. 24(9), 9816 (2016). Barredo2018_3D D. Barredo, V. Lienhard, S. de. Léséleuc, T. Lahaye, and A. Browaeys, “Synthetic three-dimensional atomic structures assembled atom by atom,” Nature. 561, 79-82 (2018). Kim2020_3DRyd M. Kim, Y. Song, J. Kim, and J. Ahn, “Quantum Ising Hamiltonian Programming in Trio, Quartet, and Sextet Qubit Systems," PRX Quantum 1, 020323 (2020). Song2021_Cayleytree Y. Song, M. Kim, H. Hwang, W. Lee, and J. Ahn, “Quantum simulation of Cayley-tree Ising Hamiltonians with three-dimensional Rydberg atoms,” Phys. Rev. Res. 3, 013286 (2021). Byun2023 A. Byun, J. Jung, K. Kim, M. Kim, S. Jeong, H. Jeong, and J. Ahn, “Rydberg-Atom Graphs for Quadratic Unconstrained Binary Optimization Problems,” Adv. Quantum Technol. 2300398 (2024). Liu2017_4B J. Liu, Y. Qi, Z. Y. Meng, and L. Fu, “Self-learning Monte Carlo method,” Phys. Rev. B 95, 041101(R) (2017). Dudin2012_superatom Y. O. Dudin, L. Li, F. Bariani and A. Kuzmich, “Observation of coherent many-body Rabi oscillations," Nat. Phys. 8, 790–794 (2012) Ebert2015_superatom M. Ebert, M. Kwon, T. G. Walker, and M. Saffman, “Coherence and Rydberg Blockade of Atomic Ensemble Qubits," Phys. Rev. Lett. 115, 093601(2015). Zeiher2015_superatom J. Zeiher, P. Schauß, S. Hild, T. Macrì, I. Bloch, and C. Gross “Microscopic Characterization of Scalable Coherent Rydberg Superatoms," Phys. Rev. X 5, 031015 (2015). Labuhn2016_Ising H. Labuhn, D. Barredo, S. Ravets, S. de. Léséleuc, T. Macrì, T Lahaye and A. Browaeys, “Tunable two-dimensional arrays of single Rydberg atoms for realizing quantum Ising models," Nature, 534, 667-670 (2016). Derrida1980 B. Derrida, “Random-Energy Model: Limit of a Family of Disordered Models,” Phys. Rev. Lett 45, 79 (1980). Derrida1981 B. Derrida, “Random-energy model: An exactly solvable model of disordered systems,” Phys. Rev. B 24, 2613 (1981). Pichler2018_MIS H. Pichler, S.-T. Wang, L. Zhou, S. Choi, M. D. Lukin, “Quantum Optimization for Maximum Independent Set Using Rydberg Atom Arrays,” ArXiv:1808.10816 (2018). Zaman_IEEE_2022 M. Zaman, K. Tanahashi and S. Tanaka, “PyQUBO: Python Library for Mapping Combinatorial Optimization Problems to QUBO Form,” IEEE Transactions on Computers, 71, 4, pp. 838-850 (2022) Mandal_2020 A. Mandal, A. Roy, S. Upadhyay and H. Ushijima-Mwesigwa, “Compressed Quadratization of Higher Order Binary Optimization Problems,” ArXiv:2001.00658 (2020). DwaveHandbook D-Wave Systems Inc, “D-Wave System Documentation: Problem-Solving Handbook,” https://docs.dwavesys.com/docs/latest/doc_handbook.html (2024). Boros_DAM_2002 E. Boros and P. L. Hammer “Pseudo-Boolean optimization,” Discrete Applied Mathematics 123, 1, 155-225 (2002) Rodrigueze-Heck_thesis_2018 E. Rodrìguez-Heck “Linear ad Quadratic Reformulations of Nonlinear Optimization Problems in Binary Variables,” PhD Dissertation, Liege University (2018). Labuhn2014_addressing H. Labuhn, S. Ravets, D. Barredo, L. Béguin, F. Nogrette, T. Lahaye, and A. Browaeys, “Single-atom addressing in microtraps for quantum-state engineering using Rydberg atoms,” Phys. Rev. A 90, 023415 (2014). Omran2019_20addressing A. Omran, H. Levine, A. Keesling, G. Semeghini, T. T. Wang, S. Ebadi, H. Bernien, A. S. Zibrov, H. Pichler, S. Choi, J. Cui, M. Rossignolo, P. Rembold, S. Montangero, T. Calarco, M. Endres, M. Greiner, V. Vuletić, and M. D. 
Lukin, “Generation and manipulation of Schrödinger cat states in Rydberg atom arrays,” Science 365, 570-574 (2019). Graham2022_MAXCUT T. M. Graham, Y. Song, J. Scott, C. Poole, L. Phuttitarn, K. Jooya, P. Eichler, X. Jiang, A. Marra, B. Grinkemeyer, M. Kwon, M. Ebert, J. Cherek, M. T. Lichtman, M. Gillette, J. Gilbert, D. Bowman, T. Ballance, C. Campbell, E. D. Dahl, O. Crawford, N. S. Blunt, B. Rogers, T. Noel, and M. Saffman, “Multi-qubit entanglement and algorithms on a neutral-atom quantum computer,” Nature 604, 457-462 (2022). deOliveira2024_MWIS A. G. de Oliveira, E. Diamond-Hitchcock, D. M. Walker, M. T. Wells-Pestell, G. Pelegrí, C. J. Picken, G. P. A. Malcolm, A. J. Daley, J. Bass, and J. D. Pritchard “Demonstration of weighted graph optimization on a Rydberg atom array using local light-shifts,” ArXiv: 2404.02658 (2024). Newman1999_ST M. E. J. Newman and C. Moore “Glassy dynamics and aging in an exactly solvable spin model,” Phys. Rev. E 60, 5068 (1999).
http://arxiv.org/abs/2407.02791v1
20240703033605
Model-Enhanced LLM-Driven VUI Testing of VPA Apps
[ "Suwan Li", "Lei Bu", "Guangdong Bai", "Fuman Xie", "Kai Chen", "Chang Yue" ]
cs.SE
[ "cs.SE", "cs.AI" ]
Model-Enhanced LLM-Driven VUI Testing of VPA Apps
Suwan Li, Lei Bu, Guangdong Bai, Fuman Xie, Kai Chen, Chang Yue
July 3, 2024
===========================================================
§ ABSTRACT
The flourishing ecosystem centered around voice personal assistants (VPA), such as Amazon Alexa, has led to a boom in VPA apps. The largest app market, the Amazon skills store, for example, hosts over 200,000 apps. Despite their popularity, the open nature of app release and the easy accessibility of apps also raise significant concerns regarding security, privacy and quality. Consequently, various testing approaches have been proposed to systematically examine VPA app behaviors. To tackle the inherent lack of a visible user interface in VPA apps, two strategies are employed during testing, i.e., chatbot-style testing and model-based testing. The former often lacks effective guidance for expanding its search space, while the latter falls short in interpreting the semantics of conversations to construct precise and comprehensive behavior models for apps. In this work, we introduce Elevate, a model-enhanced large language model (LLM)-driven VUI testing framework. Elevate leverages LLMs' strong capability in natural language processing to compensate for semantic information loss during model-based VUI testing. It operates by prompting LLMs to extract states from VPA apps' outputs and to generate context-related inputs. During the automatic interactions with the app, it incrementally constructs the behavior model, which facilitates the LLM in generating inputs that are highly likely to discover new states. Elevate bridges the LLM and the behavior model with innovative techniques such as encoding the behavior model into prompts and selecting LLM-generated inputs based on context relevance. Elevate is benchmarked on 4,000 real-world Alexa skills against the state-of-the-art tester Vitas. It achieves 15% higher state space coverage than Vitas across all types of apps and exhibits significantly better efficiency.
§ INTRODUCTION
With the prevalence of smart speakers, voice personal assistants (VPA) have permeated many aspects of people's lives. Prominent examples include Amazon Alexa, Google Assistant, and Apple Siri, which are widely used to assist smart speaker users. Centered around them, numerous applications (or VPA apps for short) have been developed to provide various functionalities, such as accessing news, entertainment, and controlling devices. VPA apps are characterized by the voice user interface (VUI), which enables user interaction solely through verbal conversations. The major VPA service providers have established VPA app stores for efficient app distribution. Through them, third-party developers can upload their apps, and users can invoke apps without installation, simply by calling their invocation names. Such openness and ease of access have led to the widespread popularity of VPA apps. For example, the skills store, the largest VPA app store, boasts over 200,000 apps <cit.>. However, concerns have been raised regarding their security, privacy and quality. A considerable number of VPA apps have been found malicious as a result of an untrustworthy skill certification process <cit.>. Prior works have discovered that malicious VPA apps can eavesdrop <cit.> or request users' private information without permission <cit.>. The behavior of several VPA apps contradicts their privacy policies <cit.>.
Additionally, a large number of apps exhibit poor quality, such as terminating unexpectedly <cit.> or failing to understand common user inputs <cit.>. To detect such problems, a thorough exploration of VPA apps' behavior is necessary. Existing methods mainly employed strategies of depth-first search based chatbot-style testing <cit.> or model-based testing (MBT) <cit.>. Since VPA apps cannot roll back to the previous interface, the exploration efficiency can be affected especially when the depth-first search strategy is taken. Such testers have to start from the beginning after searching one path, resulting in repeated tests. They can work effectively on simple apps, but may suffer from low efficiency when facing complex apps. In addition, previous MBT approach falls short in understanding and utilizing semantic information when exploring apps' behavior and constructing the model. Figure <ref> shows two communication logs that illustrate the impact of semantic information on efficiently testing VPA apps. In figure <ref>, between the candidate inputs “Goodbye” and “Service Times”, “Service Times” is more likely to lead to unseen app behavior. Therefore, “Service Times” should have higher initial priority than “Goodbye”. Without considering the semantic relevance of inputs, it is likely that “Goodbye” is selected and the app stops. In figure <ref>, the two apps' outputs represent similar functional semantics but are expressed differently. The user inputs “walk” at the first time, so other inputs like “play” should have higher priority at the second time. However, if different outputs are considered as different functionalities, purposes or context, the same input “walk” will be selected at the second time for thorough testing. The ignorance of outputs' semantic similarity at the level of functionality, purpose and context causes repeated tests. Therefore, the semantic information is crucial in efficient testing of VPA apps. As the large language models (LLM) are known for their strong natural language understanding and processing abilities <cit.>, and previous studies have found that they can be used for downstream tasks with in-context learning <cit.>, we adopt the LLM to drive the testing process to compensate for semantic information loss during the model-based VUI testing. However, employing the LLM for the VUI testing presents the following three challenges: Challenge 1: LLMs can be used to supplement the semantic loss during the model-based testing of VPA apps, but it is difficult for LLMs to maintain the state information of VPA apps accurately. On the one hand, when the testing goes deeper and the context becomes larger than the LLM's limitation, the information required for LLMs to generate an accurate model is incomplete. On the other hand, LLMs can hardly generate a precious model especially when the VPA apps' behavior is complex. However, a wrong model can greatly affect the following exploration. Challenge 2: The results generated by LLMs can be redundant and repeated under VPA apps' context. For example, if the LLM is asked to generate context-related inputs for a given VPA apps' outputs (see figure <ref>), it tends to generate long results, but most VPA apps have difficulty processing these inputs. If state information and exploration strategy is not provided, the LLM can generate repeated inputs for the same state, affecting the testing efficiency. For these reasons, prompts should be carefully designed to help the LLM generate formalized and efficient results. 
Challenge 3: LLM's results are not entirely reliable due to its unexplainability and uncertainty. For example, even if LLMs are prompted to return simple and concise results, they may still generate results that VPA apps cannot understand. Therefore, we need to filter out the unreliable results based on the feedback from VPA apps and our domain knowledge. To address the above three challenges, we propose the following solutions. To tackle Challenge 1, we split the complex LLM-driven model-based testing tasks into three phases: states extraction, input events generation, and state space exploration to increase the accuracy of model construction. In each phase, the LLM only extracts the state and generate input events for the real-time VPA apps' output, so the length of prompt will not exceed the context limitation. Besides, the LLM is only used to make up for the semantic loss during the model construction and exploration, such as merging outputs with similar semantics to one state, generating context-related inputs and selecting an input for efficient exploration, while the model information is stored and maintained locally. For addressing Challenge 2, we embed the information provided by the behavior model into the prompts to help the LLM generate efficient results and avoid repeated tests. Since the complete behavior model is complex and occupies many tokens, adding it to the prompt not only interferes with the extraction of core information but also brings unnecessary expenses. Therefore, we only extract phase-specific information to the prompt. For example, only the state list is provided in the states extraction phase. Meanwhile, by designing appropriate few shots, we enable the LLM to formalize outputs. For the state space exploration, we implement the step-by-step chain-of-thought strategy to guide the LLM in parsing the behavior model and making decisions. To handle Challenge 3, we establish specific rules considering both the behavior model information and VPA apps' feedback to check whether the LLM's outputs at each phase meet our requirements. If they do not pass the checks, we provide feedback prompts for LLMs to regenerate the results. Based on these ideas, we develop the (model-Enhanced Llm drivEn Vpa App's vui TEsting) framework. As a model-based testing method, the framework is divided into three phases: states extraction, input events generation, and state space exploration. These phases are enhanced by the LLM to achieve accurate state extraction and efficient state space exploration. In the states extraction phase, the LLM is prompted to merge the VPA app's outputs with existing states in the behavior model or create a new state. In the input events generation phase, the LLM generates context-related input events based on VPA app's outputs. The states and input events generated by the LLM are used to update the behavior model. Throughout the state space exploration process, the current-state related information from the behavior model is extracted and used to guide the LLM to select an input event for efficient exploration. Our contributions are summarized as follows: * We propose to use the LLM to enhance the model-based testing of VPA apps. This approach combines the model guidance of MBT with the NLP capabilities of the LLM. The LLM's results are used for constructing accurate behavior models and efficiently exploring the state space. * We present a specific feedback mechanism to filter the LLM's unreliable results and guide LLMs for corrections. 
Based on the behavior model information and VPA apps' outputs, we filter out mismatched states, invalid input events and inefficient exploration strategies. * We implement , and validate its coverage, efficiency, and generality. It surpassed the state-of-the-art approach Vitas in state space coverage and efficiency. Ultimately, tests 4,000 Alexa skills and covers 15% of more state space than Vitas. § BACKGROUND §.§ VPA Apps and Behavior Model VPA apps are apps based on smart speakers. Users interact with VPA apps through voice, so the interface of VPA apps is called the voice user interface (VUI). VUIs are typically free of visible graphical interfaces. Therefore, the exchange of all information are purely through voice. While the VUI brings convenience, its invisible feature introduces a range of quality and security concerns, such as unexpected exits <cit.>, privacy violations <cit.>, and expected apps started <cit.>. For this reason, thoroughly exploring VPA apps' behavior while testing the VUI's quality and security issues is of paramount importance. However, VPA apps are not open source for normal testers. A VPA app is composed of the front-end interaction model and the back-end processing code. The development platform provides storage for the front-end interaction model, while the back-end code of VPA apps is stored on the developer's server. As a result, dynamic testing is a commonly used method for testing the VUI of VPA apps. Since the front-end interaction model of VPA apps is designed based on implicit models <cit.>, we propose to use the model-based testing approach to explore the behavior of VPA apps. VPA apps' outputs express their functionalities and purposes. By understanding and analyzing the outputs, states can be extracted. Apps' transfer from one state to another is only triggered by users' inputs. As a result, VPA apps' behavior can be described by the finite-state machine (FSM), which has been proved to be applicable for constructing VPA apps' behavior models <cit.>. A finite-state machine consists of five parts, described as FSM = (Q, Σ,δ,s_0,F). Among them: * Q represents the set of states. Apps' outputs are mapped to states. * Σ represents the set of input events. Users' inputs are mapped to input events. * F is the set of final states, and satisfies F ⊆ Q. VPA apps' final outputs are mapped to final states. * s_0 is the initial state and satisfies s_0 ∈ Q. The initial state is always set as “START”. * δ: Q×Σ→ Q represents a transition function. The input event e that triggers the transition from the state s_0 to the states s_1 is represented as δ(s_0, e) = s_1. §.§ Large Language Model Large Language Model (LLM) is built on the transformer architecture. LLMs have been proved with strong natural language processing capabilities <cit.>. Compared to general language models (LM), LLMs have a vast number of parameters and undergo extensive text training. Due to these characteristics, LLMs can be directly applied to downstream tasks. In addition, methods like fine-tuning <cit.> and in-context learning <cit.> can improve LLM's capabilities for specific downstream tasks. In the in-context learning technique, users only need to provide few samples as a reference for the downstream task, which implies that LLMs can handle downstream tasks through learning from a small dataset. LLMs can be categorized into three types based on the transformer architecture: encoder-only, encoder-decoder, and decoder-only. 
Encoder-only and encoder-decoder are suitable for infilling tasks, while decoder-only models are better at text generation tasks. Considering that our tasks involve the model generation and exploration, we prefer to adopt decoder-only models. Popular decoder-only models include OpenAI's GPT series <cit.>, Meta's Llama series <cit.>, etc. Additionally, there are models specifically designed for code generation tasks such as Codex <cit.> and Codegen <cit.>. § LLM DRIVEN MODEL CONSTRUCTION AND EXPLORATION §.§ Overview As a model-based testing framework, works by constructing the model according to VPA apps' behavior and guiding the exploration based on this model. The behavior model is built by mapping VPA apps' outputs to states and users' inputs to input events (see Section <ref>). As states reflect VPA apps' functionalities, purposes and behavior, different outputs with similar semantics (e.g., functionalities, purposes and behavior) should be mapped to one state. We call these outputs as semantically similar outputs under the context of VPA apps' behavior. Besides, users' inputs should be context related to the apps' outputs so that meaningful states can be discovered. Overall, the states extraction and input events generation require natural language processing, which is the strength of the LLM. In addition, the LLM has proved its ability in understanding graphs <cit.> and reasoning with prompt engineering techniques such as in-context learning and chain-of-thought <cit.>. Our state space exploration task is basically an input event selection task considering factors like historical transitions, invocation frequency and relevance to the current state based on understanding the behavior model (i.e., a graph). Given current state related information from the behavior model, the LLM can be used to select input events for further exploration of VPA apps' behavior. In traditional model-based testing, the model is firstly built and then used to guide the exploration of the state space. However, when testing VPA apps, the initial model is difficult to acquire before interacting with VPA apps as the VPA apps are closed-source and most documents only provide a few lines to describe their functionalities. To solve that problem, we construct VPA apps' behavior model on-the-fly, which means the model is built during the interaction. The behavior model is finally embedded into the prompt to guide the LLM in extracting states and selecting efficient input events for exploration. To save tokens, only phase-specific behavior model information is provided. Based on these ideas, we propose , a model-enhanced LLM driven model-based testing method for VUI testing of VPA apps. Figure <ref> shows the framework of . consists of three phases, and they are all performed by LLMs. The first two phases are for model construction, including states extraction and input events generation. In the third phase, the LLM selects an input event to explore the state space based on the information provided by the behavior model. Since we adopt an on-the-fly model construction approach, these three phases are executed one by one repeatedly. The main processes of these three phases are described below. Phase 1: States extraction. In this phase, VPA apps' outputs and existing states in the behavior model are embedded into the prompt. The LLM decides whether to merge the VPA apps' output with existing states or generate a new state for it. We expect the LLM to map outputs with similar semantics to the same state. 
A state filter is used to filter out mismatched states generated by the LLM. Phase 2: Input events generation. The VPA apps' real-time output is input to the LLM, which generates all possible context-related input events for this output. We expect the input events generated by the LLM to be semantically related to the VPA apps' output and help discover meaningful new states. An input checker is implemented to check the validation of input events according to VPA apps' feedback. Phase 3: State space exploration. The current state and current-state-related information in the behavior model are input to the LLM. The LLM is expected to select one input event by considering factors such as the invocation frequency, historical transitions and relevance to the current state to explore the state space efficiently. Based on the invocation frequency and history transitions, we search whether there is a better input in the input event set. If there is one, we reject the LLM's results and ask for another input event. Whenever we receive an output from VPA apps, we execute the first and second phases to generate states and input events. The states and input events are used for the behavior model construction. Subsequently, we extract information related to the current state from the behavior model and embed it to the prompt, and the LLM selects the most suitable input event at the third phase. After that, the selected input event is fed back to VPA apps and wait for the next output. The whole process will be continued until the time limit is reached or the VPA apps quit. Due to the unexplainability of the LLM, we establish the feedback mechanism to check and filter out its results. Results that do not meet our requirements are rejected, and the reasons are returned to the LLM for regenerating the results. In the following sections, we will introduce the prompts and feedback mechanisms of these three phases respectively. To help express the implementation of these three phases clearly, we introduce the following terms: * app's output: the real-time VPA apps' output. It will be used to extract states. Context-related inputs are generated based on its content. * state: the state extracted from app's output. * state_pre: the previous explored state. * state_next: the next explored state. * inputs: the set of context-related inputs generated for app's output. * input: the input selected by the LLM at state to communicate with the VPA apps. * input_pre: the previous selected input. * model: the behavior model. * model.Q: the set of states in the behavior model. * model.Σ(s): the input events information of state s, including their invocation times. * model.δ(s): the set of transition functions that start from state s. §.§ States Extraction Similar semantics (e.g., functionalities, purposes and context) of VPA apps can be expressed in different ways. The LLM should merge outputs with similar semantics to one state. For each app's output, the LLM is supposed to find a semantic similar state from model.Q or generate a new state. For this reason, only the model.Qis required in this phase. So the input of this phase includes the app's output and model.Q. To avoid redundant results, the LLM is required to only output the state of the given apps' output. To assist the LLM in better understanding this task and formalizing its outputs, we employ the in-context learning strategy. Few shots are in the form of “Input: app's output, model.Q” and “Output: state” pairs. 
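To make the on-the-fly bookkeeping and the Phase-1 prompt concrete, the following Python sketch shows one possible shape of the behavior model and of the states-extraction prompt. The class, fields and example wording (BehaviorModel, add_transition, the few-shot text) are our own illustrative choices, not the exact data structures or prompts used in the implementation.

from dataclasses import dataclass, field

@dataclass
class BehaviorModel:
    """On-the-fly FSM: states Q, per-state input events with call counts, transitions delta."""
    Q: set = field(default_factory=lambda: {"START"})
    sigma: dict = field(default_factory=dict)   # state -> {input event: invocation count}
    delta: dict = field(default_factory=dict)   # (state, input event) -> next state
    F: set = field(default_factory=set)         # final states

    def add_transition(self, s, event, s_next):
        self.Q.update({s, s_next})
        self.sigma.setdefault(s, {}).setdefault(event, 0)
        self.delta[(s, event)] = s_next

# Illustrative few shot for Phase 1 (wording is ours).
FEW_SHOTS = ('Input: "Welcome back! Say play to start or help for options.", ["START"]\n'
             'Output: GameMenu\n')

def states_extraction_prompt(app_output, model, first_time):
    """Phase-1 prompt: merge the output into an existing state or name a new one."""
    query = f'Input: "{app_output}", {sorted(model.Q)}\nOutput:'
    if not first_time:
        return query                                                   # *SHORT PROMPT*
    return ("Map app outputs with similar functionality to one state.\n"  # *MAP INSTRUCTION*
            + FEW_SHOTS + query)                                       # *FEW SHOTS* + query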
As the LLM's results are not trustworthy, we establish a state filter to filter out mismatched states. If a state is mismatched, we provide feedback prompts to request another state from the LLM. The prompts of phase 1 are displayed in Table <ref>. When we first use the LLM for states extraction, we use *LONG PROMPT*. In *LONG PROMPT*, we instruct the LLM to map semantically similar outputs to one states in the behavior model (labeled as *MAP INSTRUCTION*). Few shots are provided for LLMs to understand the state extraction task (labeled as *FEW SHOTS*). Subsequently, we request it to return the corresponding state in the model.Q for the app's output. In other cases, we will use *SHORT PROMPT*. *SHORT PROMPT* only includes the app's output and model.Q. After *LONG PROMPT* or *SHORT PROMPT*, the LLM will generate the state for app's output. If state is rejected by the state filter, we will return *FEEDBACK PROMPT*. Figure <ref> illustrates the state filter in the states extraction phase. Firstly, we check whether state ∈ model.Q or state == app's output. If neither of them is true, we return *NO STATE ERROR*. Otherwise, we proceed to the second step of the check. If state ∈ model.Q, we check whether state and app's output have the same input events (see section <ref> for the generation of inputs). If they have different input events, we return *NOT MERGE SUGGESTION*, otherwise we move to the third step. If state == app's output, we find whether there exists a state_x in model.Q that satisfies the transition function δ(state_pre, input_pre) = state_x and δ(state_pre, input_pre) = state. If such a state_x can be found, we consider that state should be merged to state_x. So we return *SHOULD MERGE SUGGESTION*. §.§ Input Events Generation In section <ref>, the state for the app's output is extracted. To further explore VPA apps' behavior, context related inputs should be generated. Each state has its independent context related input event set, as we consider different states as different contexts. To ensure the context relevance, the LLM is also used in this phase. The inputs generated for the app's output is also the input event set of state. VPA apps expect users to give short and simple inputs, but LLMs tend to generate long and redundant inputs, which most VPA apps cannot understand. To solve this problem, we offer few shots that include five types of VPA apps' outputs (i.e., yes-no question, selection question, instruction question, Wh question and mixed question <cit.>). For the mixed question, we summarize three most common patterns, they are instruction + selection question, Wh + selection question and yes-no + selection question. We provide at least one example for each type of questions in the few shots. They are in the form of “Input: apps' output” and “Output: inputs” pairs. In addition, we set an input checker to check the validation of the input events. The state_next is used to judge whether the input events generated by the LLM are context related. If state_next is equal to state or expresses confusion, we feedback the information to request other inputs. The prompts are displayed in Table <ref>. When we ask the LLM to generate input events for the first time, we use *LONG PROMPT*, which provides *FEW SHOTS* and instructs the LLM to find inputs to the app's output. In other cases, we use *SHORT PROMPT*, which only contains the app's output. After input from inputs is selected (see Section <ref>) and sent to the VPA app, the app will soonly give another output. 
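Before describing how that next output is used to validate the input, a minimal sketch of the Phase-2 prompt assembly may help; the few shots below are illustrative stand-ins for the five question types rather than the verbatim prompts used in our experiments.

# Illustrative few shots covering the five question types; wording is ours, not the paper's.
INPUT_FEW_SHOTS = (
    'Input: "Do you want to continue?"\nOutput: ["yes", "no"]\n'                # yes-no
    'Input: "Would you like news or weather?"\nOutput: ["news", "weather"]\n'   # selection
    'Input: "Say next to hear another fact."\nOutput: ["next"]\n'               # instruction
    'Input: "Which city are you in?"\nOutput: ["Seattle", "New York"]\n'        # Wh
    'Input: "Say stop to quit, or pick rock, paper or scissors."\n'             # mixed
    'Output: ["stop", "rock", "paper", "scissors"]\n'
)

def input_generation_prompt(app_output, first_time):
    """Phase-2 prompt: ask for short, context-related user inputs to app_output."""
    query = f'Input: "{app_output}"\nOutput:'
    if not first_time:
        return query                                        # *SHORT PROMPT*
    return ("Give the short user inputs a VPA app expects for the output below.\n"
            + INPUT_FEW_SHOTS + query)                      # *FEW SHOTS* + query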
Based on the content of that output, we judge the validity of input. Figure <ref> illustrates the workflow of the input checker. Firstly, we check whether inputs is empty. If it is, we will return *EMPTY ERROR*. If any input event input from the inputs is given to the VPA app and the next state state_next == state or state_next expresses apps' confusion, input is considered as an invalid input event. In this case, we will return *INVALID SUGGESTION*. §.§ State Space Exploration The aim of this phase is to efficiently explore the state space based on the information provided by the behavior model. This is done by finding an input event that is most likely to discover new states (i.e., functionalities) at each state. It is a decision-making problem considering factors such as invocation frequency, historical transitions, and relevance to the current state based on the behavior model (essentially a graph). Due to the fact that LLMs have developed their abilities in understanding graphs <cit.>, and prompt engineering techniques like chain-of-thought can improve the LLM's explainability and capability to handle reasoning tasks <cit.>, the LLM is used for the state space exploration. In the previous two phases, we extract the state and generate the inputs for the apps' outputs. They are used to update the behavior model. The model information is then used to guide the state space exploration. For this reason, the input of this step includes the state and the state related information in the model. The state related information includes the model.δ(state) and the model.Σ(state) (invocation times of each input is updated after it is sent to the app). To improve the LLM's capability of this decision-making task, we employ a strategy combining in-context learning and chain-of-thought. We prompt the LLM to think step-by-step and show its thinking process. In step 1, the LLM is asked to remove the input events that lead to duplicate or wrong state from the historical transitions. In step 2, the LLM finds a never-invoked input event that is most context related. In step 3, the LLM finally chooses one input event from the never-invoked context-related input event in step2 and the invoked and valid (i.e., does not lead to a state that is same as before or represent apps' confusion) input event. Few shots are provided in the form of “Input: state, model.δ(state), model.Σ(state)”, “Thought: step1: xxx, step2: xxx, step3: xxx” and “Output: input” triplets. The LLM is expected to output its thinking process along with the selected input. Similarly, the input given by the LLM will be evaluated and the feedback will be returned. The prompts in this phase are displayed in Table <ref>. The *LONG PROMPT* is used for the first time. *LONG PROMPT* initially outlines the composition and representation of the behavior model (labeled as *MODEL DESCRIPTION*). Then, it offers step-by-step guide of the reasoning process (labeled as *STEP-BY-STEP*). Meanwhile, few shots with the thinking process (labeled as *FEW SHOTS*) are provided. Finally, the LLM is asked to select an input from the inputs to discover new states based on historical transitions in model.δ(state), invocation frequency in model.Σ(state) and relevance to state. In other cases, we will use *SHORT PROMPT*, which only contains state, model.δ(state) and model.Σ(state). After the LLM selects the input, we evaluate it by finding whether there is a probably better input event and return the *FEEDBACK PROMPT*. 
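Building on the behavior-model sketch above, the following Python fragment illustrates one way to serialize the current-state slice of the model into the Phase-3 prompt and to apply the better-input check; the prompt wording, the valid-input bookkeeping and the function names are our own assumptions rather than the exact implementation.

def exploration_prompt(state, model, candidate_inputs):
    """Phase-3 prompt: pick one input event likely to reach an unseen state."""
    history = [f"{s} --{e}--> {t}" for (s, e), t in model.delta.items() if s == state]
    counts = model.sigma.get(state, {})
    steps = ("Think step by step:\n"
             "step1: drop inputs whose past transition loops back or hits an error state;\n"
             "step2: among never-invoked inputs, find the one most related to the state;\n"
             "step3: choose between that input and the least-invoked valid input.\n")
    return (f"Current state: {state}\n"
            f"Transitions from it: {history}\n"
            f"Invocation counts: {counts}\n"
            f"Candidate inputs: {candidate_inputs}\n" + steps +
            "Return the chosen input and your reasoning for each step.")

def has_better_input(chosen, state, model, valid):
    """Return a valid, less-invoked alternative if one exists (valid: event -> bool)."""
    counts = model.sigma.get(state, {})
    for alt, c in counts.items():
        if alt == chosen:
            continue
        if valid.get(alt, True) and (c < counts.get(chosen, 0) or not valid.get(chosen, True)):
            return alt
    return None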
Figure <ref> illustrates the process of better inputs checker that evaluates the input and return different *FEEDBACK PROMPT* in the third phase. Firstly, the better input checker checks if input ∈ inputs. If not, we return *NO INPUT ERROR*. Otherwise, it determines whether there is a better input event input_x compared with input based on the invocation frequency and history transitions. If input_x is valid and invoked less frequently than input, then input_x is better than input. If input is invalid but input_x is valid, then input_x is also a better choice. In both cases, we return *BETTER INPUT SUGGESTION*. The input that passes the above checks is sent to the VPA app. § EVALUATION We implement based on GPT-4 <cit.> and analyze its coverage and efficiency. The performance of is compared with the state-of-the-art model-based VUI testing method Vitas <cit.>. Besides, chatbot-style testers are classic VPA apps testing approach, but Vitas was evaluated to outperform traditional chatbot-style testers in coverage and efficiency. However, with the development of LLMs, LLMs as chatbots may have stronger VPA apps testing abilities, so GPT4(chatbot) is also set as a baseline. Additionally, we conduct ablation experiments to assess the contribution of 's each phase to the final state space coverage. We also implement on Llama2-70b-chat <cit.> and evaluate 's applicability on different LLMs. Finally, we conduct a large-scale testing on Alexa skills to evaluate 's generality <cit.>. §.§ Settings Dataset: We use the large scale dataset of Vitas <cit.> as our basic dataset. From this dataset, we filter out skills with no ratings. Then, we roughly confirm 4,000 skills with consistent behavior to form the large-scale dataset. These 4,000 skills cover all categories on the Amazon skills website. For the use of conducting an intensive evaluation, we also build a benchmark with 50 Alexa skills. These 50 skills are checked to be stable and available. Baselines: We compare with two baselines, as shown in table <ref>. The simulator provided by Amazon<cit.> is used as our testing platform. The evaluation was conducted on the Ubuntu 18.04.4 machines with AMD EPYC 7702P 64-Core Processor CPU@1.996GHz and 4GB RAM. Coverage metrics: VPA apps are not open source, so the ground truth of the entire state space of certain VPA apps cannot be acquired in advance. Furthermore, as merges states with similar semantics to avoid repeated testing while Vitas does not, we call the states generated by as semantic states, while the ones discovered by Vitas as sentence states in the evaluation. Consequently, to ensure a uniform measurement, we use to process the states discovered by Vitas, and merge them to semantic states correspondingly. Then, we use the number of the unique semantic states achieved by and all the baselines used in certain evaluations as the total state space for each evaluation respectively for a fair comparison. §.§ Evaluation of We aim to address the following research questions: RQ1: How does the semantic state coverage and efficiency improve when using GPT-4 to enhance the model construction and exploration? RQ2: Do all phases in contribute to the state exploration of VPA apps? RQ3: How effective is 's framework when applied to other LLMs? RQ4: How is the coverage rate of on all types of skills compared with Vitas? §.§.§ Study1: Coverage and efficiency We set the time limit as 10 minutes for to test each skill. 
The baselines are allowed to test skills using the same interaction rounds (an input and an output form an interaction round) as . Firstly, we compare the sentence states and semantic states achieved by and the baselines. Then, we compare their average semantic state coverage with interaction rounds. Figure <ref> shows the sentence states and semantic states maintained by and baselines. It suggests that the sentence states can be greatly compressed when semantic information is considered. merges outputs with similar semantics to one state for testing, which greatly reduces the original state space. In addition, achieves more sentence and semantic states than the baselines. In order to evaluate 's coverage ability along with the efficiency, we calculate the average semantic state coverage of and baselines on the benchmark of varying interaction rounds in figure <ref>. The horizontal axis represents the average semantic state space rate, while the vertical axis denotes the number of interaction rounds. When the interactions go deeper, the advantage of over Vitas and GPT4(chatbot) is more evident. After only 3 rounds of interactions, shows its leading exploration efficiency and stays ahead until the end. Finally, can achieve over 80% of average semantic state coverage after only 20 rounds of interactions, while Vitas and GPT4(chatbot) can only achieves a final coverage of 68% and 45% respectively. Among the baselines, the traditional model-based tester Vitas has relatively higher performance. However, Vitas did not exploit the semantic information during VUI testing to help the model construction and exploration, so it lags behind in terms of semantic state coverage. Although GPT-4 is a strong LLM, directly using it as a chatbot for VPA apps testing performs worse than Vitas. GPT4(chatbot) lacks the guidance for state space coverage, which prevents it from discovering deep states. Enhanced with , the LLM's performance in semantic state coverage is greatly improved. Answers to RQ1: The sentence states can be greatly reduced when semantic information is considered. Compared with baselines, achieves more sentence and semantic states. With the increase of interaction rounds, shows evident advantage of semantic state coverage and efficiency compared with Vitas and GPT4(chatbot). §.§.§ Study2: Ablation Studies To validate the rationality of prompting the LLM and returning the feedback at each phase, we conduct an ablation study. In “w/o States extraction” (Section <ref>), “w/o Input events generation” (Section <ref>) and “w/o State space exploration” (Section <ref>), we remove the entire *FEEDBACK PROMPT*, and the in-context learning, chain-of-thought and behavior model information of the corresponding phase in the *LONG PROMPT*. We then let them test the benchmark using the same interaction rounds as and compare their performance on the average semantic state coverage rate. Figure <ref> shows the average semantic state coverage rate of , w/o States extraction, w/o Input events generation and w/o State space exploration on the benchmark. The results prove that the elimination of any phase could lead to a decrease in state space coverage. Among them, removing the Input events generation phase has the largest impact on the final coverage, as the original input events generated by the LLM are commonly misunderstood by VPA apps. Eliminating the w/o State space exploration phase also influences the performance. 
That is because the behavior model information and chain-of-thought strategy provides the guidance for LLMs to explore efficiently. Without the States extraction phase, the semantic state space is largely redundant, resulting in repeated tests of semantically similar states. Answers to RQ2: After carrying out the ablation study on 's three phases, we find that each of 's three phases contribute to the overall semantic state coverage rate. Removing the input events generation phase has the greatest impact on the final coverage rate. §.§.§ Study3: Applicability We implement on Llama2-70b-chat <cit.>, referred to as -Llama2-70b-chat, to evaluate the performance of when it is implemented by other LLMs. As a comparison, we also use Llama2-70b-chat as a chatbot to test VPA apps, and label it as Llama2-70b-chat(chatbot). By comparing the average semantic state coverage rate of -Llama2-70b-chat, Vitas and Llama2-70b-chat(chatbot), we evaluate the applicability of . Similarly, -Llama2-70b-chat tests skills in the benchmark for 10 minutes. Then, Vitas and Llama2-70b-chat(chatbot) tests the benchmark using the same interaction rounds as -Llama2-70b-chat. Figure <ref> shows that -Llama2-70b-chat outperforms Vitas and Llama2-70b-chat(chatbot) on the average semantic state coverage rate. 's ability can be influenced by the LLM on which it is implemented on, but the result shows that -Llama2-70b-chat still has an advantage over the SOTA tester Vitas. Besides, increases Llama2-70b-chat's coverage of VPA apps' state space by about 30%. Overall, 's framework is applicable to other LLMs. Answers to RQ3: We implement the framework on Llama2-70b-chat (e.g., -Llama2-70b-chat) and compare it with Vitas and Llama2-70b-chat(chatbot). -Llama2-70b-chat has an advantage over Vitas and Llama2-70b-chat(chatbot) in the average semantic state coverage rate. Additionally, increases Llama2-70b-chat's coverage of VPA apps' state space by about 30%. Therefore, the framework is applicable to other LLMs. §.§.§ Study4: Generality In the preceding studies, we evaluate the coverage and efficiency capabilities of on the small scale benchmark. In this study, we use to test 4,000 skills in the large-scale dataset. By comparing its average coverage rate with Vitas in all categories, we evaluate its ability to test skills with various functionalities. As the cove The total coverage is set as the union of the unique coverage achieved by Vitas and . The average semantic state coverage rate with different categories compared with Vitas on the large scale dataset is shown in figure <ref>. The results demonstrate that can achieve over 15% of higher semantic state coverage rate in most categories compared with Vitas. It proves 's ability to test skills with different behavior. is enhanced with LLMs, which are trained on massive amounts of data, enabling their abilities to handle a wide variety of VPA apps. As a comparison, Vitas is designed with fixed patterns to process all types of VPA apps. Consequently, Vitas may lack generality when applied to specific VPA apps. Answers to RQ4: Compared with Vitas, demonstrates a 15% of higher semantic state coverage rate on most categories of skills. The results prove the generality of on testing various VPA apps. § DISCUSSION §.§ 's limitations 's limitations primarily lie in the large language model. Firstly, although the LLMs can achieve good results, their outputs are non-deterministic. Hence, the performance may vary with each test. 
Secondly, the thinking process of the LLM is not always accurate. As we introduce the chain-of-thought method in the third phase, the LLM will output its thinking process. While chain-of-thought can enhance coverage and efficiency, the thinking process of the LLM is not always right and we cannot confirm whether the LLM is actually thinking as we expected. Lastly, in rare cases, the LLM may not rectify the results even after multiple rounds of feedback prompts. In such instances, we consider that our feedback strategy cannot steer the LLM out of its hallucination and we resort to generate states and input events based on simple rules. § RELATED WORK VPA apps Testing: Several studies have been conducted to test quality, privacy or security related problems on VP apps <cit.>. SkillExplorer <cit.>, VerHealth <cit.> and SkillDetective <cit.> are chat-bot style testers that focuses on detecting skills' privacy violation behavior. SkillExplorer and SkillDetective <cit.> adopt the DFS-based exploration approach. VUI-UPSET <cit.> is a chat-bot style testing approach to generate correct paraphrases while detecting bugs. Vitas <cit.> uses the model-based testing to test VPA apps' problems related to quality, privacy and security. Despite the improvement in coverage and efficiency, it uses simple rules to construct the model and fails to consider the semantic information. SkillScanner <cit.> is the first static analysis method to identify skills' policy violations at the development phase based on a dataset collected from the GitHub. Compared with them, adopts the model-based testing approach to improve the exploration efficiency and introduces to use the LLM to supplement missing semantic information for model construction and exploration. Security and Privacy of VPA apps: Increasing number of research focuses on security and privacy issues of VPA apps <cit.>. Kumar et al. proposes the skill squatting attack <cit.>. Several searches detected the weakness of the automatic speech recognition (ASR) system, which is vulnerable to adversarial sample attacks and out-of-band signal attacks <cit.>. Many efforts have been spent on detecting problematic privacy policies and potential privacy violating behavior <cit.>. Different from them, sought to thoroughly explore the VPA apps' behavior so that sufficient problems can be discovered. Large Language Model for Software Testing: As a booming new technology, Large Language Models are applied to many areas, including software testing. Codet <cit.> uses the LLM to automatically generate test cases for evaluating the quality of a code solution. CodaMosa <cit.> asks Codex to generate test cases when the search based software testing method reaches the bottleneck. TitanFuzz <cit.> uses LLMs to generate and mutate input DL programs for fuzzing DL libraries. Its follow-up work, FuzzGPT <cit.>, primes LLMs to synthesize bug-triggering programs for fuzzing and shows improved bug detecting performance. Other research focused on testing the GUI of mobile apps by generating context-related texts or human-like actions <cit.>. § CONCLUSION In this work, we propose , a LLM driven model-based testing framework for VPA apps. uses the LLM for constructing the behavior model and exploring the state space to compensate for the loss of semantic information. It extracts states from VPA apps' outputs and generates input events to these outputs by providing few-shots to LLMs. The LLM's exploration ability is enhanced by chain-of-thought. 
Moreover, the framework sets checkers to analyze the LLM's results and uses feedback prompts to ask the LLM for adjustments. Our experiments show that it achieves higher coverage than the state-of-the-art tool Vitas and than LLMs used as chatbots, and does so efficiently. On a large-scale dataset of 4,000 Alexa skills, it achieves a coverage rate about 15% higher than Vitas in all categories.
http://arxiv.org/abs/2407.03283v1
20240703170931
From B Specifications to $\{log\}$ Forgrams
[ "Maximiliano Cristiá" ]
cs.SE
[ "cs.SE" ]
Class notes

From B Specifications to {log} Forgrams

Maximiliano Cristiá

Computational Science 3
Bachelor in Computer Science
Faculty of Science, Technology and Medicine
University of Luxembourg

Maximiliano Cristiá – 2023 – All rights reserved

These class notes have been written during a visit to University of Luxembourg from February until June, 2023.

§ WHAT IS {log}?

{log} (`setlog') is a constraint logic programming language. Besides, it's a satisfiability solver and as such it can be used as an automated theorem prover. One of {log}'s distinctive features is that sets are first-class entities of the language.

{log} was first developed by Gianfranco Rossi and his PhD students in Italy during the mid '90s. Since 2012 Gianfranco Rossi and Maximiliano Cristiá have worked together on extending {log} in different directions.

As shown below, {log} is at the intersection of several Computer Science areas. {log} can be used as a formal verification tool (<https://en.wikipedia.org/wiki/Formal_verification>) because it performs automated proofs (<https://en.wikipedia.org/wiki/Automated_theorem_proving>) over a very expressive theory. It's also a declarative programming language (<https://en.wikipedia.org/wiki/Declarative_programming>), meaning that programmers express the logic of a computation without describing its control flow. In particular, {log} implements declarative programming as an instance of a constraint logic programming (CLP) system (<https://en.wikipedia.org/wiki/Constraint_logic_programming>) implemented in Prolog (<https://en.wikipedia.org/wiki/Prolog>). The code written in {log} is quite similar (in its essence, not in its form) to formal specifications written in languages based on set theory and set relation algebra such as B, Z (<https://en.wikipedia.org/wiki/Z_notation>) and Alloy (<https://en.wikipedia.org/wiki/Alloy_(specification_language)>).

§.§ Installation

{log} is a Prolog program. Then, you first need to install a Prolog interpreter. So far {log} runs only on SWI-Prolog (<http://www.swi-prolog.org>). After installing SWI-Prolog you must download {log}, all the library files and its user's manual from here:

<https://www.clpset.unipr.it/setlog.Home.html>

You should also read {log}'s user's manual:

<https://www.clpset.unipr.it/SETLOG/setlog-man.pdf>

§.§ Using {log}

As we have said, {log} is a satisfiability solver. This means that {log} is a program that determines whether or not a given formula is satisfiable. Once you access {log} it presents a prompt:

{log}=>

You can now ask {log} to solve formulas. For example:

{log}=> un({a,2},B,{X,2,c}).

The atomic predicate means {a,2} ∪ B = {X,2,c}, where X and B are variables and a and c are constants. In {log} variables begin with an uppercase letter, and constants begin with lowercase letters. Note that the formula ends with a dot.

Hence, when we type in that formula {log} will try to find values for B and X that satisfy the formula; this is why we say that {log} is a satisfiability solver. So, {log} asks itself: are there values for B and X that make the formula true? {log} answers the following:

B = {c},
X = a
Another solution? (y/n)

As you can see, {log} produces a solution and asks whether or not we want to see other solutions. In this case there are three more solutions:

B = {2,c},
X = a
Another solution? (y/n)

B = {a,c},
X = a
Another solution? (y/n)

B = {a,2,c},
X = a
Another solution?
(y/n) no log=> When there are no more solutions or when we don't type in `', says `' and prints the prompt again. Let's try another example. log=> un(a,2,B,X,2,c) c nin B. The atomic predicate means c ∉ B and `' means conjunction (). In this case answers . Why is that? Because there are no values for and that make the formula true. Clearly, as doesn't belong to but at the same time it belongs to the union between that set and the only chance to satisfy the formula is when belongs to . But we rule this possibility out by conjoining . Then, is saying “your formula is unsatisfiable”. Summarizing, if we see anything different from `' we know the formula is satisfiable; otherwise, it's unsatisfiable. § AN EXAMPLE OF A B SPECIFICATION TRANSLATED INTO These class notes are focused in showing how B specifications can be translated into and, later, on how can be used to run simulations and automated proofs. Many B specifications can be easily translated into . This means that can serve as a programming language in which a prototype of a B specification can be immediately implemented. We have already learned to write some B specifications. Here, we will show how these B specifications can be translated into . To that end we will use a running example. Later on we will explain with some detail how B elements not appearing in the example can be translated into ; we will see that some B elements can be translated in more than one way. §.§ The running example The specification to be used as running example is known as the birthday book. It's a system which records people's birthdays, and is able to issue a reminder when the day comes round. The problem is borrowed from <cit.>. §.§ The B specification The B machine containing the specification of the birthday book system will be called BirthdayBook. In our account of the system, we need to deal with people's names and with dates. We also need a type for the messages outputted by some of the operations. Then, we introduce the following types. BirthdayBook NAME; DATE; MSG = {ok, nameExists} ……… Now, we define two state variables for our machine: BirthdayBook NAME; DATE; MSG = {ok, nameExists} known, birthday ……… where known is the set of names with birthdays recorded; and birthday is a function which, when applied to certain names, gives the birthdays associated with them. The invariant of our machine is the following. BirthdayBook NAME; DATE; MSG = {ok, nameExists} known, birthday known ∈NAME birthday ∈NAME DATE known = (birthday) ……… As can be seen, the value of known can be derived from the value of birthday. This makes known a derived component. It would be possible to specify the system without mentioning known at all. However, giving names to important concepts helps to make specifications more readable. The specification doesn't commit the programmer to represent known explicitly in an implementation. Besides the types for the variables are in accordance with the intended use described above. The initial state of the birthday book is the following. BirthdayBook NAME; DATE; MSG = {ok, nameExists} known, birthday known ∈NAME birthday ∈NAME DATE known = (birthday) known, birthday := {}, {} ……… The first operation we specify is how to add a birthday to the birthday book. As we did with the savings account specification we model the normal and abnormal behaviors outputting convenient messages in each case. 
msg ←addBirthday (name, date) 1    name ∈ NAME date ∈ DATE 1 2 name ∉ known 2 known, birthday, msg := known ∪{name}, birthday ∪{name ↦ date}, ok 2 msg := nameExists 2 1 Note how both state variables are updated accordingly. The second operation to be specified is the one that shows the birthday of a given person. date ←findBirthday (name) 1    name ∈ NAME name ∈ known 1 date := birthday(name) 1 Finally we have an operation listing all the persons whose birthday is a given date. cards ←remind (today) 1    today ∈ DATE 1 cards := (birthday {today}) 1 The complete B specification of the birthday book can be seen in Figure <ref>. §.§ The forgram The forgram resulting from the translation of the B specification must be saved in a file with extension +.pl+ or +.slog+. It is convenient to put this file in the same folder where was installed. A B machine is translated as a collection of clauses and declarations written in a single file. A clause is a sort of subroutine or subprogram or procedure of a regular programming language. Each clause can receive zero or more arguments. In variables must always begin with an uppercase letter or the underscore character (+_+), although this is usually saved for special cases. Any identifier beginning with a lowercase letter is a constant. Then, for instance, the state variables of the birthday book will be +Known+ and +Birthday+, instead of +known+ and +birthday+ because in this case they would be constants. We'll see how variables are typed in Section <ref>. For now we'll not pay much attention to types. §.§.§ Translating the section In general, the sections is not translated into . The sets declared in this section can be freely introduced in . We'll see more on this in Section <ref>. §.§.§ Translating the section The section is translated as a declaration as follows: variables([Known, Birthday]). Note that declarations end with a dot (`'). §.§.§ Translating the section Before translating the invariant we normalize it: known ∈ NAME birthday ∈ NAME DATE 3 pfun(birthday) known = (birthday) The first part of the invariant (known ∈ NAME birthday ∈ NAME DATE) is translated as type declarations, whereas the second part is translated as a clause declared as invariant. Type declarations will be introduced in Section <ref>. The code is the following: invariant(birthdayBookInv). birthdayBookInv(Known,Birthday) :- dom(Birthday,Known) pfun(Birthday). Then, the first line declares the clause named to be an invariant. The second line is a clause. Clauses are of the form: head(params) :- body. where is a formula. In this case the formula is simply which is equivalent to known = (birthday) pfun(birthday). Alternatively, you can split the invariant in smaller pieces. Actually, each conjunct in the section may become an invariant. This strategy is a good option when the specification is large and complex because later it will be easier for to discharge invariance lemmas. In this case the code look like this: invariant(birthdayBookInv). birthdayBookInv(Known,Birthday) :- dom(Birthday,Known). invariant(pfunInv). pfunInv(Birthday) :- pfun(Birthday). Note that declarations and clauses end with a dot (`'). §.§.§ Translating the section The section is translated as a declaration and a clause as follows: initial(birthdayBookInit). birthdayBookInit(Known,Birthday) :- Known = Birthday = . That is, we first declare that the clause corresponds to the initial state of the system and then the clause is defined. Here there's an important difference w.r.t. 
the B specification because the body of the clause is a formula and not a multiple assignment. Indeed, and are predicates. We could have written them also as and because the symbol `' is simply logical equality. In turn `' means conjunction (). Hence, we could have written as follows: birthdayBookInit(Known,Birthday) :- = Birthday = Known. In any case, the implementation of the section follows the semantics of the B specification. §.§.§ Translating operations A B operation is translated as a clause and a declaration indicating that the clause is an operation. When a B operation is translated, the corresponding clause receives as arguments all the state variables, all the input parameters and all the output parameters. Besides, for each state variable v the clause will also receive v_, which represents the value of v in the next state. That is, in we have to represent the next state explicitly with a second set of variables. Hence, the head of the clause corresponding to the B operation named addBirthday is the following: addBirthday(Known,Birthday,Name,Date,Known_,Birthday_,Msg) where +Name+ and +Date+ correspond to input parameters name and date declared in addBirthday; +Known+ and +Birthday+ represent the before state while +Known_+ and +Birthday_+ represent the after state; and corresponds to the output parameter. Now we give the complete specification of the clause preceded by its declarion: operation(addBirthday). addBirthday(Known,Birthday,Name,Date,Known_,Birthday_,Msg) :- (Name nin Known un(Known,Name,Known_) un(Birthday,[Name,Date],Birthday_) Msg = ok or Name in Known Known_ = Known Birthday_ = Birthday Msg = nameExists ). That is, the first line declares that is an operation. Then, the --statement in addBirthday is translated as a logical disjunction (`'). The condition of the conditional statement, name ∉ known, is translated as . The word `' in means ∉. If the condition is true the branch specifies the multi assignment: known, birthday, msg := known ∪{name}, birthday ∪{name ↦ date}, ok This multi assignment is translated as a conjunction of constraints: un(Known,Name,Known_) un(Birthday,[Name,Date],Birthday_) Msg = ok The meaning of these constraints is as follows: * +un(Known,Name_i,Known_)+ means Known_ = Known ∪{Name}. That is, in is equivalent to C = A ∪ B. * Similarly, is Birthday_ = Birthday ∪{Name ↦ Date}. That is, in the ordered pair x ↦ y is written as . When the condition of the --statement is false, we have the assignment msg := nameExists. This means that the state of the machine doesn't change and that the machine outputs nameExists. In we first need to write the negation of the condition, that is or . Then, we must say that the machine doesn't change the state and that nameExists is outputted. We do this with the conjunction: Known_ = Known Birthday_ = Birthday Msg = nameExists As and represent the next state, the equalities and mean that the state doesn't change. Finally, observe that the section hasn't been translated. In this case the section contains only type declarations (name ∈ NAME date ∈ DATE). The translation of type declarations will be seen in Section <ref>. Now we give the translation of findBirthday. operation(findBirthday). findBirthday(Known,Birthday,Name,Date,Known,Birthday) :- Name in Known applyTo(Birthday,Name,Date). where +applyTo+ is a predicate implementing function application. That is, +applyTo(F,X,Y)+ is true if and only if F(X) = Y holds. 
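For example, with a small birthday book given in extension we can query applyTo directly (an illustrative sketch; sets are written here with the usual {log} braces and the sample dates are the ones used throughout these notes):

{log}=> applyTo({[maxi,160367],[caro,201166]},caro,D).

D = 201166

That is, applying the function to caro yields caro's recorded birthday.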
Note that +applyTo(F,X,Y)+ makes sense only if is in the domain of +F+, which in turn is a function at least on . As with addBirthday the type declaration name ∈ NAME isn't included in the body of the clause. Besides, note how we say that the operation doesn't change the state. Instead of including in the body of the clause we don't include and in the head but two copies of the before-state variables. This is interpreted by as the operation not changing the state. We couldn't do this in addBirthday because there's one branch of that operation that changes the state. Finally, the translation of remind is the following: operation(remind). remind(Known,Birthday,Today,Cards,Known,Birthday) :- rres(Birthday,Today,M) dom(M,Cards). This is an interesting example because it shows how set and relational expressions must be translated. Given that in set and relational operators are implemented as predicates, it's impossible to write set and relational expressions. Instead, we have to introduce new variables (such as +M+) to “chain” the predicates. Predicate +rres(R,A,S)+ stands for S = R A. Then, the body of the clause corresponds to the following B predicate: m = birthday {today} cards = dom(m). As remind doesn't change the state we repeat the state variables in the head of the clause. § TYPES IN So far we haven't given the types of the variables. provides a typechecker that can be activated and deactivated by the user. 's type system is described in detail in chapter 9 of user's manual. Here we will give a broad description of how to use types in . 's type system allows users to define type synonyms to simplify the type declaration of clauses and variables. For example, we can define the following type synonyms for the birthday book: def_type(bb,rel(name,date)). def_type(kn,set(name)). def_type(msg,enum([ok,nameExists])). where +bb+ is a type identifier o synonym of the type +rel(name,date)+. In +rel(name,date)+, +name+ and +date+ correspond to the basic types NAME and DATE of the B specification. B basic types can be introduced in without any previous declaration. In basic types must begin with a lowercase letter (i.e. they are constants). In turn, +rel(name,date)+ corresponds to the type of all binary relations between +name+ and +date+. That is, +rel(name,date)+ corresponds to NAME DATE in B. corresponds to NAME in B and corresponds to the set {ok, nameExists} which we named MSG in the B specification. These type synonyms allow us to declare the type of the +addBirthday+ operation: dec_p_type(addBirthday(kn,bb,name,date,kn,bb,msg)). The type declaration must come before the clause definition: operation(addBirthday). dec_p_type(addBirthday(kn,bb,name,date,kn,bb,msg)). addBirthday(Known,Birthday,Name,Date,Known_,Birthday_,Msg) :- (Name nin Known ... The +dec_p_type+ declaration has only one argument of the following form: clause_name(parameters) In turn, +parameters+ is a list whose elements corresponds one-to-one to the clause arguments. Then, the type of +Known+ is +kn+, the type of +Birthday+ is +bb+, etc. The following is the typed version of the +remid+ operation. operation(remind). dec_p_type(remind(kn,bb,date,kn,kn,bb)). remind(Known,Birthday,Today,Cards,Known,Birthday) :- rres(Birthday,Today,M) dom(M,Cards) dec(M,bb). This clause is interesting because it shows how variables local to the clause are typed by means of the +dec(V,t)+ predicate. Indeed, +dec(V,t)+ is interpreted as “variable +V+ is of tye +t+”. 
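The remaining clauses are typed in the same way. As an illustration, a plausible typed heading for findBirthday (a sketch following the conventions above; this clause has no local variables to declare with dec) is:

operation(findBirthday).
dec_p_type(findBirthday(kn,bb,name,date,kn,bb)).

where the six arguments correspond, in order, to the before state, the input name, the output date and the after state, exactly as in the untyped version given earlier.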
The forgram including type declarations of the complete translation of the birthday book can be found in Appendix <ref>. As can be seen in that appendix, all the clauses, including and , are typed. Recall that partial functions aren't a type in B. The same happens in ; in fact it is impossible to define the type of all partial functions. The natural numbers are another example of a set that isn't a type. This means that if in B we have f ∈ X Y in we declare +F+ to be of type +rel(x,y)+ and then we should prove that +F+ is a function as an invariant. Likewise, if in B we declare x ∈, in we must declare +X+ to be of type +int+ and then prove that +0 =< X+ is an invariant. In general, when a B specification is translated into it is convenient to first normalize the B specification and then start the translation into . In this case B types are translated straightforwardly and the predicates introduced due to the normalization process become constraints at the level (i.e. +0 =< X+) or they are proved to be invariants. For instance, x ∈ is a non-normalized declaration because isn't a type (it's a set). The normalized declaration is x ∈ plus x ≥ 0 conjoined in the section or in the section of an operation. In this case, in the type of x is +int+ and we should prove that x is always greater than or equal to zero (i.e., that 0 ≤ x is an invariant), or simply assert that as a precondition. § TRANSLATING B SPECIFICATIONS INTO In this section we show how the most used elements appearing in B specification are translated into . §.§ Translating arithmetic expressions Almost all Z arithmetic expressions are translated directly into , with some exceptions. The relational symbols ≤, ≥ and ≠ are translated as +=<+, +>=+ and +neq+, respectively. The arithmetic operators are the usual ones: .+., .-., .*., .div. y .mod.. An equality of the form x' = x + 1 is translated as !X_ is X + 1! (that is, in arithmetic equalities you mustn't use `+=+' but `+is+'). Furthermore, if in B we have A = {x, y - 4} (A, x and y variables) it has to be encoded as: +A = X,Z Z is Y - 4+, where +Z+ is a variable not used in the clause. The problem is that doesn't evaluate arithmetic expressions unless the programmer forces it by using the +is+ operator. This means that if in we run +X,Y - 4 = Y - 3 - 1,X+, the answer will be +no+ because will try to find out whether or not +Y - 4 = Y - 3 - 1+ without evaluating the expressions (that is, it will consider them, basically, as character strings where +Y+ is an integer variable and thus it is impossible for the equality to hold regardless of the value of +Y+). On the contrary, if we run +X,A = B,X A is Y - 4 B is Y - 3 - 1+ will return several solutions (with some repetitions), meaning that the sets are equal in several ways. The same applies to the +neq+ predicate: for +Y - 4 neq Y - 3 - 1+ is true. As a consequence we must write: +H is Y - 4 U is Y - 3 - 1 H neq U+. However, this is not necessary with the order predicates: .X + 1 > X. is satisfiable but +X - 1 > X+ isn't. §.§ Translating ordered pairs Ordered pairs are encoded as Prolog lists of two elements. For instance, if x is a variable (x,3) or x ↦ 3 is translated as +[X,3]+. If in B we have p ∈ X Y then the type declaration for p is , where corresponds to the encoding of type X in ; similarly for . §.§ Translating sets §.§.§ Extensional sets — Introduction to set unification In the empty set is written as in B, . The set {1,2,3} is simply translated as +1,2,3+. 
If one of the elements of the set is a variable or an element of an enumerated type, take care of the differences concerning variables and constants in B and . For example, if in B x is a variable, then the set {2,x,6} is translated as +2,X,6+; and if in B Run is an element of a set declared in the section, then the set {2,Run,6} is translated as +2,run,6+. However, provides a form of extensional sets that, in a sense, is more powerful than the one offered in B. The term +.../...+ is called extensional set constructor. In +E/C+ the second argument (i.e. +C+) must be a set. +E/C+ means {E}∪ C. Then, there are solutions where E ∈ C. To avoid such solutions (in case they're incorrect or unwanted) the predicate E ∉ C must be explicitly added to the formula. In order to make the language more simple, accepts and prints terms such as +1,2 / X+ instead of +1 / 2 / X+. The extensional set constructor is useful and in general it's more efficient than other encodings. For example, the B assignment (assume d is a variable): A := A ∖{d} can be translated by means of the predicate +diff+, whose semantics is equivalent to ∖ (see Table <ref>): diff(A,D,A_) Bu it also can be translated by means of an extensional set: A = D / A_ D nin A_ or D nin A A_ = A which in general is more efficient. That is, the predicate +A = D / A_+ unifies +A+ with +D / A_+ in such a way that it finds values for the variables to make the equality true. If such values don't exist the unification fails and tries the second disjunct. Why we conjoined +D nin A_+? Simply because, for instance, +A = 1,2+, +D = 1+ and +A_ = 1,2+ is a solution of the equation but it isn't a solution of A := A ∖{d}. Precisely, when +D nin A_+ is conjoined all the solutions where +D+ belongs to +A_+ are eliminated. solves equalities of the form +B = C+, where +B+ and +C+ are terms denoting sets, by using set unification. Se unification is at the base of the deductive power of making it an important extension of Prolog's unification algorithm. Set unification is inherently computationally hard because finding out whether or not two sets are equal implies, in the worst case, computing all the permutations of their elements. On top of that, it is the fact that can deal with partially specified sets, that is sets where some of their elements or part of the set are variables. For these reasons, in general, will show efficiency problems when dealing with certain formulas but, at the same time, we aren't aware of other tools capable to solve some of the problems can. §.§.§ Cartesian products In Cartesian products are written +cp(A,B)+ where +A+ and +B+ can be variables, extensional sets and Cartesian products. §.§.§ Integer intervals A B integer interval such as m n is translated as +int(m,n)+. and can be integer constants or variables. If we need to write something like m+1 2*n+3 we do as follows: , where and must be new variables. §.§ Translating set and relational operators Set, relational, functional and sequence operators are translated as shown in Tables <ref>, <ref> and <ref>. In order to be able to work with the sequence operators shown in Table <ref> load the corresponding library file (e.g. +consult('setlogliblist.slog')+) into the environment. The cardinality operator accepts as second argument only a constant or a variable. Hence, if we run !size(A,X + 1)! answers +no+; instead if we run !size(A,Y) Y is X + 1! (+Y+ must be a variable not used in the clause) the answer is +true+ because the formula is satisfiable. 
will answer +no+ if we execute !size(A,Y) Y = X + 1!. §.§ Translating function application One interesting application of set unification is the application of a function to its argument. Given that partial functions are frequently used in B it's necessary to add predicates of the form x ∈ f, before attempting to apply f to x. The translation of these formulas into can be done by using the predicate +applyTo+ or by using a set membership predicate which leads to set unification. For example the B formula: x ∈f f(x) = y can be translated in a direct fashion: dom(F,D) X in D applyTo(F,X,Y) or just using : applyTo(F,X,Y) or using set unification (if we assume that is a function): F = [X,Y] / G [X,Y] nin G The definition of is the following: applyTo(F,X,Y) :- F = [X,Y] / G [X,Y] nin G comp([X,X],G,). If we know that x ∈ f the there exist +Y+ and +G+ such that +F = [X,Y] /+ +G [X,Y] nin G+. Besides, if we are saying that we can apply f to x is because there is one and only one ordered pair in f whose first component is x. Note that we aren't saying that f is a function, we're just saying that f is locally a function on x (it might well be a function in other points of its domain but we don't know that yet). Saying that in f there is exactly one ordered pair whose first component is x is the same than saying that there are no ordered pairs in +G+ whose first component is x. We say this by using the composition operator defined over binary relations, namely +comp+ (see Table <ref>): +comp([X,X],G,)+. Indeed, this predicate says that when +[X,X]+ is composed with +G+ the result is the empty set. This can happen for two reasons: +G+ is the empty binary relation, in which case it's obvious that there are no ordered pairs with first component +X+; or +G+ is non-empty but no pair in it composes with +[X,X]+, which is equivalent to say that +X+ does not belong to the domain of +G+. We could have said the same by stating that +dom(G,D) X nin D+ but this is usually less efficient because it requires to compute the domain of +G+. Therefore, implies that belongs to the domain of . If this is not the case then fails. Then, if we have to translate x ∈ f f(x) = y it's enough to state . However, if in B we have that f ∈ T U is part of the invariant, then x ∈(f) f(x) = y will be defined due to the invariant. That is, f(x) will be a unique value. This means that encoding it as is too much because asserts that is locally a function on . Hence, in this case, a more precise encoding is the one based on set unification: F = [X,Y] / G [X,Y] nin G Note that this encoding implies that belongs to the domain of (otherwise it will fail as ). More importantly, this encoding is saying that all we have to do to find the image of under is to walk through looking for the ordered pair whose first component is . On the other hand, the encoding based on is saying that once we have found in we have to keep walking through it to check that there's no other pair whose first component is . This last check required by is redundant if we know that is a function. If we have proved that f ∈ T U is an invariant then we know for sure that f is a function. Observe that in the translation of findBirthday we have used +applyTo+ which, after the above analysis, is not the best choice because pfun(birthday) is intended to be an invariant of the specification. We should replace +applyTo+ by the encoding based on set unification. 
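Concretely, a version of findBirthday based on set unification could read as follows (a sketch; set braces are written explicitly and & denotes conjunction):

findBirthday(Known,Birthday,Name,Date,Known,Birthday) :-
  Name in Known &
  Birthday = {[Name,Date] / G} &
  [Name,Date] nin G.

Given that pfun(birthday) is intended to be an invariant, the redundant check performed by applyTo is simply dropped.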
We didn't do it in that way because we think that it requires a rather complex explanation when we were just introducing . §.§ Translating logical operators Logical conjunction (+ +), disjunction (+or+), implication (+implies+) and negation (+neg+) are among the available logical connectives in (see Section 3.3 of the manual[<https://www.clpset.unipr.it/SETLOG/manual_4_9_8.pdf>] for the complete list). Logical negation (+neg+) must be used with care because, as the manual explains in Section 3.3, it doesn't work well in all cases. In general, +neg+ works as expected when the formula to be negated doesn't contain existential variables inside it. For instance, the following formula states that +Min+ is the minimum element in +S+: Min in S subset(S,int(Min,Max)) +neg+ won't work correctly for this formula because +Max+ is an existential variable inside the formula. In order to see that +Max+ is an existential variable inside the formula, we can write it as the body of a clause computing the minimum element of a set: min(S,Min) :- Min in S subset(S,int(Min,Max)). Now it's clear that +Max+ is an existential variable inside the formula because it's not an argument of the clause head. Hence, +neg+ won't work well for +min+. More precisely, if we define the clause +n_min+ as follows: n_min(S,Min) :- neg(Min in S subset(S,int(Min,Max))). it doesn't correspond to ¬ because +neg+ won't compute the (correct) negation of its argument as it contains +Max+. +neg+ will compute some formula but not the negation we're expecting. On the other hand, provides the negation for all its atomic constraints (Tables <ref>-<ref> and all the arithmetic constraints). works correctly for all of them. For example, if we want to translate ¬ x ∈ A we can write in +neg(X in A)+ or just +X nin A+. In the same way, ¬ A = b can be translated as +neg(A neq b)+ or as +A neq b+. For instance, the B predicate A ⊈B is translated as +nsubset(A,B)+; and ¬ a ≤ y as +neg(A =< Y)+. Tables <ref>-<ref> include the negation for every set theoretic operator. As an example of using +neg+, the following B statement: x ∈(f) 0 < xf, msg := {x} f, okmsg := error can be translated as follows: dom(F,D) (X in D 0 < X dares(X,F,F_) Msg = ok or neg(X in D 0 < X) Sa_ = Sa Msg = error ) Note that +dom(F,D)+ is placed outside the disjunction because the constraint is used to name the domain of . Observe that isn't present in the B statement; it has to be introduced in to name the expression (f). +dom(F,D)+ states that +D+ is the (name of the) domain of +Sa+: it makes no sense to negate this because we're defining +D+ as such. This situation arises frequently when a B specification is translated into due to the fact that B uses expressions for what in is written with predicates. §.§.§ Quantifiers In general existential quantifiers need not to be translated because semantics is based on existentially quantifying all variables of any given program. For example, if in B we have: ∃ x . (x ∈ x ∈ A) it can be translated as: 0 =< X X in A because the semantics of the program is, essentially, an existential quantifier over both variables. Things are different when dealing with universal quantifiers. In we only have so-called restricted universal quantifiers (RUQ). A RUQ is a formula of the following form: ∀ x ∈ A : P(x) whose semantics is: ∀ x . (x ∈ A P(x)) which, as can be seen, coincides with the universally quantified predicates available en B. 
In the simplest RUQ are encoded as follows: foreach(X in A,P(X)) There are more complex and expressive RUQ available in [Have a look at chapter 6 of user's manual and then ask for help to the instructor.]. Recall that a proper use of the B language tends to avoid most of the quantified formulas. §.§ Translating As is not a type and, at the same time, is an interpreted set, we must be careful when translating into . A type declaration such as x ∈ is equivalent to x ∈ 0 ≤ x. As we have said, x ∈ is encoded in terms of the type system defined in , whereas 0 ≤ x is simply encoded as . On the other hand, A ⊆ or A ∈ are translated with a RUQ: foreach(X in A, 0 =< X) In particular a type declaration such as f ∈ T is encoded in as follows: pfun(F) foreach([X,Y] in F, 0 =< Y) plus a type declaration for f such as , assuming T is a basic type. § RUNNING FORGRAMS forgrams usually won't meet the typical performance requirements demanded by users. Forgrams are slower than programs but they have computational properties that programs don't. Hence, we see a forgram of a B specification more as a prototype than as a final program. On the other hand, given the similarities between a B specification and the corresponding forgram, it's reasonable to think that the prototype is a correct implementation of the specification[In fact, the translation process can be automated in many cases.]. Then, we can use these prototypes to make an early validation of the requirements. Validating user requirements by means of prototypes entails executing the prototypes together with the users so they can agree or disagree with the behavior of the prototypes. This early validation will detect many errors, ambiguities and incompleteness present in the requirements and possible misunderstandings or misinterpretations caused by the software engineers. Without this validation many of these issues would be detected in later stages of the project thus increasing the project costs. Think that if one of these issues is detected once the product has been put in the market, it implies to correct the error in the requirements document, the specification, the design, the implementation, the user documentation, etc. Since we see forgrams as prototypes we talk about simulations or animations rather than executions when speaking about running them. However, technically, what we do is no more than executing a piece of code. The word simulation is usually used in the context of models (e.g. modeling and simulation). In a sense, our forgrams are executable models of the user requirements. On the other hand, the word animation is usually used in the context of formal specifications. In this sense, the implementation of a B specification can be seen as an executable specification. In fact, as we will see, forgrams have features and properties usually enjoyed by specifications and models, which are rare or nonexistent in programs written in imperative (and even functional) programming languages. Be it execution, simulation or animation the basic idea is to provide inputs to the forgram, model or specification and observe the produced outputs or effects. Besides, we will show that offers more possibilities beyond this basic idea. §.§ Basic simulations Let's see an example of a simulation on a forgram. Assume the forgram of the birthday book is saved in a file named +bb.pl+. 
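A typical session looks as follows (a sketch assuming a standard installation; the exact commands and file names may vary slightly with the {log} version and the operating system):

$ swipl
?- consult('setlog.pl').
?- setlog.
{log}=> consult('bb.pl').
{log}=> birthdayBookInit(K,B) & addBirthday(K,B,maxi,160367,K_,B_,M).

K = {}, B = {},
K_ = {maxi}, B_ = {[maxi,160367]},
M = ok
Another solution? (y/n) y
no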
We start by executing the Prolog interpreter from a command line terminal and from the folder where was installed[The name of the Prolog executable may vary depending on the interpreter and the operating system. The example corresponds to a Ubuntu Linux machine and SWI-Prolog.]. The meaning of the above code is the following: * The Prolog interpreter is executed. * The interpreter is loaded. * The interpreter is accessed. * The birthday book prototype is loaded. * The simulation is run: birthdayBookInit(K,B) addBirthday(K,B,maxi,160367,K_,B_,M). consisting of: * +birthdayBookInit+ is called passing to it any two variables as arguments; * +addBirthday+ is called passing to it in the first and second arguments the same variables used to call +birthdayBookInit+; as the third and fourth arguments two constants; and three new variables in the last three arguments. Observe that the simulation ends in a dot. * shows the result of the simulation. * asks if we want to see other solutions and we answer yes. * says there are no more solutions. Let's see the simulation in detail: birthdayBookInit(K,B) addBirthday(K,B,maxi,160367,K_,B_,M). When we call +birthdayBookInit(K,B)+, +K+ and +B+ unify with +Known+ and +Birthday+ which are the formal arguments used in the definition of +birthdayBookInit+ (see the complete code in Appendix <ref>). This implies that +K+ is equal to +Known+ and +B+ is equal to +Birthday+ which in turn implies that +K+ and +B+ are equal to ++. This is exactly the first line of the answer returned by . Hence, when +addBirthday(K,B,maxi,160367,K_,B_,M)+ is called, it's like we were calling: addBirthday(,,maxi,160367,K_,B_,M) Calling +addBirthday+ makes to execute each branch of the disjunction present in the body of the clause. That is, both branches are tried in the order they're written. Then, unification goes as follows: Known = Birthday = Name_i = maxi Date_i = 160367 K_ = Known_ B_ = Birthday_ M = Msg Hence the code in the first branch is instantiated as follows: maxi nin un(,maxi,K_) un(,[maxi,160367],B_) M = ok which reduces to: K_ = maxi B_ = [maxi,160367] M = ok which corresponds to the second line of the answer returned by . When '+y+' is pressed executes the second branch. Again, unification takes place and a new series of equations are produced: Known = Birthday = Name = maxi K_ = Known B_ = Birthday M = Msg which implies that +K+ unifies with ++. Then, the code in the second branch is instantiated as follows: maxi in ... As this predicate is obviously false, the invocation of this branch fails and hence produces no solution. As a consequence answers +no+ after we press '+y+'. The following simulation is longer and includes the previous one. birthdayBookInit(K,B) addBirthday(K,B,maxi,160367,K1,B1,M1) addBirthday(K1,B1,'Yo',201166,K2,B2,M2) findBirthday(K2,B2,'Yo',C,K3,B3) addBirthday(K3,B3,'Otro',201166,K4,B4,M4) remind(K4,B4,160367,Card,K5,B5) remind(K5,B5,201166,Card1,K_,B_). Here we can see that we're calling all the operations defined in the prototype; that we use different variables to chain the state transitions; and that it's possible to use constants beginning with an uppercase letter as long as we enclose them between single quotation marks. 
The first solution returned by that simulation is the following: K = , B = , K1 = maxi, B1 = [maxi,160367], M1 = ok, K2 = maxi,Yo, B2 = [maxi,160367],[Yo,201166], M2 = ok, C = 201166, K3 = maxi,Yo, B3 = [maxi,160367],[Yo,201166], K4 = maxi,Yo,Otro, B4 = [maxi,160367],[Yo,201166],[Otro,201166], M4 = ok, Card = maxi, K5 = maxi,Yo,Otro, B5 = [maxi,160367],[Yo,201166],[Otro,201166], Card1 = Yo,Otro, K_ = maxi,Yo,Otro, B_ = [maxi,160367],[Yo,201166],[Otro,201166] where we can see that gives us the chance to have a complete trace of the forgram execution. Note also that eliminates the single quotation marks we used to enclose some constants. It's important to remark that the variables used to chain the state transitions (i.e. +K1+, +B1+, …, +K5+, +B5+) must be all different. If done otherwise, the simulation might be incorrect. For instance: birthdayBookInit(K,B) addBirthday(K,B,N,C,K,B,M). will fail as the values of +K+ and +B+ before invoking +addBirthday+ can't unify with the values returned by it. In other words, the +K+ and +B+ as the first two arguments of +addBirthday+ can't have the same value than the +K+ and +B+ used towards the end of the call. We could use the same variable for the before and after state of query state operations (for instance when we invoke +findBirthday+ and +remid+). So far the two simulations we have performed start in the initial state. It's quite simple to start a simulation from any state: K = maxi,caro,cami,alvaro B = [maxi,160367],[caro,201166],[cami,290697],[alvaro,110400] addBirthday(K,B,'Yo',160367,K1,B1,M1) remind(K1,B1,160367,Card,K1,B1). where we can see that we use the same variable to indicate the before and after state of +remid+ (because we know this clause produces no state change). In this case the answer is: K = maxi,caro,cami,alvaro, B = [maxi,160367],[caro,201166],[cami,290697],[alvaro,110400], K1 = maxi,caro,cami,alvaro,Yo, B1 = [maxi,160367],[caro,201166],[cami,290697],[alvaro,110400],[Yo,160367], M1 = ok, Card = maxi,Yo A potential problem of manually defining the initial state for a simulation is that this state, due to human error, might not verify the state invariant. Nevertheless, it's very easy to avoid this problem as we will see in Section <ref>. §.§.§ Hiding the complete trace of the execution If we don't need the complete execution trace of a simulation but only the its final state and outputs we can define a clause for the simulation whose arguments are the variables we are interested in: sim(K_,B_,C,Card,Card1) :- birthdayBookInit(K,B) addBirthday(K,B,maxi,160367,K1,B1,M1) addBirthday(K1,B1,'Yo',201166,K2,B2,_) findBirthday(K2,B2,'Yo',C,K3,B3) addBirthday(K3,B3,'Otro',201166,K4,B4,_) remind(K4,B4,160367,Card,K5,B5) remind(K5,B5,201166,Card1,K_,B_). And then we call the clause: log=> sim(K_,B_,C,Card,Card1). K_ = maxi,Yo,Otro, B_ = [maxi,160367],[Yo,201166],[Otro,201166], C = 201166, Card = maxi, Card1 = Yo,Otro As can be seen, we get a more compact output showing only the variables we are interested in. §.§ Type checking and simulations So far we haven't really used 's typechecker. Actually when we consulted +bb.pl+ the types weren't checked. In other words ignored the +dec_p_type+ assertions included in +bb.pl+. This means that possible type errors weren't detected by . In this sense executed all the simulations in untyped mode. In this section we'll see how to call the typechecker and how this affects simulations. Recall reading chapter 9 of user's manual for further details on 's types. 
Type checking can be activated by means of the +type_check+ command which should be issued before the file is consulted. In this way, when executes +consult+ it invokes the typechecker and if there are type errors we'll see an error message. Type checking can be deactivated at any time by means of command +notype_check+. When the typechecker is active all simulations must be correctly typed because otherwise will just print a type error. log=> birthdayBookInit(K,B) addBirthday(K,B,maxi,160367,K_,B_,M). ***ERROR***: type error: variable K has no type declaration Then, we have to declare the type of all variables: log=> birthdayBookInit(K,B) addBirthday(K,B,name:maxi,date:160367,K_,B_,M) dec([K,K_],kn) dec([B,B_],bb) dec(M,msg). K = , B = , K_ = name:maxi, B_ = [name:maxi,date:160367], M = ok If the user wants to typecheck the program, for instance +bb.pl+, but (s)he doesn't want to deal with types when running simulations, the typechecker can be deactivated right after consulting the program. In this way will check the types of the program but it then will accept untyped simulations. Clearly, in general, working with untyped simulations is easier but more dangerous because we could call the program with ill-typed inputs thus causing false failures. In the rest of this section we'll work with untyped simulations. This means that the user must ensure that typechecking is deactivated (command +notype_check+). §.§ Simulations using integer numbers As we have said, is, essentially, a set solver. However, it's also capable of solving formulas containing predicates over the integer numbers. In that regard, uses two external solvers known as CLP(FD)[<https://www.swi-prolog.org/pldoc/man?section=clpfd-predicate-index>] and CLP(Q)[<https://www.swi-prolog.org/pldoc/man?section=clpqr>]. Each of them has its advantages and disadvantages. By default uses CLP(Q). Users can change to CLP(FD) by means of command +int_solv+-+er(clpfd)+ and can come back to CLP(Q) by means of +int_solver(clpq)+. Generally speaking, it's more convenient to run simulations when CLP(FD) is active because it tends to generate more concrete solutions. In particular CLP(FD) is capable of performing labeling over the integer numbers which allows users to go through the solutions interactively. Labeling works if at least some of the integer variables are bound to a finite domain. Variable +N+ is bound to the finite domain +int(a,b)+ ( and +b+ integer numbers) if +N in int(a,b)+ is in the formula. See chapter 7 of user's manual for more details. For example, if CLP(Q) is active, the answer to the following goal: is exactly the same formula. That is, is telling us that the formula is satisfiable but we don't have one of its solutions. If we activate CLP(FD): prints a warning message and the same formula: This means that the formula might be satisfiable but CLP(FD) isn't sure. If we want a more reliable answer we have to bound +Turn+ or +N+ to a finite domain: in which case the first solution is: and we can get more solutions interactively. On the contrary, if we activate CLP(Q) the finite domain doesn't quite help to get a concrete solution: On the other hand, CLP(Q) is complete for linear integer arithmetic while CLP(FD) isn't. This means that if we want to use to automatically prove a property of the program for all the integer numbers, we must use CLP(Q)[As in general non-linear arithmetic is undecidable it's quite difficult to build a tool capable of automatically proving program properties involving non-linear arithmetic.]. 
Given that simulations don't prove properties it's reasonable to use CLP(FD). §.§ Symbolic simulations The symbolic execution of a program means to execute it providing to it variables as inputs instead of constants. This means that the execution engine should be able to symbolically operate with variables in order to compute program states as the execution moves forward. As a symbolic execution operates with variables, it can show more general properties of the program than when this is run with constants as input. is able to symbolically execute forgrams, within certain limits. These limits are given by set theory and non-recursive clauses. The following are the conditions under which can perform symbolic executions[This is an informal description and not entirely accurate of the conditions for being able to perform symbolic executions. These conditions are more or less complex and quite technical. The forgrams that can't be symbolically simulated and don't verify the following conditions will not appear in this course.]: * Recursive clauses are not allowed. * Only the operators of Tables <ref> and <ref> are allowed. If the forgrams uses the cardinality operator (+size+), the program can't use the operators of Table <ref>. The +size+ operator is complete only when combined with the operators of Table <ref>. * All the arithmetic formulas are linear[More precisely, all the integer expressions must be sums or subtractions of terms of the form +x*y+ with +x+ or +y+ constants. All arithmetic relational operators are allowed, even +neq+.]. This means the code can't use operators of Table <ref> if symbolic executions are to be done[The problem with the operators of Table <ref> is that they depend on certain aspects of set theory that aren't fully implemented in , yet.]. Actually, many symbolic executions are still possible even if the above conditions aren't met. The forgram of the birthday book falls within the limits of what can symbolically execute. For example, starting from the initial state we can call +addBirthday+ using just variables: birthdayBookInit(K,B) addBirthday(K,B,N,C,K_,B_,M). in which case answers: K = , B = , K_ = N, B_ = [N,C], M = ok which is a representation of the expected results. Now we can chain a second invocation to +addBirthday+ using other input variables: birthdayBookInit(K,B) addBirthday(K,B,N1,C1,K1,B1,M1) addBirthday(K1,B1,N2,C2,K_,B_,M2). in which case the first solution returned by is: K = , B = , K1 = N1, B1 = [N1,C1], M1 = ok, K_ = N1,N2, B_ = [N1,C1],[N2,C2], M2 = ok Constraint: N1 neq N2 As can be seen, the answer includes the section which has never appeared before. Indeed, the most general solution that can be returned by consists of two parts: a (possibly empty) list of equalities between variables and terms (or expressions); and a (possibly empty) list of constraints. Each constraint is a predicate; the returned constraints appear after the word +Constraint+. The conjunction of all these constraints is always satisfiable (in general the solution is obtained by substituting the variables of type set by the empty set). In this example, clearly, the second invocation to +addBirthday+ can add the pair +[N2,C2]+ to the birthday book if and only if +N2 nin N1+, which holds if and only if +N2+ is different from +N1+. 
returns a second solution to this symbolic execution: K = , B = , K1 = N1, B1 = [N1,C1], M1 = ok, N2 = N1, K_ = N1, B_ = [N1,C1], M2 = nameExists produced after considering that +N1+ and +N2+ are equal in which case the second invocation to +addBirthday+ goes through the branch and so +K_+ and +B_+ are equal to +K1+ and +B1+, which is the expected result as well. Clearly, symbolic executions allows us to draw more general conclusions about the behavior of the prototype. The next example illustrates this: birthdayBookInit(K,B) addBirthday(K,B,N1,C1,K1,B1,M1) addBirthday(K1,B1,N2,C2,K2,B2,M2) findBirthday(K2,B2,W,X,K2,B2). will consider several particular cases depending on whether +N2+, +N1+ and +W+ are equal or not. For example, the following are the first three solutions returned by : K = , B = , K1 = N1, B1 = [N1,C1], M1 = ok, K2 = N1,N2, B2 = [N1,C1],[N2,C2], M2 = ok, W = N1, X = C1 Constraint: N1 neq N2 Another solution? (y/n) K = , B = , K1 = N1, B1 = [N1,C1], M1 = ok, K2 = N1,N2, B2 = [N1,C1],[N2,C2], M2 = ok, W = N1, X = C1 Constraint: C1 neq C2, N1 neq N2 Another solution? (y/n) K = , B = , K1 = N1, B1 = [N1,C1], M1 = ok, K2 = N1,N2, B2 = [N1,C1],[N2,C2], M2 = ok, W = N2, X = C2 Constraint: N1 neq N2 In the first case +W = N1+ is considered and so +X+ must be equal to +C1+; the second case is similar to the first one; and in the third +W = N2+ and so +X+ is equal to . returns more solutions some of which are repeated. Obviously symbolic simulations may combine variables with constants. In general the less the variables we use the less the number of solutions. §.§ Inverse simulations Normally, in a simulation the user provides inputs and the forgram returns the outputs. There are situations in which is interesting to get the inputs from the outputs. This means a sort of an inverse simulation. is able to perform inverse executions within the same limits in which it is able to perform symbolic executions. In fact, a careful reading of the previous section reveals that doesn't really distinguish input from output variables, nor between before and after states. As a consequence, for is more or less the same to execute a forgram by providing values for the input variables or for the output variables; in fact, is able to execute a forgram just with variables. Let's see a very simple inverse simulation where we only give the after state: K_ = maxi,caro,cami,alvaro B_ = [maxi,160367],[caro,201166],[cami,290697],[alvaro,110400] addBirthday(K,B,N,C,K_,B_,M). The first solution returned by is the following: K_ = maxi,caro,cami,alvaro, B_ = [maxi,160367],[caro,201166],[cami,290697],[alvaro,110400], K = maxi,caro,cami, B = [maxi,160367],[caro,201166],[cami,290697], N = alvaro, C = 110400, M = ok When the B specification is deterministic, the corresponding forgram will be deterministic as well. Therefore, for any given input there will be only one solution. However, the inverse simulation of a deterministic forgram may generate a number of solutions. This is the case with the above simulation. The first solution computed by considers the case where +N = alvaro+ and +C = 110400+, but this isn't the only possibility. 
Going forwards with the solutions we get, for instance, the following: K_ = maxi,caro,cami,alvaro, B_ = [maxi,160367],[caro,201166],[cami,290697],[alvaro,110400], K = maxi,caro,alvaro, B = [maxi,160367],[caro,201166],[alvaro,110400], N = cami, C = 290697, M = ok which means that +K_+ and +B_+ may have been generated by starting from some +K+ and +B+ where +cami+'s birthday isn't in the book and so we can add it. §.§ Evaluation of predicates At the end of Section <ref> we showed how to start a simulation from a state different from the initial state. We also said that this entails some risks as manually writing the start state is error prone which may lead to an unsound state. In this section we will see how to avoid this problem by using a feature of that is useful for other verification activities, too. Let's consider the following state of the birthday book: Known = maxi,caro,cami,alvaro Birthday = [maxi,160367],[caro,201166],[cami,290697],[alvaro,110400] Starting a simulation from this state may give incorrect results if it doesn't verify the state invariant defined for the specification. Recall that the state invariant for the birthday book is . Hence, we can check whether or not the above state satisfies the invariant by asking to solve the following: Known = maxi,caro,cami,alvaro Birthday = [maxi,160367],[caro,201166],[cami,290697],[alvaro,110400] birthdayBookInv(Known,Birthday). in which case returns the values of +Known+ and +Birthday+, meaning that +birthdayBookInv+ is satisfied. If this weren't the case the answer would have been +no+, as in the following example (note that +maxi+ is missing from +known+): Known = caro,cami,alvaro Birthday = [maxi,160367],[caro,201166],[cami,290697],[alvaro,110400] birthdayBookInv(Known,Birthday). § PROVING THE CORRECTNESS OF FORGRAMS Evaluating properties with helps to run correct simulations by checking that the starting state is correctly defined. It also helps to test whether or not certain properties are true of the specification or not. However, it would be better if we could prove that these properties are true of the specification. In this section we will see how allows us to prove that the operations of a specification preserve the state invariant. So far we have used as a programming language. However, is also a satisfiability solver[See for instance Wikipedia: https://en.wikipedia.org/wiki/Satisfiability_modulo_theoriesSatisfiability modulo theories.]. This means that is a program that can decide if formulas of some theory are satisfiable or not. In this case the theory is the theory of finite sets and binary relations given by the operators listed in Tables <ref> and <ref>, and combined with linear integer arithmetic[In what follows we will only mention the theory of finite sets but the same is valid for this theory combined with linear integer algebra.]. If F is a formula depending on a variable, we say that F is satisfiable if and only if: ∃ y: F(y) In the case of , y is quantified over all finite sets. Therefore, if answers that F is satisfiable it means that there exists a finite set satisfying it. Symmetrically, if says that F is unsatisfiable it means that there is no finite set satisfying it. Formally, F is an unsatisfiable formula if: ∀ y: ¬ F(y) where y ranges over all finite sets. If we call G(x) ¬ F(x) then (<ref>) becomes: ∀ y: G(y) which means that G is true of every finite set. Putting it in another way, G is valid with respect to the theory of finite sets; or, equivalently, G is a theorem of the theory of finite sets. 
[ toprule=2mm, before skip=10pt plus 2pt, after skip=10pt plus 2pt] In summary, if decides that F is unsatisfiable, then we know that ¬ F is a theorem. In other words, (<ref>) and (<ref>) are two sides of the same coin: (<ref>) says that F is unsatisfiable and (<ref>) says that G (i.e. ¬ F) is a theorem. If is called on some formula there are four possible behaviors: * returns . This means the formula is unsatisfiable. * returns one or more solutions. This means the formula is satisfiable. For example, the simulations we run in Section <ref> are all satisfiable formulas. * returns a warning messages. This means the answer is unreliable. We can't be sure whether the formula is satisfiable or not. * doesn't seem to return. You wait in front of the screen after pressing the return key but no answer is produced; you wait longer but still nothing happens. This means that is unable to determine whether the formula is satisfiable or not. This in turn may occur because the formula is too complex and makes to take a very long time of just because enters into an infinite loop. Situations like this are rare and usually occur in complex problems. If you want to see this behavior try the following: comp(R,R,R) [X,Y] in R [Y,Z] in R [X,Z] nin R. What is the meaning of this formula? One important aspect is that , as other satisfiability solvers, automatically decides the satisfiability of a given formula. That is, no action from the user is required. Hence, when finds that F is unsatisfiable it has automatically proved the theorem ¬ F. This is called automated theorem proving which is part of automated software verification. There are, however, automated theorem provers that aren't satisfiability solvers[See for instance Wikipedia: https://en.wikipedia.org/wiki/Automated_theorem_provingAutomated theorem proving.]. Satisfiability solvers and automated theorem provers can be used to prove mathematical theorems but we're interested in their application to software verification. More specifically, we're going to apply 's capabilities for automated theorem proving to ensure machine consistency. Recall that in Section 5 of “Introduction to the B-Method” we show that the B-Method requires to discharge some proof obligations once we have written a B machine. Then, we're going to use to discharge those proof obligations on the corresponding forgram. That is, once we have translated the B specification into , we're going to use to generate the same proof obligations required by the B-Method and then we're going to use again to automatically discharge them. This process implies that the forgram so verified becomes a certified prototype of the system. In other words, the forgram is an implementation verifying all the the verification conditions set forth by the B-Method. §.§ Invariance lemmas in The most complex verification conditions required by the B-Methods are the invariance lemmas. Recall that an invariance lemma states that each operation of a B specification preserves the state invariant. Formally, if an operation depends on an input parameter x, has precondition Pre and changes state variable v with Post, the invariance lemma is as follows: ∀ x. (Inv Pre Inv[v ↦ Post]) In turn, when this operation is translated as a clause we have v_ as the next-state variable. The abstract assignment v := Post becomes an equality of the form v_ = Post. Therefore, the invariance lemma can be written as follows: ∀ x. (Inv Pre Inv[v ↦ v_]) If we define Inv_ as a shorthand for Inv[v ↦ v_], then we have: ∀ x. 
(Inv Pre Inv_) Recall that in order to prove the above formula in we must negate it: ¬(∀ x. (Inv Pre Inv_)) At the same time during the translation of the B-Machine into , we have split the invariance in several pieces. Recall that for the birthday book specification we have the following: birthdayBookInv(Known,Birthday) :- dom(Birthday,Known) pfun(Birthday). Then, for instance, this is the invariance lemma for : addBirthday_pi_birthdayBookInv :- neg( birthdayBookInv(Known,Birthday) addBirthday(Known,Birthday,Name,Date,Known_,Birthday_,Msg) implies birthdayBookInv(Known_,Birthday_) ). The idea is that the user executes and answers . As we have said above, this means that couldn't find values for the variables as to satisfy the formula (i.e. the formula is unsatisfiable). In turn, as we have explained, this means that the formula inside is a theorem and so has discharged this proof obligation. There's, though, a problem that we need to address. Internally, transforms the body of in: birthdayBookInv(Known,Birthday) addBirthday(Known,Birthday,Name,Date,Known_,Birthday_,Msg) neg( birthdayBookInv(Known_,Birthday_) ). because ¬(I T I_) ≡¬(¬(I T) I_) ≡ I T ¬ I_. The problem is that can't compute the negation of user-defined clauses. Then, will issue a warning such as: ***WARNING***: Unsafe use of negation - using naf In order to avoid this problem we have to help to compute the negation of the clauses declared as invariants. More precisely, we have to add the following to the birthday book forgram: dec_p_type(n_birthdayBookInv(kn,bb)). n_birthdayBookInv(Known,Birthday) :- neg(dom(Birthday,Known) pfun(Birthday)). That is, for each clause declared as an invariant, a clause named with the same arity and whose body is the negation of 's body, is added to the forgram. In this way when has to compute it looks up among the clauses one whose head is and with 's arity. If such a clause is present, uses its body to compute the negation; otherwise it issues a warning message such as the one above. These clauses are called negative clauses. Note that negative clauses aren't declared as invariants although their types are those of the corresponding positive clauses. See Appendix <ref> for the complete forgram implementing the birthday book. Recall that +neg+ doesn't always work correctly, as we explained in Section <ref>. However, it works well in many cases. You won't see problems with in what concerns the exercises of this course. You can have a look at the problem of computing ¬ p in logic programming in Wikipedia: https://en.wikipedia.org/wiki/Negation_as_failureNegation as failure. In any case, if you are in front of a formula for which doesn't work well, you can manually write its negation and put it in a negative clause. To that end you have to distribute the negation all the way down to the atoms at which point you use the negations of the operators of Tables <ref> and <ref>. §.§ The verification condition generator (VCG) can automatically generate verification conditions similar to those required by the B-Method, plus some more not required by the B-Method. That is, generates verification conditions as those discussed in Section 5 of “Introduction to the B-Method”. We'll exemplify the process to generate verification conditions with the birthday book forgram. VGC stands for verification condition generator. The command takes as argument the name of a file containing a forgram implementing a state machine (in particular one resulting from the translation of a B machine). 
That is, the forgram must have declarations such as , , etc. as described in Section <ref> and in chapter 11 of the user's manual. VCG checks some well-formedness conditions on the forgram as described in detail in the referred manual. If all these checks are passed then VCG generates a file named, for instance, . Appendix <ref> lists the contents of as produced by VCG. Once VCG has been called on a file, the user has to consult the file generated by VCG and run the command indicated by : As can be seen, says that we should call to run or discharge the verification conditions. This command is always of the form . If we run the command we'll see the following: log=> check_vcs_bb. Checking birthdayBookInit_sat_birthdayBookInv ... OK Checking addBirthday_is_sat ... OK Checking findBirthday_is_sat ... OK Checking remind_is_sat ... OK Checking addBirthday_pi_birthdayBookInv ... OK Checking findBirthday_pi_birthdayBookInv ... OK Checking remind_pi_birthdayBookInv ... OK As you can see, is able to automatically discharge all proof obligations. However, this might not always be the case. Why might be unable to discharge a proof obligation and how to remedy this situation is explained in the next section. VCG generates basically two classes of verification conditions: * Satisfiability Conditions. These are identified by the word . For example, - and . The expected answer for a satisfiability condition is a solution. In other words, if answers for such a verification condition there's an error in the specification. * Invariance Lemmas. These are identified by the word (for “preserves invariant”). For example, . The expected answer for an invariance lemma is . In other words, if returns a solution for such a verification condition there's an error in the specification. §.§ When fails to discharge a proof obligation We'll focus this section on invariance lemmas but similar conclusions can be drawn for satisfiability conditions. may fail to discharge (i.e. prove) an invariance lemma, basically, for two reasons: * The invariant is wrong. In this case, the invariant is either too strong or too weak. If it's too strong, it means that you're asking too much to your system. You want your system to verify some invariant but it can't. For example, the following is too strong for the savings account system: sa ∈ NIC pfun(sa) ∀ x,y . (x ↦ y ∈ sa 0 < y) If it's too weak it means that you're allowing some operations to be called from states they don't expect to be called. For example, the following is too weak for the birthday book: birthday ∈ NAME DATE * The operation is wrong. The most common situation is to have a weaker precondition than needed. For example, the following specification of addBirthday has a precondition making the operation to fail to verify birthday ∈ NAME DATE: msg ←addBirthday (name, date) 1    name ∈ NAME date ∈ DATE 1 known, birthday, msg := known ∪{name}, birthday ∪{name ↦ date}, ok 1 Can you tell why? Can you provide a counterexample? In order to see how behaves when it fails to prove an invariance lemma, let's assume that the invariant for the birthday book is just: . In this case the invariance lemma for is as follows: neg( pfun(B) addBirthday(K,B,N,C,K_,B_,M) implies pfun(B_) ). 
When is asked to solve the above formula the answer is the following: B = [N,_N2]/_N1, K_ = N/K, B_ = [N,C],[N,_N2]/_N1, M = ok Constraint: pfun(_N1), comppf([N,N],_N1,), N nin K, C neq _N2 As the above formula is satisfiable (which means that the formula inside isn't a theorem), returns a solution that, in this case, is read as a counterexample. That is, returns an assignment of values to variables showing that doesn't preserve the invariant. By analyzing the counterexample we can discover why fails to preserve the invariant giving us the chance to fix the error. The first thing we can do to analyze the counterexample is to replace all the set variables by the empty set[Except those at the left-hand side of the equalities.]. After a little bit of simplification we obtain: B = [N,_N2], K = , K_ = N, B_ = [N,C],[N,_N2], M = ok Constraint: C neq _N2 Observe that considers executing with and . This clearly violates (birthday) = known. Actually, if we add this condition to the invariance lemma, returns . neg( pfun(B) dom(B,K) addBirthday(K,B,N,C,K_,B_,M) implies pfun(B_) ). Clearly, now can't execute from a state not verifying (birthday) = known. Recall that in Section <ref> we said that the B invariant can be encoded in as several clauses (one for each conjunct in the section). In this case, may fail to prove some invariance lemmas because it needs some of the other invariants as hypothesis. Think that if we separate the invariant of the birthday book in two clauses as we suggest at the end of Section <ref>, won't be able to prove that preserves for the same reason analyzed above. The missing hypothesis can be manually conjoined to the invariance lemmas generated by VCG. [ toprule=2mm, before skip=10pt plus 2pt, after skip=10pt plus 2pt, title=Forgrams] What is a forgram? Forgram is a portmanteau word resulting from the combination of formula and program. A forgram is a piece of code that enjoys the formula-program duality. In other words, a forgram is a piece of code that can be used as a formula and as a program. In Section <ref> we showed that code can be executed as a program; and in Section <ref> we showed that code can be used as a formula. In engineers write forgrams, instead of plain programs. [ toprule=2mm, before skip=10pt plus 2pt, after skip=10pt plus 2pt, title=Mathematics in Software Development] If now most of you are convinced that mathematics is an essential tool for software development, then this course has achieved its objectives. plain § EXERCISES [ toprule=2mm, before skip=10pt plus 2pt, after skip=10pt plus 2pt] Unless stated differently, the proofs indicated in these exercises must be done with . * Implement in the following operations of the B specification of the savings account system. * Open an account * Deposit money in an account * Withdraw money from an account * Query the current balance of an account * Close an account * Write it in the operation specified in exercise 40 of IBM[Introduction to the B-Mehtod]. * Concerning exercise <ref>, can you write a clause that reuses the clauses defined in exercises <ref> and <ref>? * Run basic simulations that simulate all the disjuncts of all operations implemented in exercise <ref>. * Can you run symbolic simulations on the prototype developed in exercise <ref>? Justify. If you can, do it and analyze the results. For the operations you think you can't, what are your options? 
* In the operation of the exercise <ref> we have the following abstract assignment: sa := sa ∪{(n?,0)} Say we aren't sure this is the right statement. Then, we can simulate the operation with different values to try to decide if the predicate is the right one or not. To this end we will consider the following partition for expressions of the form S ∪ T. [ S = ∅, T = ∅ S ≠∅, T ≠∅, S ⊂ T; S = ∅, T ≠∅ S ≠∅, T ≠∅, T ⊂ S; S ≠∅, T = ∅ S ≠∅, T ≠∅, T = S; S ≠∅, T ≠∅, S ∩ T = ∅ S ≠∅, T ≠∅, S ∩ T ≠∅, S ⊈T, T ⊈S, S ≠ T ] How would you do to simulate the implementation of exercise <ref> taking this partition as a reference? Once you have found the method, run the simulations. * Can you think in a systematic way of generating simulations to do what we asked to do in exercises <ref> and <ref>? * Specify in B an operation that opens several accounts at once. Then, translate it into . Finally, apply what you've learned in exercise <ref>. * Write in the following formulas. * ¬ x ∈ (A ∪ B) * ¬ (x ∈ A x ∈ B) * ¬ A = B ∩ C * ¬ (A ∪ B = B ∪ A) * ¬ (A ∩ B = ∅ A = A ∖ B) * ¬ (A ⊆ B A R ⊆ B R) * Execute in the formulas of exercise <ref>. Explore all the solutions returned by the tool. Explain why returns that. * Do exercise 12 of IBM in . * Prove the results of the following exercises of IBM: 8, 9, 10, 15, 19-24. * Prove that the two clauses defined in exercises <ref> and <ref> are equivalent. § THE FORGRAM OF THE BIRTHDAY BOOK variables([Known,Birthday]). def_type(bb,rel(name,date)). def_type(kn,set(name)). def_type(msg,enum([ok,nameExists])). invariant(birthdayBookInv). dec_p_type(birthdayBookInv(kn,bb)). birthdayBookInv(Known,Birthday) :- dom(Birthday,Known) pfun(Birthday). dec_p_type(n_birthdayBookInv(kn,bb)). n_birthdayBookInv(Known,Birthday) :- neg(dom(Birthday,Known) pfun(Birthday)). initial(birthdayBookInit). dec_p_type(birthdayBookInit(kn,bb)). birthdayBookInit(Known,Birthday) :- Known = Birthday = . operation(addBirthday). dec_p_type(addBirthday(kn,bb,name,date,kn,bb,msg)). addBirthday(Known,Birthday,Name,Date,Known_,Birthday_,Msg) :- (Name nin Known un(Known,Name,Known_) un(Birthday,[Name,Date],Birthday_) Msg = ok or Name in Known Known_ = Known Birthday_ = Birthday Msg = nameExists ). operation(findBirthday). dec_p_type(findBirthday(kn,bb,name,date,kn,bb)). findBirthday(Known,Birthday,Name,Date,Known,Birthday) :- Name in Known applyTo(Birthday,Name,Date). operation(remind). dec_p_type(remind(kn,bb,date,kn,kn,bb)). remind(Known,Birthday,Today,Cards,Known,Birthday) :- rres(Birthday,Today,M) dom(M,Cards) dec(M,bb). § FILE GENERATED BY VCG FOR THE BIRTHDAY BOOK
http://arxiv.org/abs/2407.02856v1
20240703071425
Early-Stage Anomaly Detection: A Study of Model Performance on Complete vs. Partial Flows
[ "Adrian Pekar", "Richard Jozsa" ]
cs.LG
[ "cs.LG", "cs.CR" ]
Early-Stage Anomaly Detection: A Study of Model Performance on Complete vs. Partial Flows Adrian Pekar and Richard Jozsa A. Pekar and R. Jozsa are with the Department of Networked Systems and Services, Faculty of Electrical Engineering and Informatics, Budapest University of Technology and Economics, Műegyetem rkp. 3., H-1111 Budapest, Hungary. A. Pekar is also with the HUN-REN-BME Information Systems Research Group, 1117, Magyar Tudósok krt. 2, Budapest, Hungary. Corresponding author: A. Pekar (apekar@hit.bme.hu) July 8, 2024 ======================================================================================================================================================================================================================================================================================================================================================================================================================================================================= § ABSTRACT This study investigates the efficacy of machine learning models, specifically Random Forest, in anomaly detection systems when trained on complete flow records and tested on partial flow data. We explore the performance disparity that arises when models are applied to incomplete data typical in real-world, real-time network environments. Our findings demonstrate a significant decline in model performance, with precision and recall dropping by up to 30% under certain conditions when models trained on complete flows are tested against partial flows. Conversely, models trained and tested on consistently complete or partial datasets maintain robustness, highlighting the importance of dataset consistency in training. The study reveals that a minimum of 7 packets in the test set is required for maintaining reliable detection rates. These results underscore the need for tailored training strategies that can effectively adapt to the dynamics of partial data, enhancing the practical applicability of anomaly detection systems in operational settings. early-stage anomaly detection, cybersecurity, complete flow information, partial flow information. § INTRODUCTION In the rapidly evolving field of network security, the effectiveness of machine learning (ML)-based anomaly detection systems is increasingly being tested in real-time environments. A fundamental limitation in current research is the gap between the conditions under which ML models are developed and validated, and the dynamic nature of real-world network operations. This paper addresses a critical aspect often overlooked: the disparity between models trained on complete flow records and the practical necessity of real-time anomaly detection using partial or evolving flow information. Existing research predominantly relies on datasets generated to capture comprehensive flow records, encompassing the full lifecycle of network flows <cit.>. This approach, while detailed, does not accurately reflect the transient and incomplete nature of data in real-time network monitoring. In real-world scenarios, anomaly detection systems must often make swift decisions based on incomplete information—a stark contrast to the complete flow records used in standard dataset configurations. This discrepancy raises significant concerns about the transferability of research findings to real-time applications, where the speed and accuracy of anomaly detection are paramount. 
To address this disconnect, our work evaluates how the performance of machine learning (ML) models, specifically Random Forest (RF), varies under different training and testing conditions. We experiment with models trained on complete flow records and tested on incomplete ones, as well as models trained on incomplete data and tested on the same. To simulate these scenarios, we define specific thresholds on packet counts and flow durations, assessing the effectiveness of the models with both fully and partially captured datasets. Random Forest is selected for this investigation due to its common use in cybersecurity and anomaly detection, as demonstrated in <cit.>. By incorporating RF into our analysis, we establish a common baseline for comparability, facilitating a more grounded evaluation of the models' effectiveness in handling incomplete data sets in security contexts. Our findings reveal that models trained on complete flows and tested on partial flows experience a notable decline in performance, with precision and recall dropping by up to 30% under certain conditions. Conversely, except for some isolated cases, models trained and tested consistently on either complete or partial flows maintain robustness, illustrating the critical impact of dataset consistency on model efficacy. This disparity emphasizes the challenge of transferring lab-based model accuracy to real-world, real-time detection systems, where data incompleteness can markedly impact detection capabilities. Our study also demonstrates that, for the dataset examined, at least 7 packets in the test set are required to maintain acceptable detection rates in real-time scenarios, underlining the need for model adjustments or enriched training strategies to handle partial data effectively. The rest of this paper is organized as follows: <Ref> discusses the selection of thresholds for model evaluation, including an overview of the used dataset and the ML algorithm employed. <Ref> details the process of raw data preprocessing, flow measurement, and the production of complete and partial flow datasets. <Ref> presents the performance comparison of the models when applied to complete and partial flows, focusing on precision, recall, and F1-score metrics. <Ref> interprets the results, explores the study’s limitations, discusses prior studies that have addressed similar challenges. The paper concludes in <Ref>. § BACKGROUND §.§ Readiness Criteria for Flow Categorization In this paper, we opt to investigate the performance of machine learning algorithms (MLA) where the flows are considered ready for categorisation based on two separate thresholds: * First we investigate the performance of MLAs under a packet count threshold, focusing on the quantitative aspects of network communication. * Subsequently, we explore the efficacy of these algorithms under a time window threshold, emphasizing the temporal dynamics of network flows. This dual-pronged approach allows us to comprehensively assess the behavior and effectiveness of MLAs in scenarios that closely mirror real-world conditions. By separately analyzing these two threshold mechanisms, we aim to provide a more detailed and nuanced understanding of MLAs in real-time anomaly detection. This methodology aligns with our objective to improve the applicability and accuracy of anomaly detection systems, ensuring they are well-equipped for the challenges presented by contemporary network environments. 
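To make the two readiness criteria concrete, the following minimal sketch truncates a single flow under each criterion: after the first N packets, or after a fixed time window measured from the first packet. This is an illustrative sketch under our own assumptions (a flow represented as a sorted list of packet arrival timestamps in milliseconds); it is not the code used in this study.

def first_n_packets(timestamps, n):
    # Packet count criterion: the flow is considered ready once n packets are observed.
    return timestamps[:n]

def first_t_milliseconds(timestamps, t_ms):
    # Time window criterion: keep only packets arriving within t_ms of the first packet.
    if not timestamps:
        return []
    start = timestamps[0]
    return [ts for ts in timestamps if ts - start <= t_ms]

# Toy example: both criteria yield the same partial flow here, but they diverge
# for flows with bursty or long-tailed inter-arrival times.
flow = [0, 3, 9, 40, 120, 950, 1800]               # packet arrival times in ms
partial_by_count = first_n_packets(flow, 4)         # [0, 3, 9, 40]
partial_by_time = first_t_milliseconds(flow, 100)   # [0, 3, 9, 40]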
§.§ Dataset In this research, we leverage the CICIDS-2017 dataset <cit.>, an extensive collection of labeled network traffic flows. This dataset, gathered from July 3 to July 7, 2017, is detailed across multiple PCAP files, segmented by each day of the week from Monday to Friday. Specifically, Monday's dataset exclusively features benign traffic, while the data from Tuesday to Friday encompasses a varied mix of both benign and malicious traffic patterns, including DoS/DDoS, Port Scan, Brute Force, and Infiltration events, with each day presenting distinct types of attacks. Due to the complexity of the dataset's structure, producing a day-wise evaluation that is both comprehensive and accessible presents considerable challenges. Therefore, our analysis is specifically concentrated on the segment of network traffic from Wednesday, notable for its wide range of Denial-of-Service (DoS) attacks. This focus allows for a detailed examination of typical DoS attacks—such as DoS Hulk, DoS GoldenEye, DoS Slowloris, and DoS Slowhttptest—alongside the Heartbleed attack. It is crucial to note that the CICIDS-2017 dataset's integrity has recently come under scrutiny due to errors identified in previous assessments by  <cit.>,  <cit.>, and elaborated upon in our own study <cit.>. These inaccuracies present risks to the validity of research findings that rely on this dataset. In response, our methodology involves a careful preparation of flow records derived from the original Wednesday PCAP file, ensuring accurate labeling of flows in alignment with the detailed attack generation methodology described by the original dataset contributors. §.§ ML Algorithm In this study, we investigate the effects of incomplete flow information on network anomaly detection by employing the Random Forest algorithm, a technique widely recognized and frequently utilized in machine learning. RF, an ensemble learning method, builds numerous decision trees during the training phase and aggregates their predictions (in classification, this means taking the mode of the classes predicted by individual trees). This process not only boosts overall predictive accuracy but also helps prevent overfitting. Opting for the RF algorithm enables us to draw on a method validated in related research <cit.>, thereby allowing us to uncover the broader implications of utilizing complete versus partial flow information. This approach facilitates the critical evaluation of the relevance and applicability of existing results within the context of real-time network security analytics. § METHODOLOGY §.§ Raw Data Preprocessing Our methodology began with the preprocessing of the Wednesday raw packet trace file from the CICIDS-2017 dataset. Following the strategy outlined by  <cit.>, we initially removed duplicate packets using the editcap command (). Subsequently, we organized out-of-order packets using the reordercap command (). These preprocessing steps were aimed at minimizing potential biases in our results caused by undocumented anomalies likely present in the raw packet traces, as suggested by  <cit.>. §.§ Flow Metering For the critical task of processing the PCAP file, we utilized NFStream <cit.>, a Python-based tool designed for efficient, flexible, and detailed data processing in network analysis. NFStream is particularly effective at converting raw network traffic traces into structured data suitable for advanced analytics. It features robust flow measurement and feature computation capabilities, underpinned by a versatile architecture. 
This adaptability is notably evident in its NFPlugin component <cit.>, which allows for the integration of custom network functionalities. In our research, this adaptability was crucial, enabling the incorporation of a precise flow labeling methodology directly into the analytical framework. Our labeling mechanism is designed to classify network flows into distinct categories based on predefined criteria that reflect the attack patterns and benign behaviors documented by the original dataset contributors <cit.>. It systematically assigns labels by evaluating each flow's metadata, including source and destination IP addresses, port numbers, protocol types, packet payload sizes, and temporal initiation details. Additionally, this mechanism includes a feature that, upon activation, reverses the flow's direction by swapping the source and destination parameters. This ensures accurate flow labeling, particularly in scenarios where packets from the same flow may be segmented into subflows. §.§ Preliminary Measurement The objective of our preliminary measurement was to closely emulate the conditions of the original CICIDS-2017 dataset, aiding in the calibration of flow measurement settings for complete flow records. Accordingly, we adjusted the idle and active timeout settings in NFStream to 60 and 120 seconds, respectively, a change from the default parameters of 120 and 1800 seconds. From the results obtained, several conclusions can be drawn: * We observed a considerable number of flows with zero packet payload (ZPL), a characteristic not associated with any of the attacks identified in the Wednesday traffic trace, as detailed in <Ref>. * A considerable number of flows featured an unusually high count of FIN and RST packets. <Ref> summarizes these counts, specifically highlighting flows with more than two FIN or RST flags. Interestingly, the attacks observed on Wednesday (various DoS attacks and Heartbleed) are not characterized by an increased number of packets with TCP FIN or RST flags. This finding suggests that the packet traces may capture not only the attack signatures but also the repercussions, such as servers potentially saturated and beginning to terminate connections by sending packets with these TCP flags. * The Heartbleed attack appears as one prolonged attack that is segmented into multiple flow records by the active timeout setting. * A pattern of repeated flows was identified across the dataset, distinguished using a five-tuple of source and destination IP and port numbers, alongside the protocol identifier. * Numerous flows exhibited a Packet Inter-Arrival Time (PIAT) marginally below the idle timeout, set at 60 seconds. These instances might suggest that separate flows were amalgamated into a single flow due to the idle timeout configuration. Alternatively, this behavior could imply that some attacks were deliberately kept active, with packets sent just before the expiration of the timeout. Complementary insights supporting the observations detailed above are available as a digital artifact, accessible via <cit.>. §.§ Producing Complete Flows Following the insights from <Ref>, we configured NFStream to generate complete flows with these settings: * The initial timeout is maintained at 60 seconds, falling within the range of typical settings for Linux Kernel netfilter and IPv4 TCP-specific networking configurations. * The active timeout is set to 18,000 seconds (5 hours) to prevent the segmentation of exceptionally long flows due to timeout expiration. 
* A TCP FIN/RST flag-based flow expiration policy has been implemented, which terminates flows at the first detection of either a FIN or RST flag, whether at flow initiation or upon update. This approach aims to exclude the aftermath of attacks that manifest as connection terminations. As a result, residual flow fragments, typically single-packet flows marked by a FIN or RST flag, are excluded. * The flow start time is now incorporated into the flow ID hash, enhancing the ability to match complete flows with their corresponding partial counterparts. Potential duplicate flow hash entries (despite this addition to the six-tuple used for unique flow identification) are discarded. * All flows with zero packet payloads (ZPL) are excluded. * The Heartbleed attack is omitted due to an insufficient number of samples for meaningful classification performance evaluation. The distribution of the refined dataset, containing complete flows generated in accordance with the aforementioned methodology, is detailed in <Ref>. Complementary insights supporting the decisions detailed above are available as a digital artifact, accessible via <cit.>. Compared to the data in <Ref>, <Ref> presents a marginally higher number of flow records for specific types. This increase in <Ref>, even after excluding flows with ZPL, results from our TCP flow expiration policy, which segments flows with multiple, irregular sequences of TCP FIN and RST flags into separate subflows. To mitigate any potential bias arising from such segmentation, post-processing steps are undertaken. In this phase, partial flows are matched with their complete counterparts using their six-tuple identification. This ensures that the initial segments of partial flows are accurately compared and aligned with the corresponding complete flows. §.§ Producing Partial Flows In our partial flow measurement approach, we implemented two key mechanisms: §.§.§ Packet count the first mechanism targets the precision of packet counts within flows: it ensures that only flows with an exact number of packets, as predefined, are retained for analysis. This strict selection criterion allows us to focus sharply on specific data exchange patterns, eliminating any flow that does not meet the exact packet count threshold. The range for this measurement was selected as N_pc = {2, 3, 4, …, 20}. §.§.§ Flow duration the second mechanism deals with the duration of flows. Unlike the strict packet count approach, this mechanism allows for a degree of flexibility by retaining flows whose durations fall within a ±20% range of a specified target duration. This variance accommodates the dynamic nature of network traffic, ensuring that our analysis remains robust without being excessively restrictive. By applying this range, we acknowledge the natural fluctuations in flow durations while still maintaining a focus on our targeted temporal scope. The range for this measurement was selected as N_fd = {5, 10, 50, 100, 150, 300, 500, 1000, 5000, 10000, 15000, 20000} milliseconds. In the process of producing partial flows, direct flow labeling has not been conducted. Instead, our focus was on identifying the complete flow counterparts for each partial flow using six-tuple hashes, subsequently assigning the corresponding labels from the complete flows to their partial counterparts. During this process, we also excluded any partial flows that did not meet the minimum value criteria specified in N_pc and N_fd for each respective dataset. 
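A possible implementation of the two filtering mechanisms and of the label transfer via flow hashes is sketched below using pandas. The column names (packet_count, duration_ms, flow_hash, label) and the data-frame layout are assumptions made for illustration; the actual NFStream feature names and post-processing steps may differ.

import pandas as pd

def packet_count_partials(pf: pd.DataFrame, n: int) -> pd.DataFrame:
    # Strict criterion: keep only partial flows with exactly n packets.
    return pf[pf["packet_count"] == n]

def duration_partials(pf: pd.DataFrame, target_ms: float, tol: float = 0.2) -> pd.DataFrame:
    # Flexible criterion: keep flows whose duration lies within +/- tol of the target.
    lo, hi = (1 - tol) * target_ms, (1 + tol) * target_ms
    return pf[(pf["duration_ms"] >= lo) & (pf["duration_ms"] <= hi)]

def transfer_labels(partial: pd.DataFrame, complete: pd.DataFrame) -> pd.DataFrame:
    # Assign each partial flow the label of its complete counterpart, matched on the
    # six-tuple flow hash; partial flows without a complete counterpart are dropped.
    return partial.merge(complete[["flow_hash", "label"]], on="flow_hash", how="inner")

# Hypothetical usage: build the PC=7 and FD=150 ms datasets.
# pc7 = transfer_labels(packet_count_partials(partial_flows, 7), complete_flows)
# fd150 = transfer_labels(duration_partials(partial_flows, 150), complete_flows)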
These methodologies, by specifically targeting packet count and flow duration, facilitate a nuanced analysis of the impact of partial flows on anomaly detection. This approach provides a comparative insight into how partial flow information influences the detection process relative to analyses based on complete flow information. § RESULTS §.§ Packet Count-based Evaluation §.§.§ CF and PF Dataset Distribution <Ref> presents the dataset distributions for packet count-based evaluation. The table starts with the "Complete Flows" (CF) category, which encompasses the ground truth dataset (see <Ref>) and then breaks down into partial flows labeled as PC=N, where N ranges from 2 to 20, representing datasets characterized by specific packet counts. For each category, <Ref> lists the total number of flows, classified into benign and anomalous types, with anomalies further subdivided into distinct types of DoS attacks. The table also includes metrics on the minimum, mean, and maximum durations (in milliseconds) for these categories. Analysis of <Ref> reveals a significant decrease in the total number of flows as the packet count per flow increases. The dataset starts with 502,350 network flows in the CF category, predominantly benign (326,363), with 175,987 flows categorized as anomalous, spanning four types of DoS attacks. In contrast, at PC=20, the dataset contains only 51,267 flows, with 50,961 being benign and merely 306 classified as anomalous. This trend continues across specific attack types: as packet counts in flows increase, the number of records correspondingly decreases, suggesting that many attacks transmit only a few packets. Moreover, there is an observable increase in the maximum duration of attacks as packet count per flow increases, especially notable in Slowhttptest and Slowloris attacks, which are known for prolonged durations. Thus, the packet count per flow crucially influences the observed characteristics of network attacks, with higher packet counts typically associated with longer durations. Conversely, many attacks consist of flows with few packets, indicating that extracted flow statistics might lack sufficient information for precise anomaly detection, complicating effective detection mechanisms. §.§.§ CF vs. PF Performance Comparison <Ref> illustrates the precision, F1-score, and recall for both binary (<Ref>) and multi-class (<Ref>) classification under three scenarios: * CF used for both training and testing, * PC=N for both training and testing, and * CF used for training with PC=N for testing. Note that PC ranges from 2 to 17, unlike in <Ref>, because PCs 18, 19, and 20 lacked enough samples for meaningful comparison. Additionally, before training, we selected only those flows from the CF dataset that intersect with the flow hash used as identification in the respective partial flow dataset. From <Ref>, it is observed that both CF train/CF test and N train/N test scenarios achieve sufficient performance starting at PC=2, though performance is slightly lower at minimal PC values in the N train/N test scenario. However, the CF train/N test scenario highlights the negative impacts of insufficient information due to lower packet counts, indicating that at least 8 packets are necessary in the N test to achieve usable performance using CF train model, with some minor fluctuations at higher packet counts. 
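The three evaluation scenarios can be reproduced with a standard scikit-learn pipeline, as in the simplified sketch below. The feature columns, the 80/20 split, and the default Random Forest hyperparameters are our own assumptions for illustration and not the authors' exact experimental setup.

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_recall_fscore_support

def evaluate(train_df, test_df, feature_cols, label_col="label"):
    # Train a Random Forest on one dataset and report precision/recall/F1 on another.
    clf = RandomForestClassifier(n_estimators=100, random_state=42)
    clf.fit(train_df[feature_cols], train_df[label_col])
    y_pred = clf.predict(test_df[feature_cols])
    return precision_recall_fscore_support(
        test_df[label_col], y_pred, average="weighted", zero_division=0)[:3]

# Hypothetical usage: cf and pc_n are the complete-flow and PC=N data frames,
# already restricted to flows whose hashes appear in both sets (see above).
# cf_tr, cf_te = train_test_split(cf, test_size=0.2, stratify=cf["label"], random_state=42)
# n_tr, n_te = train_test_split(pc_n, test_size=0.2, stratify=pc_n["label"], random_state=42)
# results = {
#     "CF train / CF test": evaluate(cf_tr, cf_te, feature_cols),
#     "N train / N test":   evaluate(n_tr, n_te, feature_cols),
#     "CF train / N test":  evaluate(cf_tr, n_te, feature_cols),
# }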
The multi-class classification in <Ref> similarly shows that low packet counts negatively affect the N train/N test scenario, requiring at least 6 packets to achieve acceptable performance. Additionally, even the highest packet count seems inadequate for stable performance across precision, F1-score, and recall in the CF train/N test scenario, underscoring the challenges of anomaly detection with limited data when only complete flows have been used for training. §.§ Flow Duration-based Evaluation §.§.§ CF and PF Dataset Distribution Similar to <Ref>, <Ref> outlines the dataset distributions for flow duration-based evaluation. This methodical categorization delineates a transition from "Complete Flows" (CF) to progressively more granular "Partial Flows" (PF), identified as FD=N, where N encompasses a range of specified durations: 5, 10, 50, 100, 150, 300, 500, 1000, 5000, 10000, 15000, and 20000 milliseconds. Each category within <Ref> specifies the total number of flows, distinguishing between benign and anomalous flows, with the latter further divided into distinct DoS attack types. Contrasting with <Ref>, which detailed flow durations, this table provides insights into the minimum, mean, and maximum packet counts for these categories. Upon examining <Ref>, a similar trend emerges as was seen with the packet count-based datasets; however, when flow duration thresholds define partial flow generation, there is a dramatic drop in flow counts for each dataset. This decrease persists even though our methodology accommodates flows with durations within ±20% of the threshold values, as detailed in <Ref>. The flow count statistics across the various duration thresholds underscore the challenge of capturing meaningful data within constricted time frames, particularly for anomaly detection, where comprehensive flow information is crucial. Across all flow duration thresholds, we observe a fluctuating flow count in most categories. For example, for DoS Hulk with FD=10ms, there are 81,321 flows, which decreases to 66,738 flows at FD=150ms, drops further to 2,832 at FD=500ms, and then unexpectedly rises again to 13,856 at FD=1000ms. Contrary to expectations, the flow count does not monotonically increase with longer durations. The irregular trend could result from the inherent nature of network traffic during a DoS Hulk attack, where periods of intense activity are interspersed with lulls. As the duration threshold increases, the dataset may initially miss capturing these shorter bursts of activity, resulting in a lower count. Yet as the threshold extends further, it may again encompass subsequent waves of attack traffic, thus accounting for the rise in flow count. §.§.§ CF vs. PF Performance Comparison <Ref> depicts the performance metrics of precision, F1-score, and recall for binary and multi-class classifications across a range of FD thresholds. For the sake of this analysis, only FD=5, 10, 150, 500, 1000, 15000, and 20000 ms are included, as the other durations did not have enough data to yield statistically significant results. Analogous to the packet count-based evaluation, prior to training, we exclusively selected flows from the CF dataset that correspond with the flow hash identifiers present within the respective partial flow dataset. <Ref> confirms the trend observed in the packet count evaluations. When the training and testing datasets match (CF train/CF test and N train/N test), binary classification performance is strong and stable.
However, the model's ability to accurately classify benign and anomalous flows is compromised in the CF train/N test scenario, reflecting the challenge of applying a model trained on complete flow data to partial flows characterized by various durations. In <Ref>, this issue is further exacerbated in the multi-class classification scenario. Here, even the N train/N test performance demonstrates a notable decline, particularly at FD=1000ms, suggesting that, at certain flow duration thresholds, the model's ability to distinguish between multiple classes of traffic (beyond benign and anomalous) becomes unreliable. This decline in performance may be attributed to the reduced information content of duration-limited flows, which does not adequately capture the complexities of multi-class traffic patterns. The performance dip in the CF train/N test scenario for multi-class classification is even more pronounced, reinforcing the notion that comprehensive flow data is crucial for developing robust models. The precision, F1-score, and recall all experience a steep drop when the model trained on complete data is tested on partial data. This emphasizes the model's dependence on the quantity and quality of training data and the challenge of applying it to substantially different test conditions. § DISCUSSION This study expands our packet count- and flow duration-based evaluations with additional digital artifacts <cit.>. These materials provide a more comprehensive set of metrics, including accuracy, balanced accuracy, and confusion matrices. Moreover, they offer visualizations that extend beyond the scope of those included in the main body of this paper. §.§ Implications Our findings confirm that models trained and tested on the same data distribution (CF or N) exhibit commendable performance. However, a decrement in packet count introduces a performance dip, suggesting that a model with less data may struggle to generalize well. This effect is particularly pronounced in cases where models trained on complete data (CF) are applied to partial data (PC=N), highlighting potential overfitting to the complete data's characteristics. The analysis indicates that while higher packet counts are associated with extended flow durations, attacks with fewer packets could present challenges for precise anomaly detection due to insufficient detail in flow statistics. For effective binary classification, a threshold of at least 7 packets in the test data is essential to ensure performance reliability when classifying with the CF-trained model (CF train/N test), while for multi-class classification, at least 6 packets are necessary when the model is trained and tested on the same dataset (N train/N test). Fluctuations in performance across various packet counts emphasize the significance of selecting an optimal packet count threshold for maintaining classification accuracy. When training and testing datasets are aligned, performance remains stable. Yet, the application of models developed on complete flows to partial flows poses a significant challenge, underscoring the importance of data quality and volume in training robust anomaly detection models. Additionally, the non-linear trends observed across different flow duration thresholds highlight the complexity inherent in network attack profiling, which cannot be based solely on duration metrics. The adaptive and dynamic nature of such attacks necessitates a more nuanced detection approach that can respond to varied attack patterns and evasion techniques.
Interestingly, while a specific packet count threshold can be identified that yields stable classification performance, no such threshold emerged for flow duration. This underscores the challenge in defining a one-size-fits-all threshold for flow duration-based classification. Overall, the research presented here sheds light on the complex interplay between data distribution and classification efficacy, leading to essential considerations for deploying network security measures. Future models must be designed with the flexibility to adjust to varying levels of traffic information, ensuring that anomaly detection remains both robust and reliable. §.§ Limitations While this research provides valuable insights into anomaly detection using Random Forest models, it is not without limitations that may affect the generalizability and application of the findings. First, the study does not extend to the examination of flows with variable lengths. Our analysis primarily focuses on flows characterized by a fixed number of packets, assessing them from both packet count and flow duration perspectives. This approach limits our ability to fully understand the dynamics of more complex, variable-length network flows that often occur in real-world environments. The fixed-length assessment may overlook nuances that appear only in longer or dynamically changing sequences, potentially affecting the model's ability to generalize across different network conditions. Second, the current study focuses exclusively on the first N packets per flow. This methodology may miss crucial information contained in subsequent packets of a flow, especially in longer or continuous data streams. It might be beneficial to assess the flows through a sliding window approach or a sequence of sub-flows, which could capture changes and anomalies that develop beyond the initial packets. Additionally, analyzing partial flows with packet counts starting from 2 may not adequately address the complexities of modern protocols such as TLS and QUIC, where connection or session setup alone may require more packets than the lower range of our study. Focusing on such an early packet range might overlook crucial information for classification that appears only during or after the completion of these setups. Packet count thresholds also face challenges with hardware offloading technologies, which, after a certain number of packets, can shift the maintenance of flows to fast path processing. Such hardware-level interventions can obscure or completely hide parts of the flow from observation, complicating detection strategies that rely on early-stage categorization. Furthermore, our method of generating partial flows based on flow duration with different thresholds highlights a challenge. Although this approach allows a degree of flexibility by retaining flows whose durations fall within a ±20% range of a specified target duration, the dramatic drop in flow counts for each dataset indicates that it may still be too restrictive. A more refined methodology is needed to better assess and accommodate the variable nature of flow durations, ensuring that our anomaly detection framework can effectively handle the diverse and dynamic characteristics of network traffic. Lastly, this study examines the performance of the RF algorithm. To assess generalizability, we also evaluated Decision Trees (DT), whose code is accessible via <cit.>. While DT approached the performance of RF, RF generally outperformed DT in our evaluations.
However, an extended analysis involving a broader range of MLAs is needed to draw more targeted conclusions about the effectiveness of different models. §.§ Related Work Over the past decades, the domain of early-stage classification has attracted considerable attention. In the realm of anomaly detection, most solutions adopt a sliding window approach to gather data and conduct predictions on a per-window basis. For instance, <cit.> utilize LSTM networks to analyze traffic statistics derived from packets received within fixed time intervals (Δ T = 1s), exploring the impact of various window sizes (W) on system accuracy. Diverging from conventional flow statistics, <cit.> analyze the first N packets using CNNs and Autoencoders for classification. Similarly, <cit.> deploy Autoencoders to detect DDoS and Port Scan attacks, employing diverse flow keys for data preprocessing and a mix of long and short time windows (W_2 = {10s, 10ms}) to optimize detection accuracy. <cit.> aim to reduce human intervention by employing Autoencoders and a threshold-based Nearest-Neighbor Classifier, thereby enhancing the system's classification accuracy as the number of manually reviewed alerts increases. Moreover, <cit.> apply a periodic network monitoring strategy, generating flow statistics with sliding windows and assessing system speed across various micro-window durations (δ T), demonstrating accelerated processing at reduced δ T intervals. Lastly, <cit.> implement an Autoencoder-based method for DDoS detection using direction-dependent flow statistics. While these studies contribute significantly to the field, they often do not comprehensively examine crucial aspects such as packet count and flow duration, both essential for ensuring robust and reliable anomaly detection in real-world scenarios. Our research contrasts the use of packet count and flow duration thresholds for early-stage anomaly detection, establishing a foundational understanding of the trade-offs between various threshold strategies for network traffic analysis. This nuanced approach allows our study to provide practical insights that can significantly influence real-time network security measures. §.§ Comparison with Relevant Research A recent study by <cit.> bears direct relevance to our research. This work proposes a novel tree-based intrusion detection approach that processes a flow as a stream of packet headers, rather than using a fixed-size record structure that aggregates flow statistics. Notably, they employ a Set-Tree model which facilitates the use of set data inputs in tree-based models, including Random Forest, which we also assess in our study. Moreover, like our research, they utilize the CICIDS2017 dataset, training their detector on complete streams but testing it with only the first few packets of a flow. They report that often only 2 or 4 packets are sufficient for highly accurate detection. In contrast, our findings indicate that at least 7 packets are necessary in the test set to achieve adequate performance. This discrepancy may stem from methodological flaws in the CICIDS2017 dataset itself, a concern echoed by other studies <cit.> and our prior work <cit.>. The authors claim exceptionally high performance (over 96%) without retraining their model for partial flow information testing, relying solely on the first 4 packets of each flow.
However, considering that CICIDS2017 consists only of TCP flows, and only one packet represents effective data transmission following the TCP 3-way handshake, this suggests potential model overfitting. Moreover, in modern networks where TLS connection initialization requires more packets before data transmission begins, the practical applicability of their approach may be limited. Our study highlights these issues, stressing the importance of a robust and comprehensive evaluation framework for real-world network security environments. § CONCLUSION This study examined the efficacy of the Random Forest model in the context of anomaly detection, applying it to both complete and partial network flow data derived from a refined version of the CICIDS2017 dataset. Our findings underscore the necessity of using at least 7 packets to achieve reliable anomaly detection performance, contrasting with recent studies that suggest fewer packets may be sufficient. Our research contributes to the development of machine learning models that accurately reflect the challenges of analyzing partial flow data in real-time network environments. By aligning model development and validation with the operational dynamics inherent in real-time anomaly detection, this study helps bridge critical gaps in current research methodologies. Future work will focus on refining data preprocessing techniques to handle variable-length data more effectively and exploring the integration of more complex machine learning models that can automatically adjust to the dynamic nature of network traffic, potentially improving the accuracy and responsiveness of anomaly detection systems. § ACKNOWLEDGEMENT This work was supported by the János Bolyai Research Scholarship of the Hungarian Academy of Sciences. Supported by the ÚNKP-23-5-BME-461 New National Excellence Program of the Ministry for Culture and Innovation from the source of the National Research, Development and Innovation Fund. The work presented in this paper was supported by project no. TKP2021-NVA-02. Project no. TKP2021-NVA-02 has been implemented with the support provided by the Ministry of Culture and Innovation of Hungary from the National Research, Development and Innovation Fund, financed under the TKP2021-NVA funding scheme.
http://arxiv.org/abs/2407.02959v1
20240703095230
Competing for the most profitable tour: The orienteering interdiction game
[ "Eduardo Álvarez-Miranda", "Markus Sinnl", "Kübra Tanınmış" ]
math.OC
[ "math.OC", "cs.DM", "90B06, 90C10, 90C57" ]
Eduardo Álvarez-Miranda^1,2 (ealvarez@utalca.cl), Markus Sinnl^3 (markus.sinnl@jku.at), Kübra Tanınmış^4 (ktaninmis@ku.edu.tr) [1] Department of Industrial Engineering, Faculty of Engineering, Universidad de Talca, Sede Curicó, Chile [2] Instituto Sistemas Complejos de Ingeniería, Chile [3] Institute of Business Analytics and Technology Transformation/JKU Business School, Johannes Kepler University Linz, 4040 Linz, Austria [4] Department of Industrial Engineering, Koç University, 34742 İstanbul, Turkey Competing for the most profitable tour: The orienteering interdiction game § ABSTRACT The orienteering problem is a well-studied and fundamental problem in transportation science. In the problem, we are given a graph with prizes on the nodes and lengths on the edges, together with a budget on the overall tour length. The goal is to find a tour that respects the length budget and maximizes the collected prizes. In this work, we introduce the orienteering interdiction game, in which a competitor (the leader) tries to minimize the total prize that the follower can collect within a feasible tour. To this end, the leader interdicts some of the nodes so that the follower cannot collect their prizes. The resulting interdiction game is formulated as a bilevel optimization problem, and a single-level reformulation is obtained based on interdiction cuts. A branch-and-cut algorithm with several enhancements, including the use of a solution pool, a cut pool and a heuristic method for the follower's problem, is proposed. In addition to this exact approach, a genetic algorithm is developed to obtain high-quality solutions in a short computing time. In a computational study based on instances from the literature for the orienteering problem, the usefulness of the proposed algorithmic components is assessed, and the branch-and-cut and genetic algorithms are compared in terms of solution time and quality. Keywords: Interdiction games, Orienteering problems, Bi-level optimization § INTRODUCTION AND PROBLEM DEFINITION In recent years, interdiction games (IGs) have received considerable attention in the logistics literature, see, e.g., the surveys <cit.>, <cit.> and <cit.>. IGs involve two decision makers, usually called the leader and the follower, who compete in a hierarchical manner. The follower solves an optimization problem defined over a set of assets such as facilities or arcs on a network that the leader can interdict within an interdiction budget. Depending on the concrete setting of the IG, interdicting an asset can either depreciate its value for the follower or completely destroy the asset, making it unusable for the follower. The goal of the leader is to choose the assets to interdict in such a way as to maximize the deterioration of the follower's objective function value. Interdiction games find applications in various areas such as marketing <cit.>, identifying and defending critical infrastructure <cit.>, as well as conservation planning <cit.>. Network interdiction is an important class of IGs, involving the interdiction of some network components at the upper level. Early examples in this area include the interdiction of flows on arcs <cit.>, and the interdiction of shortest paths <cit.>. More recent works address, for example, multi-commodity flow interdiction <cit.>, the traveling salesman problem with interdiction <cit.>, and maximum clique interdiction <cit.>. <cit.> provides a comprehensive survey on network interdiction problems.
In this work, we consider an interdiction version of the well-known orienteering problem (OP). In the OP, we are given a graph with prizes on the nodes and lengths on the edges, along with a budget for the overall length of the tour and a depot node. The goal is to find a tour that respects the length budget, passes through the depot, and maximizes the prizes collected (a node prize is collected if the node is part of the tour) <cit.>. As a result, the OP is a combination of the knapsack problem <cit.> and the traveling salesperson problem <cit.>, and it is also known as the selective traveling salesperson problem <cit.>. This fundamental problem in logistics and transportation is NP-hard and has spawned countless variants and generalizations such as the team OP <cit.>, the OP with time windows <cit.>, and the stochastic OP <cit.>. For an overview on the OP, we refer to the surveys <cit.>. In the interdiction version of the OP, which we call the orienteering interdiction game (OIG), initially the leader interdicts some of the nodes within a budget. Then, the follower solves an OP where it is not possible to collect prizes from the interdicted nodes. The aim of the leader is to choose the nodes to interdict in such a way that the maximum possible prize that the follower collects is minimized. A formal definition of the OIG is given in the following. We are given an undirected complete graph G=(V,E) with node set V, an edge set E, a prize p_i>0 associated with each node i∈ V, and a length d_e associated with each edge e∈ E. When a node i∈ V is interdicted by the leader, its prize is captured by the leader and cannot be collected by the follower even if node i is visited in the follower tour. Therefore, the leader's goal is to interdict a subset of nodes, whose total cost does not exceed an interdiction budget (Q_ℓ), so as to minimize the maximum profit (that is, the sum of the prizes of the nodes in the tour) that the follower can achieve by performing a tour, starting at the depot ρ_f ∈ V, with a total distance not exceeding a total distance budget B_f. An illustration of the OIG is presented in Figure <ref>, on a TSPLIB95 instance (<http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/>). The instance involves 29 cities in Bavaria and connections between all city pairs. We arbitrarily determine city 1 as the depot node and consider unit prizes, unit interdiction costs, and three levels of the interdiction budget, i.e., Q_ℓ∈{0,5,8}. Note that with an interdiction budget of zero the obtained problem is just the OP without interdiction. As the distance budget we used B_f = 0.5ν, where ν denotes the optimal TSP tour length, which is provided with the instances. In the figure, the optimal leader solution consists of the green nodes, and the resulting follower tour is shown in each figure. The total prizes collected by the follower are 16, 12, and 11, respectively. The OIG can model several applications, such as preventing competitors from effective canvassing, also known as door knocking, which is considered crucial for political campaigns <cit.>. In such a setting, the leader, who wants to minimize the support for the competitor via canvassing and who has limited resources, should identify and convince key groups of individuals. The OIG can also be used to identify critical locations for patrolling. While studies including <cit.> focus on the maximization of patrol coverage, in the interdiction version the leader's problem could model an adversary who wants to damage the benefit/coverage of patrolling.
Solving this problem reveals important locations whose interdiction undermines the security operations in the worst possible way. Lastly, the leader can model security forces who want to prevent criminal activities of a moving follower. An example is to find the best spots in a touristic district to continuously monitor so that the pickpockets who stroll around are restrained. To the best of the authors' knowledge, there is limited work considering interdiction within a routing problem. In Section <ref> we provide a literature review on existing problems and studies in this area. §.§ Contribution and outline The OIG is a two-player Stackelberg game <cit.> and thus it can be modeled as a bilevel optimization problem (BOP). We first formulate the OIG as a BOP and then propose a single-level reformulation based on so-called interdiction cuts to tackle this challenging problem. This technique was introduced in <cit.> for interdiction games fulfilling a certain monotonicity assumption, and we show that cuts of this form can also be used for the OIG. Based on this reformulation, we develop a branch-and-cut algorithm to solve the OIG and introduce various enhancements for the algorithm. For solving the lower level problem within this algorithm, we make use of the branch-and-cut ideas proposed by <cit.> for the OP. The main contributions of our work can be summarized as follows: * We introduce the OIG, which is a competitive version of the well-known OP and can model various applications ranging from security games to campaign planning. * We formulate the OIG as a zero-sum BOP with binary decision variables in both levels. Then, we propose a single-level reformulation of the problem using interdiction cuts. To the best of our knowledge, this method has not been used for any other routing problem yet. * Based on this formulation, we propose an algorithmic framework to solve the OIG exactly via branch-and-cut. This framework includes several components such as cut pools and integrated heuristic procedures which could reduce the computational burden. * We develop a genetic algorithm to heuristically solve the OIG. * We provide a computational study on instances from the literature adapted to the OIG to assess the efficacy of our solution algorithms and their ingredients. The paper is organized as follows. In Section <ref> we introduce the BOP formulation of the OIG and propose a single-level reformulation. We then describe a branch-and-cut method together with several enhancement strategies. In Section <ref>, we present our genetic algorithm. We evaluate the performance of our solution approaches in Section <ref>. Finally, we draw conclusions and provide possible future research directions in Section <ref>. §.§ Literature review In this section, we provide an overview of the literature on interdiction games involving routing decisions and on routing problems in a bilevel optimization setting. First, we focus our attention on studies that consider routing interdiction. In one of them, <cit.> address a fortification-interdiction variant of the traveling salesperson problem where the arcs are subject to protection and interdiction. Unprotected arcs can be interdicted by the attacker, which increases the cost of those arcs. They propose an exact iterative algorithm based on sampling of feasible tours. In <cit.>, the vehicle routing problem with complete arc interdictions is considered, and a Benders decomposition algorithm is proposed which is capable of solving small-sized problems.
In <cit.>, a hazmat routing interdiction problem is presented in which the leader aims to minimize the risk and the follower minimizes the routing cost. They propose metaheuristic methods to solve it. Several variants of the arc-interdiction vehicle routing problem are addressed by <cit.> and <cit.>, and are handled via heuristic/metaheuristic methods. To the best of our knowledge, neither node interdictions nor a follower solving an orienteering problem were considered before. Next, we review works addressing a bilevel optimization problem (BOP) involving routing in its lower level, i.e., a more general setting. In BOPs, each player has his/her own objectives and constraints, and the leader, who acts first, anticipates the optimal follower response (see, e.g., <cit.> for an overview on BOPs). An IG is a special class of BOPs where the leader and the follower optimize the same objective function in opposite directions, while the leader affects the follower problem via interdictions of his/her assets. Regarding exact approaches for BOPs with a routing component, <cit.> introduce the bilevel profitable tour problem where the leader is the logistics platform assigning orders to carriers and the followers are the carriers solving a profitable tour problem. They develop a branch-and-cut algorithm to solve the problem exactly. Aside from this work, which uses an exact algorithm, there are also several works on BOPs with a routing component which propose metaheuristic algorithms to solve the addressed problem: The paper <cit.> addresses a vehicle routing problem with backhauls and time windows in a military context. The problem is formulated as a BOP where the goal of the leader is to minimize the number of vehicles, and the follower wants to minimize the total routing cost. <cit.> consider a vehicle routing problem with uncertain travel times. In the proposed bilevel model, the leader minimizes the expected total waiting time, and each follower (vehicle) minimizes its own waiting time. In the production-distribution planning problem considered by <cit.>, the distribution and manufacturing companies act respectively as the leader and the follower of a bilevel model. Each player seeks to minimize their costs. <cit.> study a similar problem in which the distributor has CO_2 emission goals in addition to profit maximization. This leads to a bi-objective leader problem. <cit.> consider the problem of bus stop location and school bus routing, which is modeled as a BOP. Aside from these works, which model competitive settings with multiple agents, sometimes routing problems just involving a single level (i.e., a single decision maker) are also modeled as BOPs: <cit.> formulate the vehicle routing problem as a BOP such that in the first level customers are assigned to vehicles and in the second level optimal routes of these assignments are determined. Similarly, <cit.> model the location routing problem with a BOP formulation whose leader makes the strategic decisions (facility locations) and whose follower makes operational decisions (routes). Both problems are solved via genetic algorithms. <cit.> formulates the capacitated electric vehicle routing problem as a BOP whose upper level involves the routing decisions and the lower level involves determining the charging schedule. They propose an ant colony optimization algorithm. Note that there are generic methods for solving interdiction games under certain assumptions, such as the iterative bounding algorithms in <cit.>, or the branch-and-cut method in <cit.>.
Similarly, there exist methods for solving integer bilevel linear programming problems <cit.>. However, due to the inherent difficulty of interdiction games and integer bilevel linear programming problems, these generic methods are usually only capable of solving rather small-sized (generic) instances. For this reason, we design an exact algorithm tailored to the OIG (based on ideas from <cit.>) and also propose a genetic algorithm to heuristically obtain high-quality solutions within a shorter running time compared to our exact approach. § AN INTERDICTION-CUT-BASED EXACT SOLUTION METHOD FOR THE OIG In order to formulate the OIG as a bilevel integer program, we introduce the following notation. Let 𝐳∈{0,1}^|V| be a vector of decision variables, so that z_i = 1 if the leader interdicts node i∈ V, and z_i = 0 otherwise; hence, z_i=1 implies that the leader prevents the follower from getting the prize of node i. Likewise, let 𝐲∈{0,1}^|V| be a vector of decision variables so that y_i = 1 if the follower visits node i∈ V in his/her tour, and y_i = 0 otherwise; additionally, let 𝐱∈{0,1}^|E| be a vector of decision variables so that x_e = 1 if the follower traverses the edge e in the tour, and x_e = 0 otherwise. Let τ(𝐱,𝐲) denote the tour induced by a given pair (𝐱,𝐲); and let 𝒯 be the set of all feasible tours, i.e., cycles that do not contain subtours and that pass through the depot ρ_f, where each node in the tour is visited at most once. Considering this notation, the OIG can be formulated by the following (bilevel) MIP model: ϕ^∗ = min_𝐳∈{0,1}^|V| max_(𝐱,𝐲)∈{0,1}^|E|×{0,1}^|V| ∑_i∈V p_i(1-z_i)y_i (OBJ), subject to ∑_i∈V z_i ≤ Q_ℓ (IBUDGET), ∑_e∈E d_e x_e ≤ B_f (DBUDGET), and τ(𝐱,𝐲) ∈ 𝒯 (TOUR). The objective function (OBJ) encodes the (bilevel) min-max optimization goal, which is the minimization of the maximum prize collected from the nodes that are visited and non-interdicted. Constraint (IBUDGET) imposes that the total number of interdictions does not exceed the interdiction budget Q_ℓ. Likewise, constraint (DBUDGET) imposes that the total distance of the follower's tour does not exceed the follower's distance budget B_f. Finally, constraint (TOUR) ensures that the vectors 𝐱 and 𝐲 induce a feasible tour that includes the depot ρ_f. §.§ A single-level reformulation of the OIG Before we present a single-level reformulation of the OIG, we first formulate constraint (TOUR) using subtour elimination constraints. For a given subset of nodes S⊆ V, let δ(S) = {e={i,j}∈ E | i∈ S, j∈ V∖ S}; i.e., δ(S) corresponds to the set of edges that are incident to the nodes contained in set S and have their other endpoint outside of S. Using this definition, τ(𝐱,𝐲) ∈𝒯 can be encoded by the following set of constraints: ∑_e∈δ(j) x_e = 2y_j for all j∈ V (SEC.1); ∑_e∈δ(S) x_e ≥ 2y_j for all j∈ V∖ S and all S⊂ V with ρ_f∈ S (SEC.2); and y_ρ_f = 1 (SEC.3). Constraints (SEC.1) model the fact that if a node j'∈ V is included in the follower's tour (i.e., y_j'=1), then exactly two of its adjacent edges must also be included in the tour (∑_e∈δ(j') x_e = 2). Constraints (SEC.2) are the so-called generalized subtour elimination constraints (GSECs), and they ensure that the tour defined by 𝐱 and 𝐲 does not contain subtours. Constraint (SEC.3) ensures that the depot ρ_f is included in the follower's tour. Let Φ(𝐳) be the value function of the lower level problem for a given interdiction decision 𝐳, i.e., Φ(𝐳) = max { ∑_i∈V p_i(1-z_i)y_i : (𝐱,𝐲)∈{0,1}^|E|×{0,1}^|V| satisfying (DBUDGET) and (SEC.1)–(SEC.3) }. Then, we can write the value function reformulation of the OIG as follows: ϕ^∗ = min_𝐳∈{0,1}^|V| t, subject to t ≥ Φ(𝐳) and (IBUDGET).
Note that the value function reformulation is non-convex, even when the binary restrictions are relaxed, due to the value function constraint t≥Φ(𝐳), which ensures that the objective function value of the upper level is at least as large as the optimal objective function value of the lower level for the selected interdiction decision. However, it can be further reformulated by considering the feasible follower solutions as follows. Let 𝐘 be the set of all vectors ŷ such that there exists a tour τ(𝐱̂,ŷ) ∈𝒯 for some 𝐱̂∈{0,1}^|E|, satisfying the follower's distance budget B_f. Using this notation, we can obtain the following single-level reformulation of the OIG: ϕ^∗ = min_𝐳∈{0,1}^|V| t, subject to t ≥ ∑_i∈V p_i(1-z_i)ŷ_i for all ŷ∈𝐘 (ICUT), and (IBUDGET). In this formulation, (ICUT) corresponds to the set of interdiction cuts (following the terminology of <cit.>). These cuts model the value function constraint t≥Φ(𝐳). The formulation OIGS models the OIG. We have to show that for any interdiction decision 𝐳 the set of constraints (ICUT) ensures that t will have the value of Φ(𝐳) (taking into account that the objective function forces t = max_ŷ∈𝐘 ∑_i∈V p_i(1-z_i)ŷ_i). Suppose this is not the case. This means there exists an interdiction decision 𝐳̅ for which either Φ(𝐳̅) > ∑_i∈V p_i(1-z̅_i)ŷ_i for all ŷ∈𝐘, or there exists ŷ∈𝐘 with ∑_i∈V p_i(1-z̅_i)ŷ_i > Φ(𝐳̅). We proceed by case distinction. * Assume Φ(𝐳̅) > ∑_i∈V p_i(1-z̅_i)ŷ_i for all ŷ∈𝐘: Let 𝐲^*(𝐳̅) be an optimal solution of Φ(𝐳̅). Clearly 𝐲^*(𝐳̅) ∈𝐘 and ∑_i∈V p_i(1-z̅_i) y^*(𝐳̅)_i = Φ(𝐳̅). Thus we arrive at a contradiction to our assumption. * Assume there exists ŷ∈𝐘 with ∑_i∈V p_i(1-z̅_i)ŷ_i > Φ(𝐳̅): Since the interdiction decisions do not affect the feasibility of the lower level problem, ŷ is feasible for the lower level problem given interdiction decision 𝐳̅. Consequently, the value of Φ(𝐳̅) must be at least ∑_i∈V p_i(1-z̅_i)ŷ_i. Thus we arrive at a contradiction to our assumption. We have arrived at a contradiction in both cases, which concludes our proof. §.§ A branch-and-cut algorithm In order to solve the OIG, we propose a branch-and-cut (B&C) algorithm based on OIGS, where we drop the interdiction cuts (ICUT) and add them on-the-fly. In the remainder, the linear programming (LP) relaxation of any B&C subproblem includes a subset of (ICUT), in addition to (IBUDGET), and the branching decisions made to reach the current B&C node. We note that in order to separate (ICUT) we employ another B&C algorithm within our main B&C algorithm; details are given below. §.§.§ Separation of interdiction cuts In this section, we describe how we separate the inequalities (ICUT) while implementing the B&C algorithm to solve the OIG. Let (t̅,𝐳̅) be a feasible solution to the LP relaxation of the current B&C subproblem. We need to solve the following separation problem, which is identical to the follower's problem for 𝐳=𝐳̅: Φ(𝐳̅) = max { ∑_i∈V p_i(1-z̅_i)y_i : (𝐱,𝐲)∈{0,1}^|E|×{0,1}^|V| satisfying (DBUDGET) and (SEC.1)–(SEC.3) } (SEP). If t̅ < Φ(𝐳̅), then we need to add a violated interdiction cut to separate the point (t̅,𝐳̅). Otherwise, t̅ captures the optimal follower objective value correctly, and we treat the current point as a feasible solution to our problem.
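The violation check underlying this separation step is simple: a follower solution ŷ∈𝐘 yields a violated interdiction cut whenever t̅ < ∑_i∈V p_i(1-z̅_i)ŷ_i. The following sketch (in Python) illustrates this check over a pool of previously found follower tours; the function names and the tour-pool interface are ours, chosen for illustration, and not taken from the paper. Note that the check applies to fractional z̅ as well, which is how a pool-based heuristic separation of fractional solutions can be implemented; if no pool tour yields a violation, the exact separation problem (SEP) still has to be solved for integer leader solutions.

```python
def interdiction_cut_rhs(prizes, z_bar, y_hat):
    """Right-hand side of an interdiction cut: sum_i p_i * (1 - z_i) * y_i."""
    return sum(prizes[i] * (1.0 - z_bar[i]) * y_hat[i] for i in prizes)

def find_violated_cut(prizes, t_bar, z_bar, tour_pool, eps=1e-6):
    """Scan a pool of previously found follower tours (given as sets of visited
    nodes) and return one whose interdiction cut is violated by (t_bar, z_bar),
    or None if no pool tour yields a violation."""
    best_tour, best_value = None, t_bar
    for tour_nodes in tour_pool:
        y_hat = {i: (1 if i in tour_nodes else 0) for i in prizes}
        value = interdiction_cut_rhs(prizes, z_bar, y_hat)
        if value > best_value + eps:     # t_bar < sum_i p_i (1 - z_i) y_i: violated
            best_tour, best_value = tour_nodes, value
    return best_tour
```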
We enhance the formulation SEP by making use of two classes of valid inequalities for the orienteering problem <cit.>. The first class of inequalities, which we refer to as logical constraints, is given by x_e ≤ y_j for all e∈δ(j) and all j∈ V (Logical). Basically, these constraints ensure that if a given node j is not visited by the follower's tour (y_j = 0), then none of the incident edges is part of the tour (x_e = 0 for all e∈δ(j)). The second class of inequalities corresponds to the cycle cover inequalities, which are given by ∑_e∈E_τ x_e ≤ ∑_j∈V_τ y_j − 1 (CC) for a given tour τ that is encompassed by the nodes in V_τ and by the edges in E_τ, and with a total distance ∑_e∈E_τ d_e larger than B_f. These constraints ensure that if the follower's tour includes the nodes in V_τ, then at least one of the edges in E_τ must be excluded from the tour in order not to violate the follower's distance budget constraint (DBUDGET). §.§.§ Separation of follower cuts To solve the separation problem (SEP), we carry out another B&C procedure where we drop the subtour elimination constraints (SEC.2) from the initial formulation and add violated ones once they are detected, on-the-fly. In addition, we make use of the valid inequalities described in the previous section. Suppose that we are given the (possibly infeasible) follower solution (𝐱̅, 𝐲̅). In the following, we describe how we separate the inequalities of each type at (𝐱̅, 𝐲̅). Generalized subtour elimination constraints. These inequalities are obtained using the maximum flow-based approach proposed by <cit.>. In this approach, given (𝐱̅, 𝐲̅), the so-called support graph is first obtained, which contains the nodes and edges of the original graph with y̅_j>0 and x̅_e>0, respectively. On this graph, a minimum capacity cut separating the depot and any non-depot node leads to a (possibly violated) GSEC. The nodes are considered one by one, according to the non-decreasing order of y̅_j, and then the maximum flow computations are done. We refer the reader to <cit.> for further details. Logical constraints. These inequalities are obtained by complete enumeration of node-edge pairs such that x̅_e > y̅_j where e∈δ(j), which can be done in O(|E|) time. Cycle cover inequalities. These inequalities are obtained via the heuristic procedure in <cit.>. This heuristic takes 𝐱̅ as input and computes a maximum-weight spanning tree. Then, for each edge e that is not in the tree, if adding e creates a cycle that passes through ρ_f, the violation of the cut for the resulting tour is checked. At every (follower) B&C node with a fractional solution (𝐱̅,𝐲̅), we first look for all possible violated valid inequalities (<ref>). If none is found, then we try to find violated inequalities (<ref>). If no violated cuts are identified, then we try to obtain violated inequalities (<ref>). In case there are no violated inequalities of these three types, no action is needed. For integer feasible (𝐱̅,𝐲̅), we only look for violated inequalities of type (<ref>). If there is none, (𝐱̅,𝐲̅) is a feasible solution to SEP and it replaces the incumbent solution if it is a better one.
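The max-flow-based GSEC separation just outlined can be sketched as follows. This is a minimal illustration in Python; it relies on networkx purely for the minimum-cut computation, and the function names and data layout are ours rather than the authors'. For every violated cut found, the GSEC ∑_e∈δ(S) x_e ≥ 2y_j defined by the depot-side node set S can be added to (SEP).

```python
import networkx as nx

def separate_gsecs(x_bar, y_bar, depot, eps=1e-6):
    """Max-flow-based GSEC separation on the support graph.
    x_bar: {(i, j): value} for edges with positive LP value; y_bar: {i: value}.
    Returns violated cuts as (S, j) pairs, meaning sum_{e in delta(S)} x_e >= 2*y_j
    is violated, with the depot contained in S and j outside of S."""
    support = nx.DiGraph()
    support.add_node(depot)
    support.add_nodes_from(i for i, v in y_bar.items() if v > eps)
    for (i, j), v in x_bar.items():
        if v > eps:                      # model the undirected edge by two arcs
            support.add_edge(i, j, capacity=v)
            support.add_edge(j, i, capacity=v)

    violated = []
    # Non-depot nodes are considered by non-decreasing y_bar, as described above.
    for j in sorted((i for i in support.nodes if i != depot),
                    key=lambda i: y_bar.get(i, 0.0)):
        if y_bar.get(j, 0.0) <= eps:
            continue
        cut_value, (S, _) = nx.minimum_cut(support, depot, j)
        if cut_value < 2.0 * y_bar[j] - eps:
            violated.append((S, j))      # violated GSEC on the cut set delta(S)
    return violated
```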
§.§ Enhancement strategies While solving the OIG with the B&C approach that we propose, we make use of some strategies that could help to speed up the algorithm, in particular the separation procedure. Below, we describe the algorithm components developed to this end. In our computational experiments, we try different combinations of these enhancement strategies. The details of these combinations, including the values used for the parameters that occur in some of the strategies described in the following, are given in Section <ref>. Separation problem objective lower bound (lower cutoff). For a given leader solution (t̅,𝐳̅) that is the optimal solution of the current B&C subproblem, the optimal objective value Φ(𝐳̅) of the separation problem (SEP) cannot be less than t̅, since the interdiction cuts underestimate the follower objective value. Therefore, we can set a lower cutoff value t̅ on the objective function value while solving (SEP), for more efficient pruning of nodes in the B&C tree of (SEP). Cut pool. Every time (SEP) is solved, inequalities of type (<ref>) and (<ref>) are generated. We keep a pool of previously obtained inequalities of these types to be used in later attempts to solve the separation problem. Based on preliminary experiments, instead of including all the inequalities of the cut pool in the initial formulation of (SEP), we iterate over the pool and add the violated ones at the root node of the B&C tree, after each time the node subproblem is solved. Solution pool. Every time (SEP) is solved for a given leader solution 𝐳̅, which yields a feasible solution (𝐱̅,𝐲̅), we retrieve the obtained tour c = τ(𝐱̅,𝐲̅) and add it to a (solution) pool denoted by 𝒞. These solutions are utilized to generate new feasible follower solutions that possibly yield violated interdiction cuts, as described in the following paragraph. We denote by V_c={j∈ V: y̅_j=1} and E_c={e∈ E: x̅_e=1} the sets of nodes and edges, respectively, associated with tour c. Heuristic follower solutions. For the correctness of the overall B&C algorithm, it is necessary to have an exact separation method for integer leader solutions, i.e., a method that returns a violated interdiction cut when, in the current solution (t̅,𝐳̅), t̅ strictly underestimates the follower objective value for 𝐳̅. On the other hand, it may be possible to separate fractional solutions as well, which could improve the dual bound faster. Since it is usually costly to carry out an exact method for every fractional solution encountered during B&C, the common approach is to use a heuristic separation algorithm that can yield violated cuts <cit.>. To this end, we propose a heuristic separation scheme that can be used for fractional and integer solutions, where we iterate over the solution pool 𝒞. As described in Algorithm <ref>, for each tour c in the pool, we first repair it by removing the nodes whose prizes cannot be collected under the current interdiction strategy 𝐳̅, unless this makes the path longer. Then, we apply local improvement operations until the objective value cannot be improved, or the prize threshold is exceeded. The latter means that we are able to find a follower solution yielding a violated cut. Follower preprocessing. Given a leader solution 𝐳̅, let i' be a node selected by the leader, i.e., z̅_i' = 1. If there exists no pair a,b∈ V such that d_ai'+d_i'b < d_ab, it is safe to assume that the follower would not visit i' in an optimal tour, since its prize is not available to the follower anymore. In this case, we can fix y_i' = 0 in SEP. Otherwise, one of the following conditions could hold in an optimal follower tour: (i) i' is not visited, (ii) i' is visited between a node pair a and b where d_ai'+d_i'b < d_ab, or (iii) i' is visited between a node pair a and b where d_ai'+d_i'b ≥ d_ab, i.e., visited although it does not make the tour shorter. In the last case, removing i' would not compromise the optimality of the follower's tour since prize p_i' is not collected anyway.
So, exactly one of the following conditions must hold: (y_i'=0), (x_a_1 i'+x_i' b_1=2), …, (x_a_k i'+x_i' b_k=2), where (a_1,b_1),…,(a_k,b_k) are all pairs with d_a_ℓ i'+d_i' b_ℓ < d_a_ℓ b_ℓ. This result can be used to strengthen the formulation (SEP) with additional constraints. The details are provided in Algorithm <ref>. § A GENETIC ALGORITHM FOR THE OIG In this section, we propose a genetic algorithm to solve the OIG. In a genetic algorithm, it is often helpful if the fitness value of an individual z' of the population is equal to the objective function value. For the OIG, this would mean that the fitness value should be Φ(z'), which is the optimal objective value of the OP under the interdiction decision z'. Hence, to evaluate the fitness of an individual, we would need to solve an NP-hard problem. Thus, in the design of our algorithm, for better time efficiency, we opt for a faster, heuristic method to calculate the fitness values. This fitness-evaluation procedure is very similar to Algorithm <ref> and uses a follower solution pool to generate a good follower solution for the current interdiction strategy. Unlike Algorithm <ref>, it does not take a target objective as input, but instead iterates over all pool solutions and returns the best new solution obtained and its total prize. Note that the resulting pair of leader and follower solutions may not be bilevel feasible, as the follower tour that it outputs is only a heuristic solution for the given interdiction strategy (i.e., leader solution); to be feasible, it needs to be an optimal solution for the given leader solution. However, this is not a problem since we use the fitness value only to lead our search towards a better leader solution. To ensure bilevel feasibility of the final solution, a post-processing step is carried out to obtain the optimal follower tour. The pseudocode of the genetic algorithm is provided as Algorithm <ref>. It starts with the generation of an initial set of follower solutions 𝒞 to be used in later steps. In this step, we use the heuristic algorithm in <cit.> to solve the OP, which takes the optimal solution of the LP relaxation of the OP at the current B&C node, generates a feasible tour in the first stage, and improves it in the second stage. We refer the interested reader to <cit.> for further details. In our case, we implement this method k_0 times, i.e., we solve the LP relaxation of the follower problem for k_0 randomly generated feasible 𝐳 values. For each of them, once we get a feasible tour and improve it as described by <cit.>, we remove, in turn, one of the tour nodes and reapply the improvement procedure. We add the resulting tour to 𝒞 if it was not obtained in the previous iterations. After initializing 𝒞, we initialize the population with p_0 interdiction strategies obtained with a randomized greedy algorithm. It starts with the computation of initial marginal gains, i.e., estimates of the decrease in the follower objective due to interdicting each node in V, and stores them in a list sorted in decreasing order of gains. Then we iterate in a lazy fashion and pick the first node in the list. If its gain has not been updated after the last interdiction decision, we recompute it with probability 1-p_s and re-sort the list; with the skipping probability p_s, we remove the node from the list. If the first node in the list has a newly computed gain value, we choose it to be the next node to interdict. We iterate until the interdiction budget Q_ℓ is reached or the list is empty; a minimal sketch of this construction is given below.
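The sketch below (in Python) illustrates this lazy randomized greedy construction under the assumption of unit interdiction costs, as in our computational study. The oracle estimate_follower_prize stands in for the pool-based fitness evaluation; this name and the overall data layout are ours and are not taken from the paper.

```python
import random

def randomized_greedy_interdiction(nodes, budget, estimate_follower_prize, p_skip=0.4):
    """Build one interdiction strategy (a set of interdicted nodes) greedily.
    estimate_follower_prize(interdicted_set) is a hypothetical oracle returning an
    estimate of the prize the follower can collect under the given interdictions;
    unit interdiction costs are assumed, so the budget caps the number of nodes."""
    interdicted = set()
    base = estimate_follower_prize(interdicted)
    # Marginal gain of node i: estimated decrease in follower prize when interdicting i.
    lazy = sorted(((i, base - estimate_follower_prize({i}), True) for i in nodes),
                  key=lambda entry: -entry[1])
    while lazy and len(interdicted) < budget:
        i, gain, fresh = lazy.pop(0)
        if fresh:
            interdicted.add(i)                                 # gain is up to date
            base = estimate_follower_prize(interdicted)
            lazy = [(j, g, False) for j, g, _ in lazy]         # remaining gains go stale
        elif random.random() < p_skip:
            continue                                           # skip node (diversification)
        else:
            gain = base - estimate_follower_prize(interdicted | {i})  # recompute gain
            lazy.append((i, gain, True))
            lazy.sort(key=lambda entry: -entry[1])
    return interdicted
```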
For the computation of marginal gains at any stage of this construction, we use the heuristic fitness-evaluation procedure described above. Due to the skipping probability, each call to the randomized greedy algorithm yields a different interdiction strategy, i.e., an individual. The selection of the parents is made according to a K-way tournament selection. For each of the two parents, K individuals are randomly chosen and the fittest one is selected as a parent. A one-point crossover operator is applied to the parents to generate a single offspring. Then, the mutation operator applies zero, one, or two random bit flips with equal probabilities. If the offspring does not represent a feasible leader solution, we repair it by switching ones to zeros at the bits with the smallest prize until the budget constraint is satisfied. The fitness of the offspring is then computed with the heuristic fitness-evaluation procedure. We use a steady-state (incremental) population structure, where an offspring immediately replaces the individual with the worst fitness value if the maximum population size p_max is reached. The accuracy of the fitness evaluation depends on the quality and diversity of the solutions in 𝒞. Therefore, every time we call this procedure, we add the resulting tour to 𝒞, considering an upper bound on the size of the pool. Since the pool is dynamically updated, the fitness value of an individual may change and get closer to the true value when it is re-calculated after some iterations. To better estimate the true objectives, we re-calculate the fitness values of the best k individuals every 10 iterations. Similarly, once the iterations are over, we re-calculate the fitness of each individual to determine the fittest one. Finally, to have a bilevel feasible solution, we solve the follower problem optimally for the selected leader solution corresponding to the fittest individual. § COMPUTATIONAL RESULTS AND DISCUSSION All the algorithms we propose are implemented in C++. Whenever we need to solve MIPs or LPs, they are solved via IBM ILOG CPLEX 12.10 at its default settings. We make use of CPLEX callbacks to implement our B&C algorithm (including the B&C algorithm to solve SEP). During our experiments, we used a single core of an Intel Xeon E5-2670v2 machine with a 2.5 GHz processor and 3GB of RAM. We set a time limit of one hour for all of our experiments. §.§ Description of the instances Our data set consists of a subset of the symmetric traveling salesman problem instances available at TSPLIB (<http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/>). Among these 111 available instances, we select as our test instances the 38 for which the single-level OP can be solved optimally in less than 15 minutes in our environment. For determining the node prizes, we consider two options: unit prizes, denoted by u, and pseudorandom prizes, denoted by r. For random prizes, we follow the approach in <cit.> and generate the values according to the equation p_i = 1 + ((7141 i + 73) mod 100). Two interdiction budget levels Q_ℓ ∈ {5,8} are considered. The distance budget is determined as B_f = 0.5ν, where ν denotes the optimal TSP tour length, which is provided with the instances. The resulting instance set is available at <https://msinnl.github.io/pages/instancescodes.html>. §.§ Results of the B&C algorithm In our experiments with the B&C algorithm, we consider the following algorithmic settings, which are obtained by including a subset of the enhancement strategies we propose in Section <ref>: * I: Only integer solutions are separated, in an exact way. * IF: Both integer (exact) and fractional (heuristic) solutions are separated.
* IFH: Both integer (heuristic + exact) and fractional (heuristic) solutions are separated. * IFHC: In addition to IFH, we keep a (follower) cut pool. * IFHCP: In addition to IFHC, we apply the follower preprocessing. The lower cutoff strategy is applied in all settings by default. The follower solution pool 𝒞 is used in all settings except I, where we do not generate heuristic follower solutions. In setting IFH, while separating integer solutions, we first try the heuristic method shown in Algorithm <ref>. If it does not yield a violated cut, then we solve the follower problem optimally to obtain a violated cut if one exists (exact separation). Note that whenever we keep a (follower) cut pool, its size is bounded by 5000 cuts in total. In the experiments with fractional separation, at any (leader) B&C node with a fractional node solution, at most 10 passes of cut generation are allowed. In Table <ref> and Table <ref> we show some numerical results of our experiments, as averages over the instances with the same leader budget Q_ℓ. In the columns we show the algorithmic setting, the total running time in seconds (t(s.)), the time to generate the interdiction cuts including the time to solve (SEP) (t_SEP(s.)), the optimality gap at the end of the time limit (Gap(%)), the optimality gap at the root node (rGap(%)), the number of optimally solved instances out of all 38 (nOpt), the number of B&C nodes generated (nBBnode), the number of interdiction cuts at integer solutions (intCuts), and the number of interdiction cuts at fractional solutions (fracCuts). The numbers are averages over 38 instances. The results indicate that separating fractional solutions is very effective in decreasing the solution time and tree size. For the unit-prize instances, separating integer solutions in a heuristic way also seems to significantly decrease the solution time, through a reduced number of times we need to solve SEP. This is not the case for the random-prize instances, which we explain by the poor quality of the heuristic solutions: solving SEP usually cannot be avoided because the heuristic fails to find a violated cut, even though one exists. Keeping a follower cut pool, on the other hand, is effective under both prize choices. It reduces the overall solution time by reducing the time spent to solve SEP. Lastly, the follower preprocessing brings some improvement in terms of the solution time of the random-prize instances, although its marginal contribution is not as large as that of the other components. Figure <ref> shows the cumulative distribution of the running times of all instances under different algorithmic settings. While we are able to solve 44% of the instances optimally in one hour under the basic setting I, this ratio is reached in two minutes under the setting IFHCP, which is the best performer in terms of run time. §.§ Results of the genetic algorithm For our genetic algorithm, we consider the following parameter values, which we determined in preliminary tests. Before the main iterations, we initialize the follower solution pool 𝒞 by applying the heuristic of <cit.> k_0=10 times. An initial population of p_0=20 individuals is created using a skipping probability p_s=0.4. At every iteration, a 3-way tournament is applied to choose the parents. We allow a maximum population size of p_max=100, and limit the size of 𝒞 to 2000, as the complexity of the fitness evaluation increases with it. Every 10 iterations, the objective value of the fittest individual is re-evaluated, which is likely to change its rank. The maximum number of iterations is n_maxIter=5000. A compact sketch of the genetic operators used at each iteration is given below.
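The sketch (in Python) illustrates the K-way tournament selection, the one-point crossover, the mutation with zero, one, or two bit flips, and the budget repair that switches ones to zeros at the smallest-prize bits. Unit interdiction costs are assumed, individuals are represented as 0/1 tuples, and all names are ours; this is an illustration, not the authors' implementation.

```python
import random

def tournament_select(population, fitness, k=3):
    """K-way tournament: the leader minimizes, so the lowest estimated follower prize wins.
    `fitness` is assumed to map each individual (a 0/1 tuple) to its estimated objective."""
    contenders = random.sample(population, k)
    return min(contenders, key=lambda ind: fitness[ind])

def one_point_crossover(parent_a, parent_b):
    """One-point crossover on two 0/1 interdiction vectors of equal length."""
    cut = random.randint(1, len(parent_a) - 1)
    return parent_a[:cut] + parent_b[cut:]

def mutate(individual):
    """Apply zero, one, or two random bit flips with equal probabilities."""
    bits = list(individual)
    for _ in range(random.choice((0, 1, 2))):
        pos = random.randrange(len(bits))
        bits[pos] = 1 - bits[pos]
    return tuple(bits)

def repair(individual, prizes, budget):
    """If the budget is exceeded (unit interdiction costs assumed), switch ones to
    zeros at the interdicted nodes with the smallest prizes until it is satisfied."""
    bits = list(individual)
    ones = sorted((i for i, v in enumerate(bits) if v == 1), key=lambda i: prizes[i])
    while sum(bits) > budget:
        bits[ones.pop(0)] = 0
    return tuple(bits)
```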
In Table <ref>, we compare the B&C results with the results of the genetic algorithm given in Algorithm <ref>. The table displays the results of each instance in terms of the run time in seconds (t(s.)), the best objective value obtained via Algorithm <ref> (z_GA), the best objective value obtained via the B&C method (z_BC), and their relative difference Δ = 100(z_GA-z_BC)/z_BC. Since not all z_BC values are optimal objective values, Δ shows how far the heuristic solution objective value is from the best primal bound we have. The average solution time of the genetic algorithm is less than three minutes over all instances, whereas it finds solutions that are only 2.46% away from the B&C objective values on average. The maximum deviation from z_BC is 10.42% for unit-prize instances and 13.29% for random-prize instances. These results indicate that our genetic algorithm performs well in case we need to obtain a high-quality feasible solution in a short time. § CONCLUSIONS AND FUTURE WORK In this work, we propose the orienteering interdiction game where two players (leader and follower) compete in a hierarchical manner: The follower tries to maximize the total profit collected by visiting nodes (i.e., the sum of the prizes of the visited nodes) and the leader wants to minimize this amount by interdicting nodes (if a node is interdicted, the follower does not gain the prize of the node when visiting it). Such a setting may be encountered during political campaign planning to damage the effectiveness of the competitor's canvassing, in security operations where the routing-based activities of an adversary agent are prevented, or in the analysis of worst-case scenarios of attacks towards patrolling security forces. This zero-sum Stackelberg game can be modeled as a bilevel optimization problem and further reformulated as a single-level problem adapting the so-called interdiction cuts, which were introduced in <cit.> for interdiction games fulfilling a certain monotonicity assumption. We propose such a reformulation and develop a branch-and-cut algorithm to solve it exactly. In addition, we develop a genetic algorithm in which the fitness value of an individual is estimated heuristically using a solution pool. We conduct a computational study by creating OIG instances from a set of TSP instances from the literature. The results show that the performance of the branch-and-cut method can be drastically improved by means of the proposed enhancement strategies. The genetic algorithm yields solutions that are similar to the B&C solutions in terms of the objective function value, in reasonable time. There are various avenues for further work: It could be interesting to design other exact solution algorithms which do not use a reformulation based on interdiction cuts. Moreover, the development of other heuristic algorithms could also be a fruitful direction for further work. In particular, the fact that obtaining the objective function value of any feasible solution requires the solution of the NP-hard orienteering problem presents an intriguing challenge in this context. Moreover, the OIG could be extended by adding (topological) constraints to the leader problem, e.g., it could be imposed that the leader also needs to solve an orienteering problem, and the nodes visited by the leader in her or his tour are the nodes which are then interdicted for the follower.
Finally, it could also be interesting to consider other orienteering problems, such as the team orienteering problem, the orienteering problem with time windows, and the multi-period orienteering problem, in a similar game-theoretic setting. Acknowledgments E. Álvarez-Miranda acknowledges the support of the National Agency of Research and Development (ANID), Chile, through the grant FONDECYT N.1180670 and through the Complex Engineering Systems Institute ANID PIA/BASAL AFB180003.
http://arxiv.org/abs/2407.02640v1
20240702201327
Subpath-Based Column Generation for the Electric Routing-Scheduling Problem
[ "Alexandre Jacquillat", "Sean Lo" ]
math.OC
[ "math.OC", "90C39 (Primary) 90C11, 90B06 (Secondary)" ]
Subpath-Based Column Generation for the Electric Routing-Scheduling Problem Alexandre Jacquillat and Sean Lo Operations Research Center and Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA. Motivated by widespread electrification targets, this paper studies an electric routing-scheduling problem (ERSP) that jointly optimizes routing-scheduling and charging decisions. The ERSP is formulated as a semi-infinite set-partitioning model, where continuous charging decisions result in infinitely-many path-based variables. To solve it, we develop a column generation algorithm with a bi-level label-setting algorithm to decompose the pricing problem into (i) a first-level procedure to generate subpaths between charging stations, and (ii) a second-level procedure to combine subpaths into paths. We formalize subpath-based domination properties to establish the finite convergence and exactness of the column generation algorithm. We prove that the methodology can handle modeling extensions with heterogeneous charging costs (via dynamic re-optimization of charging decisions) and algorithm extensions to tighten the relaxation using ng-routes and limited-memory subset-row inequalities (via augmented domination criteria). Computational results show that the methodology scales to large instances, outperforming state-of-the-art column generation algorithms. From a practical standpoint, the methodology achieves significant cost reductions by jointly optimizing routing-scheduling and charging decisions and by capturing heterogeneous charging costs. Keywords: vehicle routing, scheduling, sustainable operations, column generation, dynamic programming § INTRODUCTION The climate change mitigation targets set by the <cit.> call for widespread electrification of the economy. The share of electricity in energy use is projected to rise from 20% to nearly 30% by 2030 due to the deployment of technologies such as electric vehicles, industrial robots and heat pumps <cit.>.
From a business perspective, electrification can mitigate the reliance on high-cost energy sources, but added acquisition costs and reduced asset utilization due to charging requirements can also hinder adoption—especially in low-margin industries. Thus, large-scale electrification requires dedicated analytics and optimization tools to efficiently and reliably deploy electrified technologies into operating systems and processes. As part of this overarching challenge, this paper studies an electric routing-scheduling problem (ERSP) to manage a fleet of electrified machines that consume battery while performing tasks and can recharge in-between. The ERSP jointly optimizes routing-scheduling decisions (i.e., the sequence of tasks for each machine) and charging decisions (i.e., where, when, and for how long to charge). We consider a general modeling framework that can capture spatially distributed operations, heterogeneous setup and switching costs, heterogeneous charging costs, and non-linear battery consumption. This framework includes the following motivating examples: [Logistics] Transportation and logistics are responsible for 25–30% of greenhouse gas emissions. Electric powertrains in medium- and heavy-duty trucking represent important near-term decarbonization opportunities <cit.>. The ERSP encapsulates the electric vehicle routing problem <cit.>, but also augments the literature by capturing heterogeneous charging costs—an important feature in practice <cit.>. [UAV] Unmanned aerial vehicles (UAV) have unlocked new applications in agriculture, defense, wildfire suppression, humanitarian logistics, etc. <cit.>. The ERSP optimizes the management of an electrified UAV fleet in mission-critical environments. [Robotics] Robotic process automation is transforming working activities, for instance in building security, manufacturing, and industrial cleaning <cit.>. Again, the ERSP can be used to support task assignment in electrified robotic operations. Across these applications, the ERSP combines a routing-scheduling layer and a charging layer. Routing-scheduling decisions aim to minimize operating costs subject to completion requirements; for instance, they can capture travel costs in spatially distributed routing environments, as well as setup and switching costs in machine scheduling environments. Charging decisions aim to minimize charging costs subject to battery requirements, with flexibility regarding when, where, and for how long to charge. For instance, consider a machine with a battery of 100 units, performing 10 tasks consuming 25 units each; Figures <ref> shows three feasible sequences of when and by how much to recharge, for the same sequence of tasks. Altogether, the ERSP exhibits a challenging optimization structure coupling discrete routing-scheduling dynamics with continuous charging dynamics. We formulate the ERSP via a set-partitioning model. The model assigns each machine to a path, which encapsulates a sequence of tasks and charging decisions. In traditional routing-scheduling problems, this formulation leads to an exponential number of path-based variables, and is therefore often solved via column generation. In the ERSP, the continuous charging actions lead to an infinite number of path-based variables. 
The first and third examples in Figure <ref> show solutions with the same routing-scheduling and charging sequence: one charges the machine with 100 units after four tasks and 50 units after six tasks; and other one charges it with 50 units after four tasks and 100 units after six tasks. In fact, infinitely-many combinations exist in-between to maintain a non-negative battery level throughout, such as the second example in Figure <ref>. This problem, in turn, creates a semi-infinite integer optimization structure—a challenging class of problems for which traditional column generation algorithms do not guarantee exactness and finite convergence. The main contribution of this paper is to develop an exact, finite and scalable column generation algorithm that yields provably high-quality ERSP solutions in manageable computational times. Column generation iterates between a master problem that generates a feasible solution based on a subset of plan-based variables, and a pricing problem that identifies new variables with negative reduced cost or proves that none exists. In the ERSP, the pricing problem seeks a sequence of tasks and charging decisions, which is an NP-hard elementary resource-constrained shortest path problem <cit.>. It is typically modelled as a large dynamic program, and solved via label-setting algorithms with dedicated resources handling the continuous charging decisions (see Section <ref>). Instead, we develop a bi-level label-setting algorithm that first generates subpaths, defined as sequences of routing-scheduling decisions between charging actions, and that combines subpaths into paths by optimizing charging decisions in-between. By decomposing the pricing problem into smaller dynamic programs, we separate discrete routing-scheduling dynamics from continuous charging dynamics. As we shall establish, this approach improves the scalability of the algorithm, and provides greater flexibility in modeling heterogeneous charging costs. Specifically, the methodology relies on three main components to decompose the pricing problem: * A bi-level label-setting algorithm: We propose a bi-level decomposition that first extends subpaths along edges between charging stations, and that extends sequences of subpaths into paths while optimizing charging decisions in-between. The algorithm relies on two novel elements: (i) dedicated subpath-based domination properties to prune dominated solutions throughout the algorithm; and (ii) a dynamic rebalancing procedure and dedicated domination criteria to handle heterogeneous charging costs. We prove that this algorithm returns path-based variables of negative reduced cost or guarantees that none exists. * A finite and exact decomposition: We prove that the column generation algorithm, armed with the bi-level label-setting algorithm for the pricing problem, yields an optimal relaxation solution in a finite number of iterations, despite the semi-infinite optimization structure of the ERSP. This result is enabled by the separation of routing-scheduling and charging decisions in the bi-level label-setting procedure. * Tighter relaxations: We leverage adaptive ng-relaxations to eliminate non-elementary paths that visit a customer multiple times <cit.> and limited-memory subset-row inequalities (lm-SRIs) to eliminate fractional solutions <cit.>. Both methods rely on “local memory” that complicate domination patterns when extending subpaths into paths. 
In response, we augment our bi-level label-setting algorithm with dedicated forward and backward domination criteria. We prove that the algorithm satisfies our domination properties, and therefore that the column generation methodology returns tighter ERSP relaxations with the same guarantees of exactness and finite convergence. Through extensive computational experiments, this paper demonstrates the scalability of the optimization methodology to otherwise-intractable ERSP instances. We find that bi-level label-setting algorithm provides 50%–90% speedups against the path-based benchmark from <cit.>. These improvements are most pronounced in regimes where machines need to perform many tasks but need to be recharged several times in between (i.e., each subpath spans several tasks and each path combines several subpaths). Furthermore, the augmented algorithm with adaptive ng-relaxations and lm-SRI cuts return much stronger relaxation bounds in manageable computational times. Thus, the algorithm scales to instances with up to 40 tasks and 10 charging stations, with integrality gaps around 1-3%. From a practical standpoint, the methodology can result in significant benefits by jointly optimizing routing-scheduling and charging decisions—with up to 8% cost reduction against business-as-usual operations—and by capturing heterogeneous charging costs—with a 5–20% improvement against existing methods based on homogeneous charging costs. Ultimately, the methodology developed in this paper outperforms state-of-the-art approaches for electrified routing-scheduling optimization, and provides the first solution approach to handle heterogeneous charging costs. As such, this paper can contribute to more sustainable operations across industrial domains by easing barriers to adoption toward large-scale electrification. § LITERATURE REVIEW This paper contributes to the literature on electrified transportation and logistics. One body of work deals with the strategic problem of locating charging stations based on users' routing choices <cit.>, traffic congestion <cit.>, car-sharing <cit.>, interactions with electricity markets <cit.>, and battery swapping <cit.>. <cit.> considered the similar problem of locating refuelling stations for hydrogen vehicles. Another branch optimizes routing operations for a single vehicle, given the availability of charging stations <cit.>, speed-dependent operations <cit.>, or queuing at capacitated charging stations <cit.>. In-between, our paper falls into the literature on multi-vehicle electrified routing operations. Within the vehicle routing literature, canonical problems include routing with time windows <cit.> and capacitated vehicles <cit.>. Both link discrete routing decisions and continuous timing/load decisions, but the continuous dynamics are fully determined by discrete routing decisions. In contrast, the electric vehicle routing problem (EVRP) features an extra degree of freedom to determine where, when and for how long to charge each vehicle (see Figure <ref>). <cit.> solved the EVRP using clustering-based heuristics. <cit.> considered the EVRP with time windows, under the restriction that all vehicles charge to full. Heuristics were developed for EVRP variants with speed-dependent battery consumption and nonlinear charging functions <cit.>. Other models included capacitated charging stations <cit.>, public transit <cit.>, and dial-a-ride <cit.>. Exact methodologies for the EVRP rely on set-partitioning formulations along with column generation algorithms. 
To generate path-based variables, the pricing problem features an elementary resource-constrained shortest-path structure, and is typically solved by label-setting algorithms with dedicated domination criteria to encode charging decisions. For instance, <cit.> proposed labels for an EVRP variant with time windows; <cit.> used labels to model the effective range of vehicles under battery-swapping operations; and <cit.> used labels modeling vehicles' state of charge between customer visits. Our problem differs from these studies in two ways. First, motivated by long-range electrified logistics operations and other electrified applications, we do not impose time windows. This setting limits the extent of pruning in the label-setting algorithms from <cit.> and <cit.>. Second, we incorporate charging costs into the model, and this paper provides the first exact methodology for electric routing with heterogeneous charging costs. These distinctions motivate our bi-level label-setting algorithm to decompose the overall (path-based) pricing problem into smaller (subpath-based) components. The main decomposition method in label-setting algorithms relies on bi-directional schemes that extend paths forward (from the source) and backward (from the sink) until they meet "in the middle" <cit.>. In contrast, our first-level procedure generates subpaths independently, and our second-level procedure combines them into paths. In particular, we formalize new subpath-based domination properties to guarantee exactness and finite convergence, and we propose new domination criteria to handle heterogeneous costs, ng-relaxations, and lm-SRI cuts. Finally, the subpath-based decomposition relates to subpath-based extended formulations in combinatorial optimization. In pickup-and-delivery or dial-a-ride, <cit.> and <cit.> optimized over subpaths encapsulating sequences of pickups and dropoffs from a point where the vehicle is empty to the next one; <cit.> optimized over subpaths encapsulating sequences of consecutive pickups or consecutive dropoffs. Recent papers applied column generation to generate subpath-based variables dynamically <cit.>. In contrast, our methodology still relies on a path-based formulation but further decomposes the pricing problem into subpaths. In other words, rather than generating subpaths on a subpath-based formulation, our approach generates subpaths on a path-based formulation. This new column generation structure requires an extra step to combine subpaths into full paths, leading to our bi-level label-setting algorithm. § THE ELECTRIC ROUTING-SCHEDULING PROBLEM (ERSP) §.§ Problem Statement and Formulation We consider a fleet of K electric machines that consume battery while performing tasks, and can recharge in between. We represent operations in a directed graph (𝒩, 𝒜). Nodes are partitioned into a set of depots 𝒩_D, a set of tasks 𝒩_T, and a set of charging stations 𝒩_R, so that 𝒩 = 𝒩_T ∪ 𝒩_D ∪ 𝒩_R. Each machine starts in a depot in 𝒩_D with full charge, performs tasks in 𝒩_T, recharges in charging stations in 𝒩_R, and ends in a depot. We impose a minimum number of machines v^end_j ending in each depot j∈𝒩_D. Each arc (i, j) ∈ 𝒜 involves a time t(i,j)>0, a cost c(i,j)>0, and a battery utilization b(i,j)>0, all of which satisfy the triangular inequality.
The ERSP seeks a schedule for each machine to minimize operating costs, comprising traveling and charging costs, while ensuring that all tasks get performed within a planning horizon T. We make the following assumptions: – All machines are homogeneous, with the same battery capacity B, the same travel costs, the same charging dynamics and charging costs, and the same battery depletion dynamics. – Battery charging dynamics are linear. The charging cost per unit of time is denoted by δ(i)>0 at charging station i∈_R. Through appropriate scaling, a charging time τ increases the state of charge by τ at a cost δ(i) ·τ. In contrast, battery depletion patterns can be non-linear. – Charging stations are uncapacitated. Importantly, our model can capture heterogeneous charging costs, by letting δ(i) vary across charging stations i∈_R. In the logistics example, charging costs vary based on the location of the charging station, its ownership structure, and electricity grid operations <cit.>. As we shall see, heterogeneous charging costs impose significant complexities to the problem, so we define two variants with homogeneous and heterogeneous charging costs, referred to as ERSP-Hom and ERSP-Het respectively. We refer to ERSP for all arguments that apply to both. The core complexity of the ERSP is to maintain appropriate charge to power all tasks. This could be achieved in integer optimization by linking binary routing variables with continuous charge variables via “big-M” coupling constraints. However, such formulations induce weak linear relaxations, hindering the scalability of branch-and-cut algorithms. Instead, we define a path-based ESRP formulation using Dantzig-Wolfe decomposition principles. Definition <ref> formalizes a path as a feasible combination of routing-scheduling and charging decisions for a machine. A path p is defined by: [(i)] * a node sequence U(p)={n_0, n_1, n_2, …, n_m} such that (n_0, n_1), (n_1, n_2), …, (n_m-1, n_m) ∈, n_0 ∈_D, n_1, …, n_m-1∈_T ∪_R, and n_m ∈_D; and * a sequence of charging times C(p) = τ_k ≥ 0 | k ∈{1,⋯,m-1}, n_k ∈_R. The parameter ip captures the number of times task i∈_T is performed on path p: ip = | k ∈0, …, m | n_k = i |. For k=0,⋯,m, the path p reaches node n_k at time t_k and charge b_k, defined recursively as follows: t_0=0 and, for all k∈{1,⋯,m}: t_k = t_k-1 + τ_k-1 + t(n_k-1, n_k) if n_k-1∈_R t_k-1 + t(n_k-1, n_k) otherwise. b_0 = B and, for all k∈{1,⋯,m}: b_k = min{ b_k-1 + τ_k-1, B } - b(n_k-1, n_k) if n_k-1∈_R b_k-1 - b(n_k-1, n_k) otherwise. Path p is feasible if t_k ∈ [0, T] and b_k ∈ [0, B] for k=1,⋯,m. Its starting and ending node-time-charge triples are (p, p, p) = (n_0, 0, B) and (p, p, p) = (n_m, t_m, b_m). Its cost is: c^p = ∑_ℓ=0^m-1( c(n_ℓ, n_ℓ+1) + n_ℓ∈_R·δ(n_ℓ) ·τ_ℓ) We define an integer decision variable z^p tracking the number of machines assigned to path p∈. The ERSP minimizes costs (Equation (<ref>)) while enforcing machines' starting and ending locations (Equations (<ref>) and (<ref>)) and task requirements (Equation (<ref>)). We refer to it as (), to its optimum as (), to its linear relaxation as (), and to its linear bound as (). min ∑_p ∈ c^p z^p ∑_p ∈p=j z^p = v^start_j ∀ j ∈_D ∑_p ∈p=j z^p ≥ v^end_j ∀ j ∈_D ∑_p ∈ip z^p = 1 ∀ i ∈_T z^p ∈_+, ∀ p ∈; p ∈ | z^p > 0 finite Note that there exist an infinite number of candidate paths due to the combination of discrete routing-scheduling decisions and continuous charging decisions. 
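To make the recursion in Definition <ref> concrete, the sketch below evaluates the arrival times, charge levels, cost, and feasibility of a candidate path. It is only an illustration of the definitions above, not the paper's implementation; all names (PathEval, evaluate_path, tau, is_station) are ours, and arc data are passed as user-supplied functions.

```julia
# Illustrative sketch: evaluate a candidate path per the recursion in the definition
# above. `nodes` is the node sequence (1-based); `tau` is a Dict mapping the position
# of a charging station in the sequence to its charging time; `t`, `b`, `c`, `delta`,
# `is_station` are user-supplied functions; `B` is the battery capacity, `T` the horizon.
struct PathEval
    times::Vector{Float64}     # t_k: arrival time at each node
    charges::Vector{Float64}   # b_k: state of charge at each node
    cost::Float64              # traveling cost plus charging cost
    feasible::Bool
end

function evaluate_path(nodes, tau, t, b, c, delta, is_station, B, T)
    times = zeros(length(nodes))
    charges = fill(float(B), length(nodes))   # machines start with full charge
    cost = 0.0
    feasible = true
    for k in 2:length(nodes)
        i, j = nodes[k-1], nodes[k]
        if is_station(i)
            τ = get(tau, k - 1, 0.0)                        # charging time spent at node i
            times[k] = times[k-1] + τ + t(i, j)
            charges[k] = min(charges[k-1] + τ, B) - b(i, j)
            cost += delta(i) * τ                            # charging cost
        else
            times[k] = times[k-1] + t(i, j)
            charges[k] = charges[k-1] - b(i, j)
        end
        cost += c(i, j)                                     # traveling cost
        feasible &= (0.0 <= times[k] <= T) && (0.0 <= charges[k] <= B)
    end
    return PathEval(times, charges, cost, feasible)
end
```

The charging times in tau are continuous decisions, which is precisely what makes the set of candidate paths infinite.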
Thus, the ERSP formulation exhibits a semi-infinite integer optimization structure—a notoriously challenging class of problems. The formulation restricts the solution to a finite support for the integer variables {z^p| p∈} to ensure that () remains well-defined <cit.>. Per Equation (<ref>), each task needs to be performed exactly once. Due to the triangular inequality, the formulation can be restricted to elementary paths, formalized in Definition <ref>. Proposition <ref> shows that this restriction does not alter the integer optimization formulation but tightens its relaxation. This observation will carry great importance in our methodology. A path p∈ is elementary if ip≤ 1 for all tasks i∈_T. We store all feasible paths in and all elementary paths in ⊆. For any path set with ⊆⊆, the following holds: () ≤() ≤() ≤() = () = () §.§ Roadmap Toward an Exact and Finite Column Generation Algorithm To solve the () relaxation, column generation iterates between a master problem that generates a feasible solution based on a subset of path-based variables (stored in _ℓ at iteration ℓ), and a pricing problem that generates a set of variables with negative reduced cost or proves that none exists (Algorithm <ref>). For any path p∈, the reduced cost of variable z^p is p := c^p - ∑_j ∈_Dp=jκ_j - ∑_j ∈_Dp=jμ_j - ∑_i ∈_Tipν_i, where , , and denote the dual variables associated with Equations (<ref>), (<ref>) and (<ref>), respectively. As mentioned earlier, a generic column generation scheme faces three complexities in the ERSP, which will lead to the three main contributions of our methodology: * Pricing problem: Column generation hinges on an efficient pricing algorithm (Step 2). We propose a bi-level label-setting algorithm that (i) generates subpaths capturing task sequences between charging decisions and (ii) combines subpaths into full paths (Sections <ref> and <ref>). * Finite convergence and exactness of Algorithm <ref>: In traditional problems with finitely many variables, column generation is guaranteed to terminate in a finite number of iterations and to return the optimal relaxation solution. Due to the semi-infinite structure of the ERSP, however, column generation is not guaranteed to terminate finitely; moreover, upon termination, the solution is not guaranteed to be optimal if the formulation does not satisfy strong duality. We establish the finite convergence and exactness of the algorithm in Section <ref>. * Relaxation strength: We show that adaptive ng-routes <cit.> and lm-SRI cuts <cit.> can be accommodated in our two-level label-setting algorithm via dedicated forward and backward domination criteria. Both of these extensions contribute to tightening the relaxation of the ERSP. Upon termination, our algorithm returns an optimal solution of the () relaxation; we then retrieve a feasible solution to () by restoring integrality in the master problem. In case this approach does not generate an optimal integral () solution, the algorithm can be embedded into a branch-and-price-and-cut scheme <cit.>. Notably, <cit.> branches on the number of paths, the number of charging actions, the number of stops at each charging station, and arc flows. All of these branching criteria can be handled in our framework by adding inequalities or removing arcs. Nonetheless, our computational results yield provably high-quality solutions upon termination, so we do not implement branch-and-price in this paper. 
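The overall scheme can be summarized by the following skeleton. This is a minimal sketch, not the paper's code: it assumes user-supplied routines solve_master (returning the restricted master solution together with its dual values) and price (returning paths of negative reduced cost, or an empty collection if none exists), and all names are illustrative.

```julia
# Schematic column generation loop. `solve_master` solves the restricted master LP over
# the current path set and exposes its duals; `price` solves the pricing problem, e.g.,
# via the bi-level label-setting procedure developed in the next section.
function column_generation(initial_paths, solve_master, price; max_iter = 10_000)
    paths = copy(initial_paths)
    for _ in 1:max_iter
        master = solve_master(paths)             # Step 1: restricted master problem
        new_paths = price(master.duals)          # Step 2: pricing problem
        isempty(new_paths) && return (paths = paths, master = master)  # no column with negative reduced cost
        append!(paths, new_paths)                # Step 3: add the new columns and iterate
    end
    error("column generation did not terminate within $max_iter iterations")
end
```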
§ A FINITELY-CONVERGENT COLUMN GENERATION ALGORITHM FOR THE ERSP The pricing problem features an elementary resource-constrained shortest path structure. For ERSP-Hom, it can be solved via a label-setting algorithm <cit.>. This approach is described in <ref> and will serve as a benchmark in this paper. However, path-based label-setting becomes intensive as paths become longer, and cannot readily handle heterogeneous charging costs in ERSP-Het. Our bi-level label-setting algorithm decomposes the pricing problem into subpaths (Section <ref>) and combines subpaths into paths (Section <ref>), as illustrated in Figure <ref>. We prove the exactness and finiteness of the overall column generation algorithm in Section <ref>. §.§ First-level Procedure: Generating Subpaths Definition <ref> introduces a subpath from a non-task node (depot or charging station) to another. A subpath s is defined by a node sequence U(s) = {n_0, n_1, ⋯, n_m}, such that (n_0, n_1), ⋯, (n_m-1, n_m) ∈, with starting node s=n_0 ∈_D ∪_R, intermediate nodes n_1, ⋯, n_m-1∈_T, and ending node s=n_m ∈_D ∪_R. The parameter is captures the number of times task i ∈_T is visited by the node sequence U(s): is = | k ∈0, …, m | n_k = i |. We define the elapsed time t^s, battery depletion b^s and cost c^s by: t^s = ∑_l=0^m-1 t(n_l, n_ℓ+1), b^s = ∑_l=0^m-1 b(n_l, n_ℓ+1), and c^s = ∑_l=0^m-1 c(n_ℓ, n_ℓ+1). Subpath s is feasible if t^s ∈ [0, T] and b^s ∈ [0, B], and elementary if is≤ 1 for all i∈_T. We store all feasible subpaths in and all elementary subpaths in ⊆. One difference between subpaths and paths is that a subpath can start and end at a charging stations, and must only visit task nodes in between. Another difference is that subpaths do not encapsulate charging decisions. Thus, subpath decomposition decouples routing-scheduling vs. charging decisions. The set of subpaths is therefore finite, in contrast with the infinitely-sized set of paths. Nonetheless, there exist an infinite number of possible charging decisions between subpaths, hence an infinite number of possible combinations of subpaths into full paths. The first-level dynamic programming procedure generates non-dominated subpaths, using standard label-setting arguments to optimize routing-scheduling decisions between a starting node and an ending node. This procedure extends partial subpaths along arcs until a depot or a charging station is reached. A partial subpath (resp. partial path) is defined similarly to a subpath (resp. path) except that the condition s∈_D ∪_R (resp. p∈_D) is relaxed. We denote by ^∘ and ^∘ the set of feasible partial subpaths from and of feasible partial paths from . For example, ^∘ stores all feasible partial subpaths and ^∘ stores all elementary feasible partial subpaths. Consider a feasible partial subpath s∈^∘ with node sequence {n_0, ⋯, n_m} such that s=n_m ∉_D ∪_R. For any arc a=(s,) ∈, we denote by s ⊕ a the extended partial subpath defined by the node sequence {n_0, ⋯, n_m, }. The extension is feasible if t^s + t(n_m,) ≤ T and b^s+b(n_m,) ≤ B. Given dual variables , , and , the reduced cost contribution s of a partial subpath s visiting n_0, ⋯, n_m is defined as: s = ∑_l=0^m-1( c(n_ℓ, n_ℓ+1) - n_ℓ+1∈_Tν_n_ℓ+1) - n_0 ∈_Dκ_n_0 - n_m ∈_Dμ_n_m Note that the reduced cost contributions, defined for partial subpaths, do not coincide with the reduced costs of decision variables, defined for paths. 
Rather, the reduced cost of a path is decomposable into the reduced cost contributions of its constituent subpaths plus the charging costs between subpaths (see Lemma <ref> later on). Importantly, the reduced cost contribution is decomposable across arcs, which will enable to generate subpaths via dynamic programming. We eliminate partial subpaths that cannot be part of a path of minimum reduced cost by applying domination criteria (Definition <ref>). Property <ref> specifies an important property that needs to be satisfied by the domination criteria—namely, that domination patterns must propagate along arc extensions. For completeness, we also provide in Property <ref> technical criteria that are necessary to ensure termination and exactness. Proposition <ref> provides domination and non-domination criteria for the ERSP that satisfy these domination and termination properties. Let (·, ·, ) define vectors of domination and non-domination criteria with respect to set . Partial subpath s_1 dominates s_2, written s_1 s_2 if s_1 = s_2 and s_1≤s_2 component-wise. Partial subpath s is non-dominated if no partial subpath s' ∈^∘ satisfies s' s. Let store the set of non-dominated subpaths, and store the set of non-dominated partial subpaths out of all subpaths in . [Domination criteria for subpaths] (·, ·, ) must satisfy: For feasible partial subpaths s_1,s_2∈^∘ such that s_1 s_2, and an extension a∈ of s_1 and s_2, either (a) s_2 ⊕ a∉^∘, or (b) s_1 ⊕ a∈^∘, s_2 ⊕ a∈^∘, and s_1 ⊕ a s_2 ⊕ a. The following criteria satisfy Properties <ref> and <ref>: For =: s = (s, s), s = (s, t^s, b^s), For =: s = (s, s), s = (s, t^s, b^s, {is}_i ∈_T). An arc extension a = (s, ) of subpath s yields the following updates: s ⊕ a = (s, ) s ⊕ a = s + c(s, ) - ∈_Tν_ - ∈_Dμ_ t^s ⊕ a = t^s + t(s, ) b^s ⊕ a = b^s + b(s, ) is ⊕ a = is + = i, ∀ i ∈_T (for =) Without elementarity, the algorithm maintains three domination criteria: reduced cost, time, and battery consumption. Thus, a subpath is dominated if another one ends in the same node earlier, using less charge, and contributing a smaller reduced cost. Elementarity requirements impose an extra label per task, which severely hinders tractability. This section proposes a two-level label-setting algorithm that can generate non-dominated paths in (with three-dimensional labels) or in (with high-dimensional labels); we address elementarity requirements in Section <ref>. Algorithm <ref> presents the first-level label-setting procedure. Starting at any non-task node (depot or charging station), it extends partial subpaths along arcs while ensuring feasibility and pruning all dominated partial subpaths, until reaching a depot or a charging station. Throughout, it maintains a set of non-dominated partial subpaths and a queue of partial subpaths. It is parametrized by the domination and non-domination criteria and the set of feasible subpaths. In particular, elementarity can be imposed by setting = or relaxed by setting =. Note that, despite the infinite set of paths, any partial subpath has finitely many extensions, and FindNonDominatedSubpaths converges finitely (this will be proved in Section <ref>). §.§ Second-level Procedure: Combining Subpaths into Paths §.§.§ Preliminaries. The second-level procedure optimizes routing-scheduling decisions by extending subpath sequences along subpaths, and optimizes charging decisions between subpaths. Throughout, it also applies domination criteria to eliminate dominated subpath sequences. 
A subpath sequence σ = {s_1, …, s_m} satisfies s_1, …, s_m ∈, σ = s_1∈_D, s_i = s_i+1∈_R for i ∈{1,⋯,m-1}, and σ = s_m∈_D ∪_R. It is feasible if there exists a feasible partial path p ∈^∘ with subpath sequence σ; and it is complete if σ∈_D. Let (resp. ) store feasible (resp. feasible complete) subpath sequences. Let ^∘(σ) ⊆^∘ (resp. (σ)⊆) store feasible partial paths (resp. feasible paths) with subpath sequence σ. By construction, all partial paths sharing a subpath sequence differ only in charging times. Lemma <ref> proves that the reduced cost of a path is decomposable into the reduced cost contribution of its subpath sequence and the charging costs between subpaths. A feasible partial path p ∈^∘ with subpath sequence σ = {s_1, …, s_m} and charging times {τ_1, …, τ_m-1} has reduced cost contribution p := ∑_i=1^m s_i + ∑_i=1^m-1τ_i·δ(s_i). The reduced cost of a path p ∈ is equal to its reduced cost contribution: p = p. Accordingly, the second-level procedure can be decomposed into routing-scheduling and charging decisions. The routing-scheduling goal is to generate subpath sequences with minimal reduced cost contribution, via a label-setting algorithm that extends subpath sequences along subpaths: For a feasible subpath sequence σ∈, s ∈^∘ is a subpath extension if σ = s. We denote by σ⊕ s the extended subpath sequence. The second goal is to set charging times between subpaths. For any subpath sequence, we keep track of the minimal partial path that minimizes the reduced cost contribution (Definition <ref>). Per Lemma <ref>, it is sufficient to keep track of all minimal partial paths, rather than all partial paths. For a feasible subpath sequence σ∈, σ∈^∘ denotes a feasible partial path with subpath sequence σ of minimum reduced cost contribution: σ∈p| p ∈^∘(σ) Let (·, ·, ) define vectors of domination and non-domination criteria with respect to set . Partial path p_1 dominates p_2, written p_1 p_2 if p_1 = p_2 and p_1≤p_2 component-wise. Partial path p is non-dominated if no partial path p' ∈^∘ satisfies p' p. Let () store non-dominated paths (partial paths). Thus, we define domination criteria for partial paths (Definition <ref>) and characterize domination patterns across subpath sequences in terms of their minimal partial paths. We denote by the set of non-dominated subpath sequences. The challenge in the charging step is to compute σ⊕ s as a function of σ for any extension of σ∈^∘. This is simple for ERSP-Hom, so we first focus on the routing-scheduling decisions for ERSP-Hom. We then address the more difficult charging decisions for ERSP-Het. §.§.§ Routing-scheduling decisions (ERSP-Hom). Property <ref> formalizes two properties that need to be satisfied by domination criteria for subpath sequences. Property <ref><ref> is analogous to Property <ref>, in that domination must propagate along subpath extensions. Property <ref><ref> arises from the fact that, in our second-level procedure, any subpath sequence can be extended through multiple subpaths ending in the same node. This contrasts with traditional label-setting procedure, where one arc connects a partial path to another node. Thus, Property <ref><ref> ensures that domination patterns also propagate backward along subpath extensions. Again, Property <ref> provides necessary termination criteria. Proposition <ref> identifies the domination and non-domination criteria used for the ERSP-Hom that satisfy these properties, and will be used in this paper. 
[Domination criteria for subpath sequences] The criteria for subpaths (·, ·,) and the criteria for subpath sequences (·, ·, ) must satisfy: * For feasible subpath sequences σ_1, σ_2 ∈ such that σ_1 σ_2, and a subpath s ∈ extending σ_1 and σ_2, either (a) σ_2 ⊕ s ∉, or (b) σ_1 ⊕ s ∈, σ_2 ⊕ s ∈, and σ_1 ⊕ s σ_2 ⊕ s. * For feasible subpaths s_1, s_2 ∈^∘ such that s_1 s_2, and a subpath sequence σ∈ extended by s_1 and s_2, either (a) σ⊕ s_2 ∉, or (b) σ⊕ s_1 ∈, σ⊕ s_2 ∈, and σ⊕ s_1 σ⊕ s_2. Together with the criteria for subpaths given in Proposition <ref>, the following criteria for subpath sequences satisfy Properties <ref> and <ref> for ERSP-Hom: For =: σ = (σ, σ), σ = ( σ, σ, -σ ), For =: σ = (σ, σ), σ = ( σ, σ, -σ, {iσ}_i ∈_T ). Let τ=b^s - σ. A subpath extension s of σ yields the following updates for ERSP-Hom: σ⊕ s = (σ, s) σ⊕ s = σ + δ·τ + s σ⊕ s = σ + τ + t^s - σ⊕ s = - σ - τ + b^s iσ⊕ s = iσ + is ∀ i ∈_T (for =) Without elementarity, the algorithm maintains three domination criteria—reduced cost, time, and the opposite of battery consumption. Thus, a subpath sequence is dominated if another one terminates in the same node earlier, adding more charge between subpaths, and contributing a smaller reduced cost. Note the difference in sign in the third term between the domination criteria for subpaths (Proposition <ref>) and subpath sequences (Proposition <ref>). This reflects that subpaths are stronger when they use less charge whereas subpath sequences are stronger when they add more charge between subpaths. Again, elementarity requires an extra label per task. <cit.> use a path-based label-setting algorithm using the criteria p = (p, p, p - p). This domination criteria is stronger than the one in Proposition <ref>, and is valid due to the absence of charging costs in their model. However, our criteria remain valid in the presence of charging costs, both in the ERSP-Hom and in the ERSP-Het. Algorithm <ref> presents the second-level label-setting procedure for ERSP-Hom. It takes as inputs the set of non-dominated subpaths (from Algorithm <ref>), along with the domination criteria · and · and the set of feasible subpath sequences . It maintains non-dominated subpath sequences in and a queue of subpath sequences in ; and it returns the set of non-dominated subpath sequences between each pair of depots. Upon termination, we translate all non-dominated complete subpath sequences into corresponding non-dominated minimal paths. Whereas Algorithm <ref> dealt with finitely many subpaths, Algorithm <ref> deals with infinitely many partial paths. The key idea underlying the algorithm is to evaluate an infinite number of partial paths via a finite number of subpath sequences. This is enabled by Lemma <ref> and Proposition <ref>, which reduce all partial paths associated with the same subpath sequence to the corresponding minimal partial path. In ERSP-Hom, σ⊕ s can be easily computed as a function of σ by merely adding required charging time prior to subpath s. In turn, any extension of a non-dominated subpath sequence remains non-dominated (Property <ref><ref>) and, as we shall see, Algorithm <ref> can then return all non-dominated paths. We now turn to the more difficult case of ERSP-Het. §.§.§ ERSP-Het. Let D ≤ | _R | be the number of coefficients out of δ(i) | i ∈_R, sorted as 0 < δ_1 < ⋯ < δ_D. Unlike in ERSP-Hom, the path that minimizes charging time may no longer minimize charging costs. 
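To illustrate the contrast, the following sketch implements the ERSP-Hom extension rule above: between subpaths, the minimal partial path charges just enough, at the single unit cost δ, to cover the next subpath's battery depletion. Names (SequenceLabel, extend_hom, dominates_hom) are ours, and s denotes a first-level subpath label with fields rcost, time, battery, start_node, end_node as in the earlier sketch.

```julia
# Illustrative second-level label for ERSP-Hom: the minimal partial path associated
# with a subpath sequence, summarized by its reduced-cost contribution, end time,
# and remaining charge.
struct SequenceLabel
    start_depot::Int
    end_node::Int
    rcost::Float64
    time::Float64
    charge::Float64
end

# Extend a subpath sequence by a subpath `s` starting at its end node, charging just
# enough beforehand; returns `nothing` if the extension is infeasible.
function extend_hom(seq::SequenceLabel, s, δ::Float64, B, T)
    @assert s.start_node == seq.end_node
    τ = max(s.battery - seq.charge, 0.0)             # charge added before traversing s
    new_time = seq.time + τ + s.time
    (new_time <= T && seq.charge + τ <= B) || return nothing
    return SequenceLabel(seq.start_depot, s.end_node,
                         seq.rcost + δ * τ + s.rcost,
                         new_time,
                         seq.charge + τ - s.battery)
end

dominates_hom(a::SequenceLabel, b::SequenceLabel) =
    a.start_depot == b.start_depot && a.end_node == b.end_node &&
    a.rcost <= b.rcost && a.time <= b.time && a.charge >= b.charge   # note the sign flip on charge
```

Under heterogeneous charging costs, this greedy rule is no longer optimal, because it defers charging to later stations regardless of their unit costs, whereas it may be cheaper to charge more at earlier, cheaper stations.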
In response, Proposition <ref> identifies a linear-time dynamic programming algorithm to re-optimize charging decisions in the second-level label-setting procedure, which yields σ⊕ s as a function of σ. Its proof formulates a linear optimization model for finding σ, and shows the optimality of the dynamic programming solution. It then leverages a representation of charging stations in a binary tree sorted by charging costs to "rebalance" the charging times of σ⊕ s (red in Figure <ref>) to cheaper ones in σ⊕ s (blue in Figure <ref>). For any subpath sequence σ∈ and any subpath s∈^∘, σ⊕ s can be computed via dynamic programming from σ in O(D) time and memory (Algorithm <ref>). The algorithm also returns Z_d(σ) for d ∈1,⋯,D-1, defined as the amount of charge that can be added at charging stations with unit costs δ_1, …, δ_d by rebalancing charging decisions. Another difference between ERSP-Hom and ERSP-Het is that the extension of subpath sequences may no longer maintain domination patterns: if σ_1σ_2 but σ_2 has more slack in "cheap" charging stations, then σ_1⊕ s may no longer dominate σ_2⊕ s. To circumvent this challenge, we leverage the outputs Z_1(σ),⋯,Z_D-1(σ) of Algorithm <ref> (Proposition <ref>). Specifically, consider a subpath sequence σ such that σ has a unit charging cost δ_d (e.g., δ_5 in Figure <ref>). Then Z_d(σ) characterizes the cost savings obtained by shifting charging times from σ to earlier ones with a lower unit cost (e.g., δ_1 and δ_3 in Figure <ref>). Proposition <ref> proves that adding -Z_1(σ),⋯,-Z_D-1(σ) to the domination criteria restores the critical property that σ_1 σ_2 implies σ_1 ⊕ s σ_2 ⊕ s (Property <ref><ref>), so that the extension of a non-dominated subpath sequence remains non-dominated. Together with the criteria for subpaths given in Proposition <ref>, the following criteria for subpath sequences satisfy Properties <ref> and <ref> for ERSP-Het: σ = (σ, σ), For =: σ = ( σ, σ, - σ, { -Z_d(σ) }_1,⋯,D-1 ) For =: σ = ( σ, σ, - σ, { -Z_d(σ) }_1,⋯,D-1, {iσ}_i ∈_T ) Let τ=b^s - σ. A subpath extension s of σ yields the following update: σ⊕ s = σ + s + g(τ; Z_1(σ), …, Z_D-1(σ)), where g(τ; Z_1(σ), …, Z_D-1(σ)) denotes the charging costs from rebalancing charging from more expensive charging stations to cheaper ones (Figure <ref>). Thus, any subpath sequence extension adds a routing-scheduling cost s and leads to possible cost savings from charging re-optimization. For completeness, the other updates are reported in <ref>. The proposition also highlights the role of the extended domination criteria, in that subpath sequence σ_1 dominates σ_2 if it terminates in the same node earlier, adding more charge, contributing a smaller reduced cost, and featuring more savings opportunities from charging (i.e., Z_d(σ_1) ≥ Z_d(σ_2) for all d=1,⋯,D-1). Note that heterogeneous charging costs (with D charging cost levels) require D-1 additional labels. In practice, this overhead remains moderate when the number of distinct charging costs remains small (e.g., a few ownership structures and technologies across charging stations). We can also reduce domination comparisons: if the ending node has unit cost δ_f, it is sufficient to check whether Z_d(σ_1) ≥ Z_d(σ_2) for d=1,⋯,f-1. Altogether, our bi-level label-setting procedure yields the first exact optimization approach that can handle electric routing with heterogeneous charging costs. §.§.§ Finiteness and exactness. 
Theorem <ref> establishes the exactness of Algorithms <ref> and <ref> for the pricing problem, which completes the subpath-based decomposition at the core of the methodology. The proof proceeds by showing that any non-dominated subpath sequence can be decomposed into non-dominated subpaths between charging stations, and that the corresponding minimal path yields the path of minimal reduced cost. This result underscores the critical role of the dedicated domination criteria developed in this section (Propositions <ref> and <ref> for the ERSP-Hom and ERSP-Het). Moreover, this section formalizes arguments commonly used in the vehicle routing literature, through Properties <ref>–<ref> and Properties <ref>–<ref>. This rigorous axiomatic approach will guarantee the exactness of several variants of our pricing problem algorithm in Section <ref>. If (·, ·,) and (·, ·, ) satisfy Properties <ref>, <ref>, <ref>, and <ref>, FindNonDominatedSubpaths and FindSubpathSequences terminate finitely and return all minimal paths from non-dominated complete subpath sequences. If the algorithm returns no path of negative reduced cost, then all path-based variables have non-negative reduced cost. Altogether, the two-level label-setting algorithm replaces a large path-based dynamic program with multiple small subpath-based dynamic programs (first level, Algorithm <ref>) and a medium-sized dynamic program (second level, Algorithm <ref>). In Section <ref>, we establish its computational benefits over a path-based benchmark for the ERSP-Hom. §.§ Finite convergence and exactness of the column generation algorithm Armed with the two-level label-setting pricing algorithm, column generation expands the ERSP formulation iteratively by adding paths of negative reduced cost until none exists. Two questions remain: [(i)] * whether this procedure terminates finitely, and * whether it returns the optimal relaxation () upon termination. As opposed to traditional column generation applications, these questions are not immediate in the ERSP due to the infinite set of paths . Theorem <ref> answers both positively, by showing the finite convergence and the exactness of our overall solution scheme (Algorithms <ref>, <ref> and <ref>). Again, the proof proceeds by decomposing the semi-infinite structure of () into discrete routing decisions (dealt with by label-setting in Algorithms <ref>–<ref>) and continuous charging decisions (dealt with by our re-balancing procedure in Proposition <ref>). Specifically, we group the infinitely many paths according to the finite set of subpath sequences. This results in an equivalent formulation which only considers minimal paths—one per subpath sequence—which the column generation algorithm solves exactly in a finite number of iterations. For any path set , ColumnGeneration() terminates finitely with an optimal solution of (), when Step 2. is solved via FindNonDominatedSubpaths and FindNonDominatedPaths and (·, ·, ·, ·, ) satisfy Properties <ref>, <ref>, <ref>, and <ref>. § TIGHTER RELAXATIONS VIA ADAPTIVE NG-RELAXATIONS AND CUTTING PLANES We augment the column generation algorithm from Algorithm <ref> to tighten the ERSP relaxation via adaptive ng-relaxations and limited-memory subset-row inequalities (lm-SRI). For both extensions, we develop dedicated domination criteria in our bi-level label-setting algorithm and prove that the augmented column generation algorithm terminates finitely with tighter relaxations. 
For conciseness, we focus on ERSP-Hom in this section but provide all results for ERSP-Het in <ref>. §.§ Adaptive ng-relaxations for elementarity constraints §.§.§ Adaptive ng-relaxations. Recall that imposing full elementarity in the pricing problem requires one extra label per task; in contrast, considering the full set of paths would lead to a weaker relaxation—notably, the solution can feature many cycles of length two. We leverage adaptive ng-relaxations to solve () over an increasingly small set of paths ⊆⊆ toward deriving a solution of the tightest relaxation () without imposing full elementarity. An ng-neighborhood is a collection of subsets = N_i ⊆ | i ∈ where: (i) i ∈ N_i, ∀ i ∈; (ii) N_i ⊆_T, ∀ i ∈_T; and (iii) N_i ⊆_T ∪{ i }, ∀ i ∈_D ∪_R. A path is ng-feasible with respect to ng-neighborhood if its node sequence satisfies: for every j < k with n_j = n_k, there exists j < ℓ < k with n_j ∉ N_n_ℓ. Let () (resp. ^∘()) store the ng-feasible paths (resp. partial paths) with respect to . Intuitively, ng-feasible paths are "locally elementary", in that task i can only be performed multiple times if a task whose ng-neighborhood does not contain i is performed in between. As long as ng-neighborhoods are large enough, the ng-relaxation eliminates paths with short cycles. In particular, the size of the ng-neighborhood impacts the tightness of the () relaxation (Lemma <ref>): at one extreme, = (^no) with the smallest ng-neighborhoods (N^no_i = {i}, ∀ i∈); at the other extreme, = (^elem) with the largest ng-neighborhoods (N^elem_i = _T ∪{i}, ∀ i∈). Let ^1 and ^2 be two ng-neighborhoods such that N_i^1 ⊆ N_i^2 for all i ∈. Then, (^1) ⊇(^2), and ((^1)) ≤((^2)). We adopt the adaptive ng-relaxation approach from <cit.>, which alternates between solving (()) and expanding to eliminate non-elementary paths (Steps 1–3 of Algorithm <ref>). By design, the ng-neighborhood expansion in Step 3 renders the incumbent path ng-infeasible, thus tightening the relaxation. In turn, the adaptive ng-relaxation yields an optimal solution to () without ever imposing full elementarity in the pricing problem. The key question involves computing ng-feasible paths in the pricing problem. In traditional (path-based) label-setting algorithms, this is done by keeping track of the forward ng-set, defined as the set of nodes that cannot be appended to a path while retaining ng-feasibility; accordingly, a partial path p∈^∘() can be extended along arc a = (p, ) if and only if ∉Π(p) (see Proposition <ref> and <cit.>). This structure retains an edge-based decomposition amenable to dynamic programming. However, standard domination criteria are no longer sufficient to ensure the propagation of domination patterns in our bi-level label-setting algorithm. §.§.§ ng-relaxations in our bi-level label-setting algorithm. We augment our algorithm with three domination criteria for subpaths, formalized in Definition <ref>: (i) forward ng-set Π(s), (ii) backward ng-set Π^-1(s), and (iii) ng-residue. The forward ng-set is defined as the set of nodes that cannot be appended to a subpath while retaining ng-feasibility. Conversely, the backward ng-set is defined as the set of nodes that cannot precede the subpath while retaining ng-feasibility. Both of these notions were introduced by <cit.> in the context of a bi-directional path-based label-setting algorithm. In this paper, we prove that forward and backward ng-sets are necessary to ensure the validity of our (unidirectional) bi-level label-setting algorithm. 
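The forward ng-set logic of traditional path-based label-setting can be illustrated as follows; this is a sketch under our own naming (is_ng_feasible, N), not the paper's code.

```julia
# Check ng-feasibility of a node sequence via the forward ng-set Π: a node may be
# appended only if it is not in Π, and Π is updated as Π ← (Π ∩ N[v]) ∪ {v}.
function is_ng_feasible(nodes::Vector{Int}, N::Dict{Int, Set{Int}})
    forward_ng = Set{Int}()
    for v in nodes
        v in forward_ng && return false            # v is still "remembered": forbidden cycle
        forward_ng = union(intersect(forward_ng, N[v]), Set([v]))
    end
    return true
end
```

The subpath-based resources introduced next extend this logic with a backward ng-set and an ng-residue.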
We also introduce the notion of ng-residue to update the backward ng-set in our forward label-setting procedure. Consider a subpath s with node sequence U(s) = {n_0, ⋯, n_m}. Its forward ng-set, backward ng-set, and ng-residue with respect to ng-neighborhood are defined as: Π(s) = n_r | n_r ∈⋂_ρ = r + 1^m N_n_ρ, r ∈{0, ⋯, m-1}∪{n_m} Π^-1(s) = {n_0}∪ n_r | n_r ∈⋂_ρ = 0^r-1 N_n_ρ, r ∈{1, ⋯, m} Ω(s) = ⋂_ρ = 0^m N_n_ρ As in path-based label-setting, forward ng-sets extend domination forward so that, if s_1 s_2, then s_1 ⊕ a s_2 ⊕ a (Property <ref>); and, if σ_1σ_2, then σ_1 ⊕ sσ_2 ⊕ s (Property <ref><ref>). Backward ng-sets are needed to extend domination backward in our second-level procedure (Algorithm <ref>) so that, if s_1 s_2, then σ⊕ s_1σ⊕ s_2 (Property <ref><ref>). Finally, the ng-residue Ω(·) is required to update Π(σ⊕ s) in terms of Π(s). In contrast, the domination criteria for subpath sequences only make use of forward ng-sets, as in traditional path-based label-setting algorithms. Proposition <ref> proves the validity of these domination criteria for (()) (Proposition <ref> provides the analogous statement for (())). It also shows that these domination criteria make it easy to check ng-feasibility in our bi-level label-setting algorithm. In the first-level procedure, an arc extension of a subpath retains ng-feasibility if and only if the next node is not in the forward ng-set. This condition mirrors the one in traditional label-setting algorithms. In the second-level procedure, a subpath extension of a subpath sequence retains ng-feasibility if and only if the forward ng-set of the subpath sequence and the backward ng-set of the subpath do not have any node in common except the current charging station (see Figure <ref>). In other words, the domination criteria proposed in this section enable the generation of ng-feasible paths while retaining an effective dynamic programming decomposition in our bi-level label-setting algorithm. Properties <ref>, <ref>, <ref> and <ref> for (()) are satisfied with: s = ( s, t(s), b(s), {i ∈Π(s)}_i ∈_T, {i ∈Ω(s)}_i ∈_T, {i ∈Π^-1(s)}_i ∈_T) σ = ( σ, σ, -σ, {i ∈Π(σ)}_i ∈_T) An extension s ⊕ a of an ng-feasible partial subpath s is ng-feasible if and only if ∉Π(s), where a = (s, ). An extension σ⊕ s of an ng-feasible subpath sequence σ is ng-feasible if and only if Π(σ) ∩Π^-1(s) ⊆{s}. These extensions yield the following updates: Π(s ⊕ a) = ( Π(s) ∩ N_) ∪{} Ω(s ⊕ a) = Ω(s) ∩ N_ Π^-1(s ⊕ a) = Π^-1(s) ∪ ( {}∩Ω(s) ) Π(σ⊕ s) = Π(s) ∪( Π(σ) ∩Ω(s) ) In summary, although our bi-level label-setting algorithm is uni-directional, it requires domination criteria based on forward and backward ng-sets to guarantee ng-feasibility, because multiple non-dominated subpaths can extend subpath sequences between the same pair of nodes in our second-level procedure. Computationally, since Π(s) ⊆ N_i, Π^-1(s) ⊆ N_i, and Ω(s) ⊆ N_i, the state space of ng-resources is at most 2^3|N_i| for ng-feasible partial subpaths ending in node i, versus 2^|_T| with full elementarity, thus alleviating the computational requirements of our algorithm. Finally, our general framework from Section <ref> (namely, Properties <ref>, <ref>, <ref>, and <ref>) enables us to extend Theorems <ref> and <ref>, so the column generation algorithm can solve any ng-relaxation (()). Using adaptive ng-relaxations, we conclude that Steps 1–3 of Algorithm <ref> solve () without ever using the expensive elementarity domination criteria is and iσ. 
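A minimal sketch of these resource updates and of the combination test is given below, assuming nodes are integers and ng-neighborhoods are given as a dictionary of sets; all names are illustrative. Under the definitions above, a single-node subpath at node n_0 carries the resources ({n_0}, N[n_0], {n_0}).

```julia
# ng-resources of a (partial) subpath: forward ng-set Π, ng-residue Ω, backward ng-set Π⁻¹.
struct NgResources
    fwd::Set{Int}    # Π(s)
    res::Set{Int}    # Ω(s)
    bwd::Set{Int}    # Π⁻¹(s)
end

# Arc extension of a subpath to node `v`: allowed only if v ∉ Π(s).
function extend_ng(r::NgResources, v::Int, N::Dict{Int, Set{Int}})
    v in r.fwd && return nothing
    return NgResources(union(intersect(r.fwd, N[v]), Set([v])),      # Π(s ⊕ a)
                       intersect(r.res, N[v]),                       # Ω(s ⊕ a)
                       union(r.bwd, intersect(Set([v]), r.res)))     # Π⁻¹(s ⊕ a)
end

# Combining a subpath sequence (resources rseq, ending at charging station cs) with a
# subpath (resources rsub): ng-feasible iff Π(σ) ∩ Π⁻¹(s) ⊆ {cs}.
combine_ok(rseq::NgResources, rsub::NgResources, cs::Int) =
    issubset(intersect(rseq.fwd, rsub.bwd), Set([cs]))

# Forward ng-set of the combined sequence: Π(σ ⊕ s) = Π(s) ∪ (Π(σ) ∩ Ω(s)).
combine_fwd(rseq::NgResources, rsub::NgResources) =
    union(rsub.fwd, intersect(rseq.fwd, rsub.res))
```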
Our results in Section <ref> show the significant computational benefits of this algorithmic approach. §.§ Cutting planes: Limited-memory Subset-Row Inequalities (lm-SRI) §.§.§ lm-SRI cuts. <cit.> defined subset-row inequalities (SRIs) as rank-1 Chvátal-Gomory cuts from elementarity constraints (Equation (<ref>)): for any subset S ⊆_T, and non-negative weights w_i | i ∈ S, the following constraints define valid inequalities for (): ∑_i ∈ S∑_p ∈ w_i γ_i^p z^p ≤∑_i ∈ S w_i ∑_p ∈α_S, (p) z^p ≤⌊∑_i ∈ S w_i ⌋, with α_S, (p) = ⌊∑_i ∈ S w_i γ_i^p⌋ <cit.> extended these into limited-memory SRIs (lm-SRIs), by defining coefficients α_(S, M, )(p) for any S ⊆_T, S ⊆ M ⊆ (M is called memory), and w_i | i ∈ S, such that ∑_p ∈α_S, M, (p) z^p ≤⌊∑_i ∈ S w_i ⌋ is valid for (). These coefficients were originally defined algorithmically (Algorithm <ref> in <ref>); we provide instead an algebraic definition: Consider a path p with node sequence { n_0, ⋯, n_m}. Let I_1, ⋯, I_r be the non-overlapping sets of consecutive indexes in {0, ⋯, m} such that n_i ∈ M_q i ∈ I_1 ∪⋯∪ I_r. Then α_(S, M, )(p) = ∑_ℓ=1^r⌊∑_i ∈ I_ℓn_i ∈ S_q w_n_i⌋. Note that lm-SRI cuts generalize SRI cuts because α_S, M, (p) = ⌊α_S, (p) ⌋ with full memory (i.e., if M = _T). In our implementation, to simplify the separation problem, we restrict our attention to lm-SRI cuts with |S| = 3 and w_i = 1/2 for all i ∈ |S| (as in <cit.>). We index the lm-SRI cuts by q∈, and let (S_q,M_q, ^q, λ_q) | q ∈ store the sets S_q⊆_T, the memories M_q, the weight vectors ^q, and the dual variables λ_q of Equation (<ref>). The reduced cost of a path becomes: p = c^p - ∑_j ∈_Dp=jκ_j - ∑_j ∈_Dp=jμ_j - ∑_i ∈_Tγ_i^p ν_i - ∑_q ∈λ_q ·α_S_q, M_q, ^q(p) Note that lm-SRI cuts are non-robust, in that they alter the structure of the pricing problem. In traditional (path-based) label-setting, each lm-SRI cut requires an extra label p called forward lm-SRI resource. However, this domination criterion is no longer sufficient in our bi-level label-setting algorithm. In this sense, lm-SRI cuts are analogous to ng-relaxations, since the ng-sets { N_i | i ∈} can be viewed as memory tracking the elementarity of a node sequence; similarly, the sets M_q serve as memory for keeping track of visits to each node i ∈ S_q in the reduced cost computation (Equation (<ref>)). Again, this structure necessitates extended—bidirectional—domination criteria. §.§.§ lm-SRI cuts in our bi-level label-setting algorithm. We capture lm-SRI cuts via two extra domination labels for subpaths, which characterize forward and backward lm-SRI resources. Consider a subpath s with node sequence { n_0, ⋯, n_m}, a cut q with S_q ⊆ M_q and ^q, and I_1, ⋯, I_r from Definition <ref>. The forward and backward lm-SRI resources are: s = n_m ∈ M_qfrac( ∑_i ∈ I_rn_i ∈ S_q w^q_n_i), s = n_0 ∈ M_qfrac( ∑_i ∈ I_1n_i ∈ S_q w^q_n_i) The backward lm-SRI resource is equivalent to the forward lm-SRI resource of the reverse node sequence. Together, they track the term - λ_q α_S_q, M_q, ^q(p) of the reduced cost contribution (Equation (<ref>)) when combining subpaths into paths. Specifically, the forward lm-SRI resource computes the contribution from the memory in the subsequent subpath, and the backward lm-SRI resource computes the contribution in the preceding subpath. Proposition <ref> (resp. Proposition <ref>) uses these labels to build domination criteria for (()) (resp. (())). In particular, the proof relies on the fact that S_q⊆_T, so that charging stations do not contribute to forward and backward lm-SRI resources. 
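As an illustration of the algebraic definition above, the following sketch computes α_(S, M, w)(p) for a given node sequence; names are ours. For instance, with |S| = 3 and w_i = 1/2, a single memory run visiting two of the three tasks in S contributes ⌊1⌋ = 1.

```julia
# Compute the lm-SRI coefficient: split the node sequence into maximal runs of
# consecutive nodes inside the memory M, accumulate the weights of S-nodes within
# each run, and round each run's total down.
function lm_sri_coefficient(nodes::Vector{Int}, S::Set{Int}, M::Set{Int}, w::Dict{Int, Float64})
    alpha = 0
    run_weight = 0.0
    for v in nodes
        if v in M
            run_weight += (v in S ? w[v] : 0.0)
        else                                   # leaving the memory closes the current run
            alpha += floor(Int, run_weight)
            run_weight = 0.0
        end
    end
    return alpha + floor(Int, run_weight)      # close the last run
end
```

Since S_q ⊆ _T, charging stations never add weight within a run.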
This property enables the decomposability of the forward and backward lm-SRI resources across subpaths, thus exploiting the subpath-based decomposition structure of our bi-level label-setting algorithm to ensure correctness when integrating lm-SRI cuts. Properties <ref>, <ref>, <ref> and <ref> for (()) are satisfied with the domination criteria from Proposition <ref>, after replacing s_1≤s_2 in the definition of s_1 s_2 with: s_2-s_1 ≥ - ∑_q ∈ λ_q U(s_1) ⊈M_q, U(s_2) ⊈M_q ( s_1 > s_2 + s_1 > s_2 ) - ∑_q ∈ λ_q U(s_1) ⊈M_q, U(s_2) ⊆M_q ( s_1 > s_2, s_1 > s_2, s_1 > s_2 + s_2 - s_1 ≤s_2, s_1 ≤s_2, s_1 ≤s_2 + s_2 - 1 + 1 ) - ∑_q ∈ λ_q U(s_1) ⊆M_q, U(s_2) ⊈M_q ( s_1 > s_2, s_1 > s_2, s_1 + s_1 - 1 > s_2 - s_1 ≤s_2, s_1 ≤s_2, s_1 + s_1 ≤s_2 + 1 ) - ∑_q ∈ λ_q U(s_1) ⊆M_q, U(s_2) ⊆M_q ( s_1 > s_2 ) and after replacing σ_1≤σ_2 in the definition of σ_1 σ_2 with: σ_2-σ_1≥ - ∑_q ∈λ_q σ_1 > σ_2 Extensions yield the following updates, which, again, are amenable to dynamic programming: s ⊕ a = s + c(s, ) - ∈_Tν_ - ∈_Dμ_ - ∑_q ∈λ_q s + w_^q ≥ 1 ∈ S_q s ⊕ a = 0 if ∉ M_q frac( s + ∈ S_q w^q_) if ∈ M_q s ⊕ a = frac( s + U(s) ⊆ M_q∈ S_q w^q_) σ⊕ s = σ + δ·τ + s - ∑_q ∈λ_q σ + s≥ 1 σ⊕ s = frac( s + U(s) ⊆ M_qσ) Again, the general framework from Section <ref> extends Theorems <ref> and <ref> in the presence of lm-SRI cuts. In turn, Algorithm <ref> solves the ERSP relaxation with elementary paths and lm-SRI cuts. §.§ Summary Algorithm <ref> tightens the ERSP relaxation using adaptive ng-relaxations to enforce elementarity requirements and lm-SRI cuts to eliminate fractional solutions. The main difficulty is to ensure the validity of our bi-level label-setting algorithm to solve the resulting pricing problems. In response, we have proposed forward and backward domination criteria that carry over domination patterns when combining subpaths into full paths. Leveraging these results (Propositions <ref>, <ref>, <ref>, and <ref>) and those from Section <ref> (Theorem <ref>), we obtain a guarantee of finite convergence and exactness of the resulting column generation algorithm. This is formalized in Theorem <ref>. Algorithm <ref> terminates in a finite number of iterations. Steps 1–3 return (), and Steps 1–4 return a solution such that () ≤≤(). § COMPUTATIONAL RESULTS We evaluate the numerical performance of our bi-level label-setting algorithm toward solving large-scale ERSP instances without time windows. We generate synthetic instances in a rectangular area armed with a Euclidean distance. Depots are located in the four corners and charging stations at other lattice points. Tasks are uniformly generated within the rectangle. We consider a linear battery depletion rate μ per unit of distance. We vary the number of tasks |_T|, the geographic area, the scaled time horizon T/B. We create 20 randomized instances for each combination of parameters. Throughout, we report the relaxation bounds from the column generation algorithms and the optimality gap achieved with a primal solution obtained by solving the master problem with integrality constraints upon termination. This problem features a highly complex combinatorial optimization structure due to the multiple depots, the presence of multiple charging stations (which lead to long paths and the difficulties of coordinating routing-scheduling and charging decisions, as discussed in this paper) and the absence of time windows (which restricts pruning in the label-setting algorithms, leading to a large number of partial paths for any number of tasks). 
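For concreteness, the following sketch generates synthetic instances of the type described above; the exact parameterization used in the paper (e.g., grid spacing, number of stations, cost structure) may differ, and all names are ours.

```julia
using Random

# Illustrative instance generator: depots in the four corners of a width × height
# rectangle, charging stations on interior lattice points, tasks sampled uniformly,
# Euclidean travel times, and a linear battery depletion rate μ per unit of distance.
function generate_instance(n_tasks::Int, width::Float64, height::Float64, μ::Float64;
                           grid_step::Float64 = 1.0, seed::Int = 0)
    rng = MersenneTwister(seed)
    depots   = [(0.0, 0.0), (width, 0.0), (0.0, height), (width, height)]
    stations = [(x, y) for x in grid_step:grid_step:(width - grid_step)
                       for y in grid_step:grid_step:(height - grid_step)]
    tasks    = [(width * rand(rng), height * rand(rng)) for _ in 1:n_tasks]
    nodes    = vcat(depots, stations, tasks)
    dist(p, q) = sqrt((p[1] - q[1])^2 + (p[2] - q[2])^2)
    travel   = [dist(p, q) for p in nodes, q in nodes]    # travel times t(i, j)
    battery  = μ .* travel                                # battery utilization b(i, j)
    return (nodes = nodes, depots = depots, stations = stations, tasks = tasks,
            travel = travel, battery = battery)
end
```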
All models are solved with Gurobi v10.0, using the JuMP package in Julia v1.9 <cit.>. All runs are performed on a computing cluster hosting Intel Xeon Platinum 8260 processors, with a one-hour limit <cit.>. To enable replication, source code and data can be found in an online repository. §.§.§ Benefits of bi-level label-setting algorithm. We first compare the computational times of our bi-level label-setting algorithm for the pricing problem to the path-based label-setting benchmark of <cit.>. This benchmark applies a label-setting procedure to generate full paths using domination criteria comprising reduced cost, time, time minus charge and additional labels to handle charging decisions. In contrast, our bi-level label-setting algorithm generates subpaths between charging actions and combines them into paths, using the domination criteria specified in Propositions <ref> and <ref>. Since the benchmark cannot accommodate heterogeneous charging costs, we assume here that δ(i) = 0 for all i ∈_R and therefore focus on (). Table <ref> reports the average time of the column generation algorithm as a function of the number of tasks, the area, and the scaled time horizon. We implement our algorithm and the benchmark with three path sets: [(i)] * no elementarity (i.e., =); * full elementarity (i.e., =); and * a static ng- relaxation (i.e., =()) with N_i comprising the ⌈√(|_T|)⌉ closest tasks for i ∈_T and N_i = { i } for i∈_D∪_R. Figure <ref> summarizes the results along two axes: the scaled time horizon T/B, and task density per unit area, for and (). These results show that our bi-level label-setting algorithm results in significant computational improvements against the path-based benchmark. By design, both algorithms generate the same relaxation bounds in the same number of iterations. However, in all instances solved by the path-based benchmark, column generation terminates 50%–90% faster when solving the pricing problem with our bi-level label-setting algorithm. These benefits are highly robust across parameter settings and relaxations. Moreover, our algorithm scales to larger and more complex instances than the benchmark, with full elementarity and 20–24 tasks. These results highlight the impact of the methodology developed in this paper on the computational performance of the pricing problem, hence of the overall column generation algorithm. Figure <ref> shows that the benefits of the bi-level label-setting algorithm are strongest with a larger scaled time horizon and a higher task density. These axes correlate with the number of subpaths per path and the length of each subpath, respectively. In other words, the algorithm is most impactful when each subpath encapsulates multiple tasks and each path encapsulates multiple subpaths. In this regime, the algorithm enables effective decomposition by replacing a large dynamic program with many small ones at the first level and a moderately-sized one at the second level. §.§.§ Benefits of forward and backward domination criteria for ng-relaxations. We compare the solution obtained with static and adaptive ng-relaxations to the solutions obtained with no and full elementarity restrictions. The implementations with no elementarity (=), full elementarity (=) and static ng-relaxations (=()) rely on Steps 1–2 of Algorithm <ref>. For the static ng-relaxations, we consider ng-neighborhoods comprising the closest N_ng tasks to each node, with N_ng = ⌈√(|_T|)⌉, N_ng = ⌈√(|_T|)⌉, and N_ng = ⌈|_T| / 3⌉. 
These three settings correspond to small, medium, and large ng-neighborhoods, respectively. The adaptive ng-relaxations start from those same ng-neighborhoods and then apply Steps 1–3 of the algorithm to iteratively tighten the ng-relaxation. Recall, importantly, that static and adaptive ng-relaxations require our forward and backward domination criteria from Section <ref>, as opposed to relying on the basic scheme from Section <ref>. Table <ref> reports the average computational times, relaxation bounds (normalized to the best bound ()) and optimality gaps for each relaxation and three different problem sizes. The main observation is that ng-relaxations provide significant accelerations versus the full elementary relaxation, and much stronger relaxations versus the basic relaxation with no elementarity restriction. Notably, the no-elementarity relaxation leaves a very large optimality gap ranging from 50% to 100%; in comparison, the adaptive ng-relaxations improve the relaxation bound by 15% and bring the optimality gaps down to 5–10%. The adaptive ng-relaxations consistently return the strongest possible relaxation in a fraction of the time as compared to the basic column generation scheme on the full elementary relaxation (). For example, our algorithm terminates in less than 20 seconds with 20 tasks, versus over half an hour when solving () directly; and it scales to larger problems on which the () relaxation fails to terminate within one hour. The adaptive ng-relaxations yield the tightest possible relaxation bound regardless of the initial ng-neighborhoods. Interestingly, they terminate slightly faster with smaller initial neighborhoods, although the static ng-relaxations get tighter as the ng-neighborhoods become larger. Thus, these results indicate the strength of the adaptive procedure itself in generating strong ng-neighborhoods efficiently. These observations underscore the computational benefits of relying on labels driven by the size of the ng-neighborhoods, as opposed to one label per task with the full elementarity restriction. They also highlight the benefits of our tailored forward and backward domination criteria in our bi-level label-setting algorithm, as compared to relying on the basic criteria from Section <ref>. §.§.§ Algorithm scalability. We conclude these experiments by reporting results of the full solution algorithm (Algorithm <ref>), incorporating ng-relaxations and lm-SRI cuts—using both sets of forward and backward domination criteria provided in Propositions <ref> and <ref>. Figure <ref> plots the optimality gap and computational times for the ERSP-Hom and the ERSP-Het using the basic column generation scheme (Steps 1–2 of Algorithm <ref>), the ng-relaxation (Steps 1–3) and the lm-SRI cuts (Steps 1–4). The lm-SRI cuts are instrumental in tightening the relaxation of the ERSP (Figure <ref>). As noted earlier, the elementary relaxation (obtained with the adaptive ng-relaxations) leaves an optimality gap of 5–10%, but the lm-SRI cuts reduce the gap to 0.2–5%. As expected, these improvements come at the cost of longer computational times (Figure <ref>), since the pricing problem uses an extra domination label per cut (Equation (<ref>)). Still, the algorithm returns provably near-optimal solutions (within 5% of the optimum) in manageable computational times (within one hour) for problems with up to 40 task nodes. 
The algorithm returns consistent optimality gaps—if anything, slightly lower ones—as charging costs become more heterogeneous across charging stations (Figure <ref>). As expected, more charging cost levels increase computational times (Figure <ref>) due to the extra domination labels (Proposition <ref>). Nonetheless, the overall stability in computational times indicates our algorithm's ability to handle heterogeneous charging costs in the ERSP, with similarly high-quality solutions and only slightly longer computational times. Finally, Figure <ref> shows that our methodology results in a Pareto improvement over state-of-the-art methods for the ERSP-Hom: better primal solutions and stronger relaxation bounds in shorter computational times. The state-of-the-art benchmark considered here combines the path-based label-setting algorithm from <cit.> (already considered in Table <ref>) with adaptive ng-relaxations and lm-SRI cuts. Note that the ng-relaxations and lm-SRI cuts only require the forward domination criteria in the benchmark, as opposed to forward and backward domination criteria in our bi-level label-setting algorithm. In medium-scale instances (Figure <ref>), our algorithm achieves a tight optimality gap in seconds to minutes, versus minutes to hours for the benchmark. In large-scale instances (Figure <ref>), neither method returns an optimal solution; still, our method yields a stronger primal solution and a stronger relaxation bound after 10 minutes than the benchmark after one hour, on average. Moreover, our algorithm exhibits lower performance variability across instances, which also enhances the reliability of the overall methodology. In summary, the methodology developed in this paper provides two major contributions: (i) it scales to large and otherwise-intractable ERSP-Hom instances, yielding win-win-win outcomes reflected in higher-quality solutions and tighter relaxations in faster computational times; and (ii) it provides the first solution approach to handle heterogeneous charging costs in the ERSP-Het. §.§.§ Practical impact. We conclude by assessing the practical benefits of the optimization methodology against simpler benchmarks that could be more easily implemented in practice. We first evaluate the impact of jointly optimizing routing-scheduling and charging decisions. Figure <ref> reports the percentage improvements of our solution against a sequential route-then-charge benchmark for the ERSP-Hom. This benchmark first optimizes routing-scheduling decisions without consideration for charging requirements (using traditional routing-scheduling algorithms), and then appends charging decisions to ensure sufficient battery levels. Results show that the integrated optimization approach can yield up to 8% reductions in operating costs. The gains become smaller as the scale of the problem increases due to the difficulty of finding near-optimal solutions in the integrated problem. Nonetheless, the benefits of integrated optimization can be highly significant, especially under low task density—that is, when charging decisions become more critical. Next, we evaluate the impact of capturing heterogeneous charging costs in the ERSP-Het—an important feature in practice, as discussed earlier <cit.>. Figure <ref> compares the solution to one obtained with the ERSP-Hom model, using existing algorithms. Results show that the ERSP-Het solution results in 5–20% reductions in charging costs. These benefits are again most significant under low task density. 
Moreover, they increase as the number of different charging cost levels gets larger, in which case accounting for heterogeneous charging costs becomes more important. We also observe non-increasing returns, suggesting that significant savings in charging costs can even be achieved with a small number of charging cost levels. Altogether, these findings underscore that electrification does not merely require downstream adjustments in business-as-usual operations; instead, it necessitates comprehensive re-optimization to create synergistic routing, scheduling and charging operations. Dedicated optimization tools such as the one developed in this paper can therefore yield strong performance improvements in electrified operations, both in economic terms—reduction in operating costs—and in sustainability terms—adoption of electrification technologies with a more limited environmental footprint. § CONCLUSION This paper considers an electric routing-scheduling problem, which augments canonical vehicle routing and scheduling problems with electrified operations. The problem jointly optimizes routing-scheduling and charging decisions, with flexibility regarding where, when and for how long to charge. We formulate it as a semi-infinite optimization problem given the infinite number of charging decisions. We develop a column generation methodology based on a bi-level label-setting algorithm that separates routing-scheduling and charging decisions in the pricing problem. Specifically, a first-level procedure generates subpaths between charging decisions, and a second-level procedure combines subpaths to reconstruct full paths. The methodology can accommodate, via extra labels, new modeling features (e.g., heterogeneous charging costs) and recent advances in routing algorithms (e.g., ng-relaxations and lm-SRI cuts). We formally prove that the resulting column generation algorithm terminates in a finite number of iterations with exact relaxation bounds. Extensive computational experiments yield three main takeaways. First, the bi-level label-setting algorithm achieves significant speedups as compared to traditional path-based label-setting methods, and can solve tight relaxations in manageable computational times. In turn, our methodology scales to otherwise-intractable problems, by returning higher-quality solutions in faster computational times than state-of-the-art benchmarks. Second, this paper provides the first exact methodology to handle heterogeneous charging costs in electric routing-scheduling optimization. Third, the methodology can provide strong practical benefits, with significant reductions in operating costs and a concomitant reduction in carbon emissions. At a time when decarbonization goals require fast and large-scale electrification, these benefits can magnify the adoption and impact of electrified technologies across the logistics, service and manufacturing industries. Electronic Companion § PATH-BASED LABEL-SETTING BENCHMARK We outline the path-based label-setting procedure for EVRP-Hom, which we use as a benchmark in the paper. It also introduces some techniques that are used in our bi-level label-setting procedure. Proofs from this section are omitted for conciseness, because they are similar to (and much simpler than) those of our algorithm, and they follow standard arguments in vehicle routing. §.§.§ General label-setting benchmark. 
Recall that a path starts from the source at the beginning of the planning horizon with full charge, and ends at the sink by the end of the planning horizon, while maintaining a non-negative level of charge throughout (Definition <ref>). The pricing problem seeks a path of minimal reduced cost, given in Equation (<ref>). Consider a feasible partial path p∈^∘ with node sequence {n_0, ⋯, n_m} such that s=n_m ∉_D, and with charging time sequence C(p) = τ_k | k ∈m-1, n_k ∈_R. An extension a of p comprises an arc (n_m, n_m+1) ∈ and a charging time τ_m ≥ 0 n_m∈_R. The extension is feasible if t^s + t(n_m,n_m+1) ≤ T and b^s+b(n_m,n_m+1) ≤ B if n_m∉_R; and if t^s + τ_m + t(n_m,n_m+1) ≤ T and min(b^s+τ_m,B)-b(n_m,n_m+1) ≥ 0 if n_m∈_R. We denote by p ⊕ a the extended partial path defined by node sequence U(p ⊕ a) = {n_0, ⋯, n_m, n_m+1} and charging time sequence C(p ⊕ a) = τ_k | k ∈m, n_k ∈_R. Consider partial path p∈^∘ with node sequence {n_0, ⋯, n_m} such that s=n_m ∉_D, and with charging time sequence C(p) = τ_k | k ∈m-1, n_k ∈_R. Given dual variables , , and , its reduced cost contribution is: p = ∑_l=0^m-1 ( c(n_l, n_l+1) + n_l ∈_R ·δ·τ_l - n_l+1 ∈_T ν_n_l+1 ) - κ_n_0 - n_m ∈_D μ_n_m The main difference between the extension of a path and the extension of a subpath is that the former encapsulates a charging decision if the current node is a charging station, whereas the latter is restricted to routing-scheduling decisions. Similarly, the reduced cost contribution of a partial path includes the cost of charging, whereas this cost component is moot in a partial subpath. We define necessary conditions for path-based domination criteria in Proposition <ref>. We also complement it with path-based termination criteria in Property <ref>. Proposition <ref> provides domination and non-domination criteria that satisfy these properties. [Domination criteria for paths] (·, ·, ) must satisfy: For feasible partial paths p_1,p_2∈^∘ such that p_1 p_2, and an extension a∈ of p_1 and p_2, either (a) p_2 ⊕ a∉^∘, or (b) p_1 ⊕ a∈^∘, p_2 ⊕ a∈^∘, and p_1 ⊕ a p_2 ⊕ a. [Termination criteria for paths] (·, ·, ) must satisfy: * One component of · captures the reduced cost contribution of partial path s ∈^∘. * One component of · is nonnegative, strictly monotone, and bounded by a constant. The following criteria satisfy Property <ref> and <ref> for EVRP-Hom: For =: p = (p, p), p = ( p, p, -p ), For =: p = (p, p), p = ( p, p, -p, { ip }_i ∈_T) ). An extension of path p with arc a=(p, ) and charging time τ (if applicable) yields: p ⊕a = (p, ) p ⊕a = p + c(p, ) + p ∈_R ·δ·τ - ∈_T ν_ - ∈_D μ_ p ⊕a = p + p ∈_R ·τ+ t(p, ) - p ⊕a = max{ - p - p ∈_R ·τ, -B } + b(p, ) ip ⊕a = ip + = i, ∀ i ∈_T (with elementarity constraints) Algorithm <ref> presents the path-based label-setting algorithm. This algorithm is similar to Algorithm <ref>, except that it starts and ends at a depot, and that the partial path extensions can visit charging stations in-between. As in the case of subpaths, elementarity (or relaxations thereof) of feasible paths p ∈⊂ can be imposed on the partial paths p' ∈^∘⊂^∘. Theorem <ref> shows that Algorithm <ref> yields the set of non-dominated paths with respect to path set , as long as the non-domination and domination criteria satisfy Properties <ref> and <ref>. If (·, ·, ) satisfy Property <ref> and <ref>, Algorithm <ref> returns the set of non-dominated paths with respect to . If ≠∅, then ∩≠∅. 
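For illustration, the path-label extension used in this benchmark can be sketched as follows; it mirrors the updates above, with the charging time τ chosen by the caller and forced to zero outside charging stations. Names are ours, and the dual terms ν and μ are dictionaries as in the earlier sketches.

```julia
# Illustrative path label for the benchmark: reduced-cost contribution, time, and
# remaining charge, extended jointly along an arc (i, j) and a charging time τ.
struct PathLabel
    end_node::Int
    rcost::Float64
    time::Float64
    charge::Float64
end

function extend_path(l::PathLabel, j::Int, τ::Float64, t, b, c, δ, ν, μ,
                     tasks, depots, stations, T, B)
    i = l.end_node
    charging = i in stations
    τ = charging ? τ : 0.0                                 # charging only at stations
    time = l.time + τ + t[(i, j)]
    charge = min(l.charge + τ, B) - b[(i, j)]
    (time <= T && charge >= 0.0) || return nothing         # infeasible extension
    rcost = l.rcost + c[(i, j)] + (charging ? δ * τ : 0.0) -
            (j in tasks ? ν[j] : 0.0) - (j in depots ? μ[j] : 0.0)
    return PathLabel(j, rcost, time, charge)
end
```

Note that τ is a free continuous choice whenever the current node is a charging station.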
Algorithm <ref> involves infinitely many possible extensions of any partial path ending at a charging station, due to the infinitely-sized of charging times. Accordingly, Theorem <ref> establishes the exactness of the algorithm upon termination but does not guarantee finite convergence—unlike Theorem <ref>. Any practical implementation of Algorithm <ref> must therefore specify a rule to handle the infinite number of possible extensions at charging stations <cit.>. Our paper proposes an alternative approach via a two-level label-setting algorithm that generates subpaths from and to charging actions and then combines them into full paths. § PROOFS IN SECTION 4 Properties <ref> and <ref> are necessary conditions to extend domination patterns along subpaths and subpath sequences. We complement them with technical conditions that are necessary for termination. [Termination criteria for subpaths] (·, ·, ) must satisfy: * One component of · captures the reduced cost contribution s of partial subpath s ∈. * One component of · is nonnegative, strictly monotone, and bounded by a constant. [Termination criteria for subpath sequences] (·, ·, ) and (·, ·, ) must satisfy: * One component of · captures the reduced cost contribution σ of the minimal path σ. Moreover, if σ_1 ≽σ_2 ∈, then σ_1≤σ_2. * One component of · is nonnegative, strictly monotone, and bounded by a constant. §.§ Proof of Lemma <ref>. Let p ∈ be a feasible path, with complete subpath sequence σ = {s_1, …, s_k} and charging time sequence {τ_1, …, τ_k-1}. For j ∈{1,⋯,k}, let subpath s_j have node sequence U(s_j) = { n_j,0, …, n_j,m_j}. (For consistency we must have n_j,m_j = n_j+1,0 for all j ∈{1,⋯,k-1}). By definition of a path and a subpath, we have n_1,0, n_k,m_k∈_D, n_1,m_1, n_2,0, …, n_k-1, m_k-1, n_k,0∈_R, and n_j,l∈_T for all j ∈{1,⋯,k}, l ∈{1, …, m_j-1}. Therefore, the reduced cost of p is: p = c^p - ∑_j ∈_D p = j κ_j - ∑_j ∈_D p = j μ_j - ∑_i ∈_T γ_i^p ν_i (by Equation (<ref>)) = ∑_j=1^k ∑_l=0^m_j-1 c(n_j,l, n_j,l+1) + ∑_j=1^k-1 δ(s_j) ·τ_j - κ_n_1,0 - μ_n_k,m_k - ∑_j=1^k ∑_l=0^m_j-1 ν_n_j,l+1 (by Definition <ref>) = ∑_j=1^k [ ∑_l=0^m_j-1 ( c(n_j,l, n_j,l+1) - n_j,l+1 ∈_T ν_n_j,l+1 ) - j=1 κ_n_j,0 - j=k μ_n_j,m_j ] + ∑_j=1^k-1 δ(s_j) ·τ_j = ∑_j=1^k s_j + ∑_j=1^k-1 δ(s_j) ·τ_j (by Definition <ref>) = p §.§ Proof of Proposition <ref>. We first prove Equations (<ref>)–(<ref>). Let s be a partial subpath with node sequence {n_0, …, n_m} and a = (s, ) = (n_m, n_m+1). In particular, n_m∉_D. We have: s ⊕a = (n_0, n_m+1) = (s, ). s ⊕a = ∑_ℓ=0^m-1 ( c(n_ℓ, n_ℓ+1) - n_ℓ+1 ∈_T ν_n_ℓ+1 ) - n_0 ∈_D κ_n_0 + c(n_m, n_m+1) - n_m+1 ∈_T ν_n_m+1 - n_m+1 ∈_D μ_n_m+1 = s + c(s, ) - ∈_T ν_ - ∈_D μ_. t^s ⊕a = ∑_ℓ=0^m t(n_ℓ, n_ℓ+ 1) = ∑_ℓ=0^m-1 t(n_ℓ, n_ℓ+ 1) + t(s, ) = t^s + t(s, ). b^s ⊕a = ∑_ℓ=0^m b(n_ℓ, n_ℓ+ 1) = ∑_ℓ=0^m-1 b(n_ℓ, n_ℓ+ 1) + b(s, ) = b^s + b(s, ). is ⊕a = | n ∈U(s ⊕a) | n = i | = | n ∈U(s) | n = i | + = i = is + = i. Let us prove that · and · satisfy Properties <ref> and <ref>. Property <ref>: Let s_1, s_2 ∈^∘ be partial subpaths starting in and ending in with s_1 s_2, i.e., s_1 = s_2 and s_1≤s_2 component-wise. Let a = (, ) be an arc extension of s_1 and s_2. First, s_1 ⊕ a = s_2 ⊕ a = (, ). Suppose that s_2 ⊕ a ∈^∘, i.e., t^s_2 ⊕ a∈ [0, T] and b^s_2 ⊕ a∈ [0, B]. 
We show that s_1 ⊕ a≤s_2 ⊕ a, using Definitions <ref> and <ref>: s_1 ⊕a = s_1 + c(, ) - ∈_T ν_ - ∈_D μ_ ≤s_2 + c(, ) - ∈_T ν_ - ∈_D μ_ = s_2 ⊕a t^s_1 ⊕a = t^s_1 + t(, ) ≤t^s_2 + t(, ) = t^s_2 ⊕a b^s_1 ⊕a = b^s_1 + b(, ) ≤b^s_2 + b(, ) = b^s_2 ⊕a is_1 ⊕a = is_1 + = i ≤is_2 + = i = is_2 ⊕a (with elementarity) Moreover, s_1 ⊕ a is a feasible partial subpath because: t^s_2 ⊕ a∈ [0, T] and 0 ≤ t^s_1 ⊕ a≤ t^s_2 ⊕ a t^s_1 ⊕ a∈ [0, T] b^s_2 ⊕ a∈ [0, B] and 0 ≤ b^s_1 ⊕ a≤ b^s_2 ⊕ a b^s_1 ⊕ a∈ [0, B] is_1 ⊕ a≤is_2 ⊕ a≤ 1 is_1 ⊕ a∈{0, 1} (with elementarity) Property <ref>: The first component captures the reduced cost contribution. The second component captures the time; it is non-negative, strictly monotone because min{ t_i,j : (i, j) ∈}>0, and bounded above by T. §.§ Proof of Proposition <ref>. We first introduce some definitions pertaining to subpath sequences: For a subpath sequence σ, we define its node sequence U(σ) as the node sequence of the concatenation of s_1, …, s_m, without double-counting the charging stations: U(σ⊕ s) = U(σ) ∪( U(s) ∖{s}) We first prove Equations (<ref>)–(<ref>). Consider a subpath sequence σ = {s_1, …, s_m}∈ and an extension s∈. We have: σ⊕s = (σ⊕s, σ⊕s) = (σ, s). ∀ i ∈_T, iσ⊕s = | n ∈U(σ⊕s) | n = i | = | n ∈U(σ) | n = i | + | n ∈(U(s) ∖{s}) | n = i | = iσ + is. This proves Equations (<ref>) and (<ref>). Equations (<ref>)–(<ref>) are due the following lemma. For ERSP-Hom, given a feasible subpath sequence {s_1, …, s_m}, define the subsequences σ_j = {s_1, …, s_j}. There exists a sequence of charging times {τ_1, …, τ_m-1} such that {τ_1, …, τ_j-1} is the charging sequence of σ_j for all j ∈{1, …, m}, defined by: τ_j = b^s_j+1 - B - ∑_i=1^j b^s_i Therefore, σ_j is the path with reduced cost contribution σ_j, ending at time σ_j with charge σ_j with: σ_j = ∑_i=1^j s_i + δ·∑_i=1^j b^s_i - B σ_j = ∑_i=1^j t^s_i + ∑_i=1^j b^s_i - B σ_j = B - ∑_i=1^j b^s_i Proof of Lemma <ref>. To determine σ_m for the full subpath sequence, we need to determine a sequence of charging times {τ_1, …, τ_m-1} such that all intermediate partial paths have sufficient charge. From Definition <ref>, this can be formulated as the following optimization problem: min_τ_1, …, τ_m-1 ∑_i=1^m-1 δ·τ_i s.t. ∑_i=1^j+1 b^s_i - B ≤ ∑_i=1^j τ_i ≤ ∑_i=1^j b^s_i ∀ j ∈{1,⋯,m-1} τ_i ≥0, ∀ i ∈{1,⋯,m-1} The optimal objective value must be at most ∑_i=1^m b^s_i - B (implied by the last constraint). This objective value is attainable by the solution: ∑_i=1^j τ_i = ∑_i=1^j+1 b^s_i - B, ∀ j ∈{1,⋯,m-1} This implies, for all j ∈{1,⋯,m-1}: τ_j = ∑_i=1^j+1 b^s_i - B - ∑_i=1^j b^s_i - B = max{ b^s_j+1 - x - -x , - -x } with x = B - ∑_i=1^j b^s_i = max{ b^s_j+1 - x , - -x } = max{ b^s_j+1 - x , 0 } (considering the cases x ≥ 0 and x < 0) = b^s_j+1 - B - ∑_i=1^j b^s_i For each j ∈{1,⋯,m}, the partial solution {τ_1, …, τ_j-1} is also optimal for the optimization problem defined by σ_j. Since the total amount charged is ∑_i=1^j-1τ_i = ∑_i=1^j b^s_i - B, this proves Equations (<ref>)–(<ref>). For Equation (<ref>), we have by recursion: σ_j = σ_j-1 + τ_j-1 - b^s_j = … = B - b^s_1 + τ_1 - b^s_2 + …+ τ_j-1 - b^s_j = B - ∑_i=1^j b^s_i + ∑_i=1^j-1 τ_i = B - ∑_i=1^j b^s_i + ∑_i=1^j b^s_i - B = B - ∑_i=1^j b^s_i This completes the proof of the lemma. We now verify Equations (<ref>)–(<ref>). 
Letting σ = {s_1, …, s_m} be extended by s_m+1, and defining τ = b^s_m+1 - σ, τ = b^s_m+1 - σ = b^s_m+1 - B - ∑_i=1^m b^s_i (by Lemma <ref>) = ∑_i=1^m+1 b^s_i - B - ∑_i=1^m b^s_i - B Therefore: σ⊕s_m+1 = ∑_i=1^m+1 s_i + δ·∑_i=1^m+1 b^s_i - B (by Lemma <ref>) = ∑_i=1^m s_i + δ·∑_i=1^m b^s_i - B + δ·τ+ s_m+1 = σ + δ·τ+ s_m+1 (by Lemma <ref>) σ⊕s_m+1 = ∑_i=1^m+1 t^s_i + ∑_i=1^m+1 b^s_i - B (by Lemma <ref>) = ∑_i=1^m t^s_i + ∑_i=1^m b^s_i - B + τ+ t^s_m+1 = σ + τ+ t^s_m+1 (by Lemma <ref>) - σ - τ+ b^s_m+1 = b^s_m+1 - σ - b^s_m+1 - σ = - σ - b^s_m+1 = - B - ∑_i=1^m b^s_i - b^s_m+1 (by Lemma <ref>) = - B - ∑_i=1^m+1 b^s_i = - σ⊕s_m+1 (by Lemma <ref>) Next, let us use Equations (<ref>)–(<ref>) to prove that · and ·, along with · and ·, satisfy Property <ref>. Starting with Property <ref><ref>, consider partial subpath sequences such that σ_1 σ_2, i.e., σ_1 = σ_2 and σ_1≤σ_2 component-wise. Let s be a subpath extension of σ_1 and σ_2. First, σ_1 ⊕ s = σ_2 ⊕ s = (σ_1, s). Suppose that σ_2 ⊕ s ∈, i.e., σ_2 ⊕ s∈ [0, T] and -σ_2 ⊕ s∈ [-B, 0]. We show that σ_1 ⊕ s≤σ_2 ⊕ s: σ_1 ⊕s = σ_1 + s + δ·b^s - σ_1 (Equation (<ref>)) ≤σ_2 + s + δ·b^s - σ_2 (since σ_1 ≤σ_2 and σ_1 ≥σ_2) = σ_2 ⊕s (Equation (<ref>)) σ_1 ⊕s = σ_1 + t^s + b^s - σ_1 (Equation (<ref>)) ≤σ_2 + t^s + b^s - σ_2 (since σ_1 ≤σ_2 and σ_1 ≥σ_2) = σ_2 ⊕s (Equation (<ref>)) -σ_1 ⊕s = - σ_1 + b^s - b^s - σ_1 (Equation (<ref>)) = min{ b^s - σ_1, 0 } ≤min{ b^s - σ_2, 0 } (since -σ_1 ≤-σ_2) = -σ_2 ⊕s (Equation (<ref>)) Additionally, if =, since iσ_1≤iσ_2 for all tasks i ∈_T; we have: iσ_1 ⊕s = iσ_1 + is ≤iσ_2 + is = iσ_2 ⊕s Moreover, σ_1 ⊕ s is a feasible subpath sequence because: σ_2 ⊕s ∈[0, T] and 0 ≤σ_1 ⊕s ≤σ_2 ⊕s σ_1 ⊕s ∈[0, T] -σ_2 ⊕s ∈[-B, 0] and -B ≤-σ_1 ≤-σ_1 ⊕s ≤-σ_2 ⊕s -σ_1 ⊕s ∈[-B, 0] iσ_1 ⊕s ≤iσ_2 ⊕s ≤1 iσ_1 ⊕s ∈{0, 1} Let us now prove that · and ·, along with · and ·, satisfy Property <ref><ref>. Consider a partial subpath sequence σ∈, and let s_1, s_2 ∈ be subpaths extending σ such that s_1 s_2, i.e., s_1 = s_2 and s_1≤s_2 component-wise. First, σ⊕ s_1 = σ⊕ s_2 = (σ, s_1). Suppose that σ⊕ s_2 ∈, i.e., σ⊕ s_2∈ [0, T] and -σ⊕ s_2∈ [-B, 0]. We show that σ⊕ s_1≤σ⊕ s_2: σ⊕s_1 = σ + s_1 + δ·b^s_1 - σ (Equation (<ref>)) ≤σ + s_2 + δ·b^s_2 - σ (since s_1 ≤s_2 and b^s_1 ≤b^s_2) = σ⊕s_2 (Equation (<ref>)) σ⊕s_1 = σ + t^s_1 + b^s_1 - σ (Equation (<ref>)) ≤σ + t^s_2 + b^s_2 - σ (since t^s_1 ≤t^s_2 and b^s_1 ≤b^s_2) = σ⊕s_2 (Equation (<ref>)) -σ⊕s_1 = min{ b^s_1 - σ, 0 } (Equation (<ref>)) ≤min{ b^s_2 - σ, 0 } (since b^s_1 ≤b^s_2) = -σ⊕s_2 (Equation (<ref>)) Additionally, if =, since is_1≤is_2 for all tasks i ∈_T; we have: iσ⊕s_1 = iσ + is_1 ≤iσ + is_2 = iσ⊕s_2 Moreover, σ⊕ s_1 is a feasible subpath sequence because: σ⊕s_2 ∈[0, T] and 0 ≤σ⊕s_1 ≤σ⊕s_2 σ⊕s_1 ∈[0, T] -σ⊕s_2 ∈[-B, 0] and -B ≤-σ⊕s_1 ≤-σ⊕s_2 -σ⊕s_1 ∈[-B, 0] iσ⊕s_1 ≤iσ⊕s_2 ≤1 iσ⊕s_1 ∈{0, 1} We conclude by proving Property <ref>. The first component captures the reduced cost contribution and, σ_1 σ_2 ∈ implies σ_1≤σ_2. The second component captures the time, which is again non-negative, strictly monotone, and bounded. §.§ Proof of Proposition <ref>. The proof comes in three parts. First, we formulate a linear optimization model to determine the charging times τ_1, ⋯, τ_m-1 for a given subpath sequence σ∈. Second, we introduce the linear-time dynamic program which recovers an optimal solution of the optimization problem recursively. Third, we show that the algorithm determines σ⊕ s as a function of σ. §.§.§ Linear optimization formulation. Let σ = {s_1, ⋯, s_m} be a subpath sequence. 
Let δ^i = δ(s_i) > 0 denote the charging cost at the end of subpath i ∈{1,⋯,m-1}. The following optimization problem minimizes charging costs, while ensuring that the state of charge does not exceed the battery capacity (first constraint, expressing that the charging level is less than the battery consumption) and that the machine does not run out of battery (second constraint, expressing that the charging level is less than the battery required until the next charging station). min ∑_j=1^m-1 δ^j τ_j ∑_j=1^i τ_j ≤∑_j=1^i b^s_j ∀ i ∈{1,⋯,m-1} ∑_j=1^i τ_j ≥∑_j=1^i+1 b^s_j - B ∀ i ∈{1,⋯,m-1} τ_i ≥0 ∀ i ∈{1,⋯,m-1} Per Lemma <ref>, this optimization problem finds σ∈p | p ∈(σ) since the subpath sequence determines the reduced cost contribution of the subpaths, so minimizing the charging costs is equivalent to minimizing the reduced cost for any subpath sequence. §.§.§ Dynamic programming algorithm. Algorithm <ref> presents the dynamic programming procedure to finds the charging time sequence of a subpath sequence σ. It takes as input the charge requirements of the constituting subpaths {b^s_1, ⋯, b^s_m} and the corresponding charging cost coefficients {δ^1, ⋯, δ^m-1}. It returns the charging time sequence of σ. The algorithm proceeds by greedily maximizing the time spent at the cheapest remaining charging station, and by separating the problem into the preceding sequence and the following sequence. We prove the optimality of the charging time τ_ℓ at the cheapest remaining charging station, and then proceeds to show that the two subproblems exhibit the same structure as the overall problem. This induces a binary tree decomposition of the problem, visualized in Figure <ref>, using the same example as in Figure <ref> in the main text (replicated in Figures <ref> and <ref>). If ^⋆ is optimal for (<ref>), it satisfies ∑_j=1^m b^s_j - B = ∑_j=1^m-1τ_j. §.§.§ Proof of Lemma <ref>. The equality is obvious if ∑_j=1^m b^s_j≤ B because the second constraint of (<ref>) implies that τ^⋆_i = 0 for all i=1,⋯,m-1 at optimality. Let us assume that ∑_j=1^m b^s_j > B. Per the second constraint, there exists i=1,⋯,m-1 such that τ_i^⋆>0. Let i^* = max i ∈{1,⋯,m-1} | τ^⋆_i > 0. Assume by contradiction that ∑_j=1^m-1τ_j>∑_j=1^m b^s_j - B. Then: ∑_j=1^i^* τ^⋆_j = ∑_j=1^m-1 τ^⋆_j > ∑_j=1^m b^s_j - B ≥∑_j=1^i^*+1 b^s_j - B We can define τ'_i=τ^⋆_i for i≠ i^* and τ'_i^*=τ^⋆_i^*-ε for ε>0 small enough. The solution is feasible and achieves a cost of ∑_j=1^m-1δ^j τ'_j=∑_j=1^m-1δ^j τ^⋆_j-εδ^i^*, contradicting the optimality of ^⋆. The next lemma formalizes the tree-based decomposition. It starts by identifying a charging station with the lowest unit costs, and maximizes the amount of charge added at that charging station. Specifically, the first equation shows that the minimum amount of charge is added beforehand to power the machine until the cheapest charging station. The next two equations show that the maximum admissible amount of charge is added at the cheapest charging station. Let ℓ∈δ^i | i ∈{1,⋯,m-1}. There exists an optimal solution ^⋆ for (<ref>) satisfying: ∑_j=1^ℓ-1 τ^⋆_j = ∑_j=1^ℓb^s_j - B ∑_j=1^ℓ τ^⋆_j = min{ ∑_j=1^ℓb^s_j, ∑_j=1^m b^s_j - B } τ^⋆_ℓ = min{ ∑_j=1^ℓb^s_j, ∑_j=1^m b^s_j - B } - ∑_j=1^ℓb^s_j - B §.§.§ Proof of Lemma <ref>. Suppose by contradiction that ∑_j=1^ℓ-1τ^⋆_j > ∑_j=1^ℓ b^s_j - B. Let i <ℓ be the largest index such that τ^⋆_i > 0. 
There exists ε > 0 small enough such that one can decrement τ^⋆_i by ε and increment τ^⋆_ℓ by ε while maintaining feasibility, resulting in a decrease in the objective by ε· (δ^i - δ^ℓ) ≥ 0. The process is repeated until ^⋆ satisfies ∑_j=1^ℓ-1τ^⋆_j = ∑_j=1^ℓ b^s_j - B. Therefore, there exists an optimal solution such that ∑_j=1^ℓ-1τ^⋆_j = ∑_j=1^ℓ b^s_j - B. Next, the second equality is obvious if ∑_j=1^m b^s_j - B =0 because no charging is required in that case, so ∑_j=1^ℓτ^⋆_j=0. Let us assume that ∑_j=1^m b^s_j - B >0. By contradiction, assume that ∑_j=1^ℓτ^⋆_j < min{∑_j=1^ℓ b^s_j, ∑_j=1^m b^s_j - B }≤∑_j=1^m b^s_j - B. Using Lemma <ref>, this implies that ∑_j=ℓ+1^m-1τ_j=∑_j=1^m-1τ_j-∑_j=1^ℓτ_j=(∑_j=1^m b^s_j - B)-∑_j=1^ℓτ_j>0. Let i >ℓ be the first index such that τ^⋆_i > 0. Due to the assumption that ∑_j=1^ℓτ^⋆_j<∑_j=1^ℓ b^s_j, there exists ε > 0 such that one can decrement τ^⋆_i by ε and increment τ^⋆_ℓ by ε while maintaining feasibility. This deviation decreases cost by ε· (δ^i - δ^ℓ) > 0. Therefore, there exists an optimal solution satisfying the second equality. The third equality is obtained by merely subtracting the first two equations. We can now prove that Algorithm <ref> recovers an optimal solution of (<ref>). Let us introduce the truncation of the overall problem between charging stations c_1 and c_2: ρ(c_1,c_2)=min ∑_j=c_1^c_2-1 δ^j τ_j ∑_j=c_1^i τ_j ≤∑_j=c_1^i b^s_j ∀ i ∈{c_1,⋯,c_2-1} ∑_j=c_1^i τ_j ≥∑_j=c_1^i+1 b^s_j - B ∀ i ∈{c_1,⋯,c_2-1} τ_i ≥0 ∀ i ∈{c_1,⋯,c_2-1} With this notation, Equation (<ref>) is equivalent to ρ(1,m). We prove by induction over the number of subpaths c_2-c_1 that ρ(c_1,c_2) can be determined by Algorithm <ref>. The result is true for c_2-c_1=1 as a direct corollary of Lemma <ref>. Let us assume that it is true for c_2-c_1-1 and prove it for c_2-c_1. Let ℓ∈δ^i | i ∈{1,⋯,m-1} (breaking ties by taking the largest index). Per Lemma <ref>, one can separately optimize over {τ_1, ⋯, τ_l-1} and {τ_l+1, ⋯, τ_m-1}: [t]0.42 min ∑_j=1^ℓ-1 δ^j τ_j s.t. ∑_j=1^i τ_j ≥∑_j=1^i+1 b^s_j - B ∀ i ∈{1,⋯,ℓ-1} ∑_j=1^i τ_j ≤∑_j=1^i b^s_j ∀ i ∈{1,⋯,ℓ-1} τ_i ≥0 ∀ i ∈{1,⋯,ℓ-1} [t]0.54 min ∑_j=ℓ+1^m-1 δ^j τ_j s.t. ∑_j=ℓ+1^i τ_j ≥∑_j=1^i+1 b^s_j - B - min{ ∑_j=1^ℓ b^s_j, ∑_j=1^m b^s_j - B } ∀ i ∈{ℓ+1, ⋯, m-1} ∑_j=ℓ+1^i τ_j ≤∑_j=1^i b^s_j - min{ ∑_j=1^ℓ b^s_j, ∑_j=1^m b^s_j - B } ∀ i ∈{ℓ+1, ⋯, m-1} τ_i ≥0 ∀ i ∈{ℓ+1, ⋯, m} Problem () is a smaller version of (<ref>), equal to ρ(1,ℓ-1). If ∑_j=1^ℓ b^s_j≥∑_j=1^m b^s_j - B, then Problem () has { 0, ⋯, 0 } as an optimal solution because the first constraint amounts to ∑_j=ℓ+1^iτ_j≥ 0. Otherwise, the first constraint amounts to ∑_j=ℓ+1^iτ_j≥∑_j=ℓ+1^i+1 b^s_j - B, and the second constraint amounts to ∑_j=ℓ+1^iτ_j≤∑_j=ℓ+1^i b^s_j. In that case, Problem () is equivalent to ρ(ℓ+1,m). Per the induction hypothesis, both problems can be solved by Algorithm <ref>. This completes the proof that Algorithm <ref> recovers an optimal solution of (<ref>). §.§.§ Recovering σ⊕ s from σ. Here, we show how the charging time sequence for the subpath sequence σ = {s_1, ⋯, s_m}∈ and charge cost coefficients {δ^1, ⋯, δ^m-1} as computed by Algorithm <ref> can be modified when σ is extended by the subpath s_m+1 starting with charge cost coefficient δ^m = δ(s_m). The proof relies on the binary tree representation (Figure <ref>) and the rebalancing of charging times from more to less expensive charging stations. Let = {τ_1, ⋯, τ_m-1} be the charging time sequence of σ. 
We define: ℓ = δ^i | i ∈{1,⋯,m-1} (we break ties by taking the largest index) ω_1(σ) = max i ≥ℓ|δ^i = δ_1 ω_2(σ) = max i ≥ℓ, i ≥ω_1(σ) |δ^i = δ_2 ⋯ ω_D(σ) = max i ≥ℓ, i ≥ω_1(σ), ⋯, i ≥ω_D-1(σ) |δ^i = δ_D , where by convention ω_1(σ),⋯,ω_D(σ) is 0 if the set is empty. These variables ω_1(σ), ⋯, ω_D(σ) define indices i where the subpath sequence stops at a charging station associated with cost δ_1,⋯,δ_D at later stages (after visiting the cheapest charging station). Intuitively, Lemma <ref> showed that as much charging as possible is performed at charging station ℓ. Thus, upon extending σ by a subpath, the rebalancing involves increasing the extent of charging performed at charging stations ℓ+1,⋯,m-1 in order to power the last subpath; variables ω_1(σ),⋯,ω_D(σ) index charging stations at which τ_i can potentially increase, by increasing order of unit charging cost. We also define Z_d(σ) as the extent of “rebalancing” that can occur at charging stations with unit costs δ_1, …, δ_d after visiting a more expensive charging station: Z_0(σ) = 0, and Z_d(σ) = ∑_j = 1^max{ω_1(σ), ⋯, ω_d(σ)} (b^s_j - τ_j), ∀ d ∈{1, …, D} By definition Z_d(σ) increases with d, so the difference terms Z_d(σ) - Z_d-1(σ) is nonnegative. For notational convenience, we denote by Y_d(σ) = Z_d(σ) - Z_d-1(σ)≥0. The following result provides recursive expressions for the new charging sequence upon a subpath sequence extension. Consider a subpath sequence σ = {s_1, ⋯, s_m}∈; let = {τ_1, ⋯, τ_m-1} be the charging times in σ; and let s_m+1∈ be a subpath that extends σ. Let f∈{1,⋯,D} be such that δ^m = δ(s_m) = δ_f. Define τ as the extra charge required in the extended subpath sequence: τ = min{ b^s_m+1, ∑_j=1^m+1 b^s_j - B }. The new quantities are defined as follows upon the extension of σ into σ⊕ s_m+1: ℓ^new = m if δ^m ≤δ^ℓ ℓ otherwise ω_d(σ⊕s_m+1) = ω_d(σ) if d ≤f-1 m if d = f 0 if d≥ f+1 τ_j^new = τ_j + min{ Z_d(σ) - Z_d-1(σ), τ- Z_d-1(σ) } if j=ω_d(σ);d≤ f-1 τ_j for other j≤ m-1 τ- Z_f-1(σ) for j = m Z_d(σ⊕s_m+1) = 0 if 1 ≤d ≤κ Z_d(σ) - τ if κ+ 1 ≤d ≤f - 1 min{ ∑_j=1^m b^s_j, B - b^s_m+1 } otherwise where κ = max i ≤f-1 | τ≥Z_i(σ) §.§.§ Proof of Lemma <ref>. Throughout the proof, we use Q_i_1^i_2 = ∑_j=i_1^i_2 b^s_j to refer to the amount of charge used between subpaths i_1 and i_2 (inclusive). Proof of Equation (<ref>). By definition, if the last charging station m is (weakly) cheaper than the cheapest one, we update the index ℓ to m. Otherwise, the index ℓ remains unchanged. Proof of Equation (<ref>). With δ(s_m) = δ_f, there are three possibilities: – If d ≤ f-1, ω_d(σ⊕ s_m+1) = ω_d(σ). This can be proved easily by induction. First, ω_1(σ⊕ s_m+1)=maxi ≥ℓ^new|δ^i = δ_1=maxi ≥ℓ|δ^i = δ_1=ω_1(σ) because 1<f by assumption and therefore ℓ^new=ℓ. Then, assuming that the equality holds up to index d-1, we have: ω_d(σ⊕ s_m+1) = maxi ≥ℓ^new,i ≥ω_1(σ⊕ s_m+1),⋯,i ≥ω_d-1(σ⊕ s_m+1)|δ^i = δ_d = maxi ≥ℓ,i ≥ω_1(σ),⋯,i ≥ω_d-1(σ)|δ^i = δ_d =ω_d(σ), where the second equality comes from the induction hypothesis and the fact that d<f (hence, ℓ^new=ℓ per Equation (<ref>)). – If d = f, ω_f(σ⊕ s_m+1) = max i ≥ℓ, ω_1, ⋯, ω_f-1 | δ^i = δ_f =m; – If d > f, ω_d(σ⊕ s_m+1) = 0 because ω_f(σ⊕ s_m+1) = m. Proof of Equation (<ref>). Let G=| i =1,⋯,D |ω_i(σ) > 0 | and let ξ_1, ξ_2, ⋯, ξ_G store the corresponding indices. Notably, we have ω_ξ_1(σ)=ℓ. Also by construction, ω_d(σ)=0 for each d∈{ξ_g+1,⋯,ξ_g+1-1}. 
Then, Z_d(σ) is a staircase function along d, with steps (of possibly zero height) at indices d = ξ_1, ξ_2, ⋯, ξ_G: 0=Z_0(σ) = ⋯= Z_ξ_1 - 1(σ) ≤Z_ξ_1(σ) = ⋯= Z_ξ_2 - 1(σ) ≤⋯≤Z_ξ_G(σ) = ⋯= Z_D(σ) To simplify the proof, we consider the tree construction shown in Figure <ref>. For each index i ∈{1,⋯,m-1}, there must exist a function call for FindChargeSequence for which τ^⋆_i was determined for subpath sequence σ. Representing these function calls as nodes, a node's left subtree is all function calls induced by the left subproblem (), and a node's right subtree is all function calls induced by the right subproblem (). In fact, if we represent each node by the corresponding index, we obtain a sorted binary tree; each node's index is greater than all node indices in the left subtree and smaller than all node indices in the right subtree. With this construction, the nonzero elements in {ω_1(σ), ⋯, ω_D(σ) } (i.e., those at indices ξ_1, ξ_2, ⋯, ξ_G) correspond to the root node and all right children per the definition of ℓ in Step 3 of Algorithm <ref>). We proceed by induction over g such that ξ_g∈{1,⋯,f-1}. We treat ξ_G=f separately. Proof for ξ_1 When node ω_ξ_1(σ) is computed in the sorted binary tree, its left child ρ(1,ω_ξ_1(σ) - 1) remains unchanged in the old and new trees. This means that τ_1, ⋯, τ_ω_ξ_1(σ) - 1 do not change between the old and new trees. Turning to τ_ω_ξ_1(σ)=τ_ℓ, we have from Lemma <ref>: τ_ω_ξ_1(σ)^new - τ_ω_ξ_1(σ) = min{ Q_1^ω_ξ_1(σ), Q_1^m+1 - B} - min{ Q_1^ω_ξ_1(σ), Q_1^m - B} = 0 if B ≤ Q_ω_ξ_1(σ)+1^m min{ B - Q_ω_ξ_1(σ)+1^m, Q_1^ω_ξ_1} if Q_ω_ξ_1(σ)+1^m < B ≤ Q_ω_ξ_1(σ)+1^m+1 min{Q_1^m+1 - B, b^s_m+1} if Q_ω_ξ_1(σ)+1^m+1 < B Now, recall that: Z_1(σ)=Y_1(σ) = min{ Q_1^ω_ξ_1, B - Q_ω_ξ_1(σ)+1^m } We obtain Equation (<ref>) for ξ_1: min{ Y_ξ_1(σ), τ- Z_ξ_1-1(σ) } = min{ Q_1^ω_ξ_1, B - Q_ω_ξ_1(σ)+1^m , b^s_m+1, Q_1^m+1 - B } = 0 if B ≤Q_ω_ξ_1(σ)+1^m min{ B - Q_ω_ξ_1(σ)+1^m, Q_1^ω_ξ_1 } if Q_ω_ξ_1(σ)+1^m < B ≤Q_ω_ξ_1(σ)+1^m+1 (since b^s_m+1 ≥B - Q_ω_ξ_1(σ)+1^m) min{ Q_1^m+1 - B, b^s_m+1 } if Q_ω_ξ_1(σ)+1^m+1 < B (since Q_1^ω_ξ_1 > Q_1^m+1 - B) = τ_ω_ξ_1(σ)^new - τ_ω_ξ_1(σ) Assume that Equation (<ref>) holds for ξ_g-1; let us prove it for ξ_g≤ f-1 When node ω_ξ_g(σ) is computed in the sorted binary tree, its left child ρ(ω_ξ_g-1(σ)+1,ω_ξ_g(σ) - 1) remains unchanged in the old and new trees. This means that τ_ω_ξ_g-1(σ)+1, ⋯, τ_ω_ξ_g(σ)-1 do not change between the old and new trees. To see how the subsequent charging times get re-allocated, we first extend Lemma <ref> to the subsequent portion of the subpath sequence. The intuition and proof are identical to that of Lemma <ref>, meaning that as much charging as possible needs to be added at ω_ξ_g(σ) because later charging stations are more expensive. We omit the proof for conciseness. 
If ^⋆ is optimal for (<ref>), it satisfies: ∑_ j=ω_ξ_g-1(σ)+1 ^ ω_ξ_g(σ)-1 τ^⋆_j = Q_ ω_ξ_g-1(σ)+1 ^ ω_ξ_g(σ) - B ∑_ j=ω_ξ_g-1(σ)+1 ^ ω_ξ_g(σ) τ^⋆_j = min{ Q_ ω_ξ_g-1(σ)+1 ^ ω_ξ_g(σ) , Q_ ω_ξ_g-1(σ)+1 ^ m - B } τ^⋆_ω_ξ_g(σ) = min{ Q_ ω_ξ_g-1(σ)+1 ^ ω_ξ_g(σ) , Q_ ω_ξ_g-1(σ)+1 ^ m - B } - Q_ ω_ξ_g-1(σ)+1 ^ ω_ξ_g(σ) - B We obtain: τ_ω_ξ_g(σ)^new - τ_ω_ξ_g(σ) = min{ Q_ ω_ξ_g-1(σ)+1 ^ ω_ξ_g(σ) , Q_ ω_ξ_g-1(σ)+1 ^ m+1 - B } - min{ Q_ ω_ξ_g-1(σ)+1 ^ ω_ξ_g(σ) , Q_ ω_ξ_g-1(σ)+1 ^ m - B } = 0 if B ≤Q_ ω_ξ_g(σ)+1 ^ m min{ B - Q_ ω_ξ_g(σ)+1 ^ m , Q_ ω_ξ_g-1(σ)+1 ^ ω_ξ_g(σ) } if Q_ ω_ξ_g(σ)+1 ^ m < B ≤Q_ ω_ξ_g(σ)+1 ^ m+1 min{ Q_ ω_ξ_g-1(σ)+1 ^ m+1 - B , b^s_m+1 } if Q_ ω_ξ_g(σ)+1 ^ m+1 < B Moreover, we have: Z_ξ_g - 1(σ) = ∑_ j=1 ^ ω_ξ_g-1(σ) b^s_j - ∑_ j=1 ^ ω_ξ_g-1(σ) τ_j (by Equation (<ref>)) = Q_ 1 ^ ω_ξ_g-1(σ) - ∑_ j=1 ^ ω_ξ_1(σ) τ_j - ∑_ j=ω_ξ_1(σ) + 1 ^ ω_ξ_2(σ) τ_j - …- ∑_ j=ω_ξ_g-2(σ) + 1 ^ ω_ξ_g-1(σ) τ_j = Q_ 1 ^ ω_ξ_g-1(σ) - min{ Q_ 1 ^ ω_ξ_1(σ) , Q_1^m - B } - min{ Q_ ω_ξ_1(σ) + 1 ^ ω_ξ_2(σ) , Q_ ω_ξ_1(σ) + 1 ^ m - B } - …- min{ Q_ ω_ξ_g-2(σ) + 1 ^ ω_ξ_g-1(σ) , Q_ ω_ξ_g-2(σ) + 1 ^ m - B } (by Corollary <ref>) We claim that this equality is equal to min{ Q_ 1 ^ω_ξ_g-1(σ) , B - Q_ω_ξ_g-1(σ)+1 ^ m }. Let k ∈{0, …, g-1} be the largest index such that Q_ω_ξ_k(σ) + 1 ^ m ≥ B. (For ease of notation, let ω_ξ_0(σ) := 0.) We consider two cases: * If no such k exists, i.e. B ≥ Q_1^m, then min{ Q_ 1 ^ω_ξ_g-1(σ) , B - Q_ω_ξ_g-1(σ)+1 ^ m } = Q_ 1 ^ω_ξ_g-1(σ) and Z_ξ_g - 1(σ)=Q_1^ω_ξ_g-1(σ)-0-0-⋯-0=Q_1^ω_ξ_g-1(σ). * Otherwise, if k ≠ g-1, we have that for all h ∈{1, …, k}: Q_ ω_ξ_h(σ) + 1 ^ m ≥B Q_ ω_ξ_h-1(σ) + 1 ^ m - B ≥Q_ ω_ξ_h-1(σ) + 1 ^ m - Q_ ω_ξ_h(σ) + 1 ^ m = Q_ ω_ξ_h-1(σ) + 1 ^ ω_ξ_h(σ) min{ Q_ ω_ξ_h-1(σ) + 1 ^ ω_ξ_h(σ) , Q_ ω_ξ_h-1(σ) + 1 ^ m - B } = Q_ ω_ξ_h-1(σ) + 1 ^ ω_ξ_h(σ) and for h ∈{k + 1, …, g-1}, Q_ ω_ξ_k+1(σ) + 1 ^ m < B Q_ ω_ξ_k(σ) + 1 ^ m - B < Q_ ω_ξ_k(σ) + 1 ^ m - Q_ ω_ξ_k+1(σ) + 1 ^ m = Q_ ω_ξ_k(σ) + 1 ^ ω_ξ_k+1(σ) min{ Q_ ω_ξ_k(σ) + 1 ^ ω_ξ_k+1(σ) , Q_ ω_ξ_k(σ) + 1 ^ m - B } = Q_ ω_ξ_k(σ) + 1 ^ m - B and therefore: Z_ξ_g-1(σ) = Q_ 1 ^ ω_ξ_g-1(σ) - ∑_h=1^g-1 min{ Q_ ω_ξ_h-1(σ) + 1 ^ ω_ξ_h(σ) , Q_ ω_ξ_h-1(σ) + 1 ^ m - B } = Q_ 1 ^ ω_ξ_g-1(σ) - ∑_h=1^k min{ Q_ ω_ξ_h-1(σ) + 1 ^ ω_ξ_h(σ) , Q_ ω_ξ_h-1(σ) + 1 ^ m - B } - ∑_h=k+1^g-1 min{ Q_ ω_ξ_h-1(σ) + 1 ^ ω_ξ_h(σ) , Q_ ω_ξ_h-1(σ) + 1 ^ m - B } = Q_ 1 ^ ω_ξ_g-1(σ) - ∑_h=1^k Q_ ω_ξ_h-1(σ) + 1 ^ ω_ξ_h(σ) - ∑_h=k+1^g-1 Q_ ω_ξ_h-1(σ) + 1 ^ m - B = Q_ 1 ^ ω_ξ_g-1(σ) - Q_ 1 ^ ω_ξ_k(σ) - ( Q_ ω_ξ_k(σ) + 1 ^ m - B ) - 0 - …- 0 = B - Q_ ω_ξ_g-1(σ) + 1 ^ m ≥0 On the other hand, Q_1^m ≥B min{ Q_ 1 ^ ω_ξ_g-1(σ) , B - Q_ ω_ξ_g-1(σ)+1 ^ m } = B - Q_ ω_ξ_g-1(σ)+1 ^ m = B - Q_ ω_ξ_g-1(σ)+1 ^ m * Finally, if k = g - 1, Z_ξ_g-1(σ) = Q_ 1 ^ ω_ξ_g-1(σ) - ∑_h=1^g-1 min{ Q_ ω_ξ_h-1(σ) + 1 ^ ω_ξ_h(σ) , Q_ ω_ξ_h-1(σ) + 1 ^ m - B } = Q_ 1 ^ ω_ξ_g-1(σ) - ∑_h=1^g-1 Q_ ω_ξ_h-1(σ) + 1 ^ ω_ξ_h(σ) = Q_ 1 ^ ω_ξ_g-1(σ) - Q_ 1 ^ ω_ξ_g-1(σ) = 0 and Q_1^m ≥B min{ Q_ 1 ^ ω_ξ_g-1(σ) , B - Q_ ω_ξ_g-1(σ)+1 ^ m } = B - Q_ ω_ξ_g-1(σ)+1 ^ m = 0 This concludes that: Z_ξ_g - 1(σ) = min{ Q_ 1 ^ω_ξ_g-1(σ) , B - Q_ω_ξ_g-1(σ) + 1 ^ m }. Still using Corollary <ref>, we have: Y_ξ_g(σ) = ∑_ j=1 ^ ω_ξ_g(σ) (b^s_j - τ_j) - ∑_ j=1 ^ ω_ξ_g-1(σ) (b^s_j - τ_j) = Q_ ω_ξ_g-1(σ) + 1 ^ ω_ξ_g(σ) - min{ Q_ ω_ξ_g-1(σ) + 1 ^ ω_ξ_g(σ) , Q_ ω_ξ_g-1(σ) + 1 ^ m - B } = min{ Q_ ω_ξ_g-1(σ) + 1 ^ ω_ξ_g(σ) , B - Q_ ω_ξ_g(σ) + 1 ^ m } Finally, recall that, using our notation, we have by definition: τ = min{ Q_m+1^m+1, Q_1^m+1 - B }. 
It remains to show that the expressions for τ_ω_ξ_g(σ)^new - τ_ω_ξ_g(σ) and min{ Y_ξ_g(σ), τ - Z_ξ_g-1(σ)} coincide. We first consider three cases and the facts they induce: * Case (a): B ≤ Q_ω_ξ_g-1(σ) + 1 ^ m. Then Z_ξ_g - 1(σ) = 0 (by Equation (<ref>)). Therefore τ- Z_ξ_g-1(σ) = min{ Q_m+1^m+1, Q_1^m+1 - B } (by Equation (<ref>)) = Q_m+1^m+1, because Q_1^m+1 - B = Q_m+1^m+1 + Q_ 1 ^ω_ξ_g-1(σ) + (Q_ω_ξ_g-1(σ) + 1 ^ m - B) ≥ Q_m+1^m+1. Moreover, we have: Y_ξ_g(σ) = min{ Q_ ω_ξ_g-1(σ) + 1 ^ ω_ξ_g(σ) , B - Q_ ω_ξ_g(σ) + 1 ^ m } (by Equation (<ref>)) = B - Q_ ω_ξ_g(σ) + 1 ^ m . (because B ≤ Q_ω_ξ_g-1(σ) + 1 ^ m) * Case (b): Q_ω_ξ_g-1(σ) + 1 ^ m < B ≤ Q_1^m. Then τ = Q_m+1^m+1 (by Equation (<ref>)), Z_ξ_g - 1(σ) = B - Q_ω_ξ_g-1(σ) + 1 ^ m (by Equation (<ref>)), and so τ - Z_ξ_g-1(σ) = Q_ω_ξ_g-1(σ) + 1 ^ m+1 - B. Also, Y_ξ_g(σ) = min{ Q_ ω_ξ_g-1(σ) + 1 ^ ω_ξ_g(σ) , B - Q_ ω_ξ_g(σ) + 1 ^ m } (by Equation (<ref>)) = Q_ ω_ξ_g-1(σ) + 1 ^ ω_ξ_g(σ) (since Q_ω_ξ_g-1(σ) + 1 ^ m < B) * Case (c): Q_1^m < B. Then τ = Q_1^m+1 - B (by Equation <ref>), Z_ξ_g - 1(σ) = Q_ 1 ^ω_ξ_g-1(σ) (by Equation <ref>), and Y_ξ_g(σ) = Q_ω_ξ_g-1(σ) + 1 ^ω_ξ_g(σ) (by Equation <ref>). We next consider three orthogonal cases, following the expression for τ_ω_ξ_g(σ)^new - τ_ω_ξ_g(σ): * Case 1: B ≤ Q_ω_ξ_g(σ)+1 ^ m. Then Y_ξ_g(σ) = 0 (Equation (<ref>)), and min{ Y_ξ_g(σ), τ - Z_ξ_g-1(σ)} = 0. * Case 2: Q_ω_ξ_g(σ)+1 ^ m < B ≤ Q_ω_ξ_g(σ) + 1 ^ m+1. Let us show that [(i)] * Y_ξ_g(σ) ≤τ - Z_ξ_g-1(σ), and * Y_ξ_g(σ) = min{ Q_ω_ξ_g-1(σ) + 1 ^ω_ξ_g(σ) , B - Q_ω_ξ_g(σ) + 1 ^ m }. We consider the subcases defined by (a), (b) and (c): * Case 2(a): Here B ≤ Q_ω_ξ_g-1(σ) + 1 ^ m. * Since τ - Z_ξ_g-1(σ) = Q_m+1^m+1, and Y_ξ_g(σ) = B - Q_ω_ξ_g(σ) + 1 ^ m, we have: B ≤Q_ ω_ξ_g(σ) + 1 ^ m+1 Y_ξ_g(σ) = B - Q_ ω_ξ_g(σ) + 1 ^ m ≤Q_m+1^m+1 = τ- Z_ξ_g-1(σ) * Since B ≤ Q_ω_ξ_g-1(σ) + 1 ^ m, we have: Y_ξ_g(σ) = B - Q_ω_ξ_g(σ) + 1 ^ m ≤ Q_ω_ξ_g-1(σ) + 1 ^ω_ξ_g(σ). * Case 2(b): Here Q_ω_ξ_g-1(σ) + 1 ^ m < B ≤ Q_1^m. * Since τ - Z_ξ_g-1(σ) = Q_ω_ξ_g-1(σ) + 1 ^ m+1 - B and Y_ξ_g(σ) = Q_ω_ξ_g-1(σ) + 1 ^ω_ξ_g(σ), we have: τ- Z_ξ_g-1(σ) = Q_ ω_ξ_g-1(σ) + 1 ^ m+1 - B = Q_ ω_ξ_g-1(σ) + 1 ^ ω_ξ_g(σ) + ( Q_ ω_ξ_g(σ) + 1 ^ m+1 - B ) ≥Y_ξ_g(σ). * Since Q_ω_ξ_g-1(σ) + 1 ^ m < B, we have: Y_ξ_g(σ) = Q_ω_ξ_g-1(σ) + 1 ^ω_ξ_g(σ) ≤ B - Q_ω_ξ_g(σ) + 1 ^ m. * Case 2(c): Here Q_1^m < B. * Since τ = Q_1^m+1 - B, Z_ξ_g - 1(σ) = Q_ 1 ^ω_ξ_g-1(σ), and Y_ξ_g(σ) = Q_ω_ξ_g-1(σ) + 1 ^ω_ξ_g(σ), we have: τ- Z_ξ_g-1(σ) ≥ Q_ ω_ξ_g-1(σ) + 1 ^ m+1 - B ≥ Q_ ω_ξ_g-1(σ) + 1 ^ m+1 - Q_ ω_ξ_g(σ) + 1 ^ m+1 = Q_ ω_ξ_g-1(σ) + 1 ^ ω_ξ_g(σ) = Y_ξ_g(σ) * Since Q_ω_ξ_g-1(σ) + 1 ^ m < B, we have: Y_ξ_g(σ) = Q_ω_ξ_g-1(σ) + 1 ^ω_ξ_g(σ) ≤ B - Q_ω_ξ_g(σ) + 1 ^ m. Therefore: min{ Y_ξ_g(σ), τ- Z_ξ_g-1(σ) } = Y_ξ_g(σ) (by (i)) = min{ Q_ ω_ξ_g-1(σ) + 1 ^ ω_ξ_g(σ) , B - Q_ ω_ξ_g(σ) + 1 ^ m } (by (ii)) = τ_ω_ξ_g(σ)^new - τ_ω_ξ_g(σ) (by Equation (<ref>)) * Case 3: Q_ω_ξ_g(σ) + 1 ^ m+1 < B. Let us show that [(i)] * Y_ξ_g(σ) ≥τ - Z_ξ_g-1(σ), and * τ - Z_ξ_g-1(σ) = min{ Q_m+1^m+1, Q_ω_ξ_g-1(σ) + 1 ^ m+1 - B }. We consider the subcases defined by (a), (b) and (c): * Case 3(a): Here B ≤ Q_ω_ξ_g-1(σ) + 1 ^ m. * Since τ - Z_ξ_g-1(σ) = Q_m+1^m+1, and Y_ξ_g(σ) = B - Q_ω_ξ_g(σ) + 1 ^ m, we have: Q_ ω_ξ_g(σ) + 1 ^ m+1 < B Y_ξ_g(σ) = B - Q_ ω_ξ_g(σ) + 1 ^ m > Q_m+1^m+1 = τ- Z_ξ_g-1(σ) * Since B ≤ Q_ω_ξ_g-1(σ) + 1 ^ m, we have: τ - Z_ξ_g-1(σ) = Q_m+1^m+1≤ Q_ω_ξ_g-1(σ) + 1 ^ m+1 - B. * Case 3(b): Here Q_ω_ξ_g-1(σ) + 1 ^ m < B ≤ Q_1^m. 
* Since τ - Z_ξ_g-1(σ) = Q_ω_ξ_g-1(σ) + 1 ^ m+1 - B and Y_ξ_g(σ) = Q_ω_ξ_g-1(σ) + 1 ^ω_ξ_g(σ), we have: Q_ ω_ξ_g(σ) + 1 ^ m+1 < B τ- Z_ξ_g-1(σ) = Q_ ω_ξ_g-1(σ) + 1 ^ m+1 - B < Q_ ω_ξ_g-1(σ) + 1 ^ ω_ξ_g(σ) = Y_ξ_g(σ). * Since Q_ω_ξ_g-1(σ) + 1 ^ m < B, we have: τ - Z_ξ_g-1(σ) = Q_ω_ξ_g-1(σ) + 1 ^ m+1 - B ≤ Q_m+1^m+1. * Case 3(c): Here Q_1^m < B. * Since τ = Q_1^m+1 - B, Z_ξ_g - 1(σ) = Q_ 1 ^ω_ξ_g-1(σ), and Y_ξ_g(σ) = Q_ω_ξ_g-1(σ) + 1 ^ω_ξ_g(σ), we have: τ- Z_ξ_g-1(σ) = Q_1^m+1 - B - Q_ 1 ^ ω_ξ_g-1(σ) = Q_ ω_ξ_g-1(σ) + 1 ^m+1 - B Therefore, either τ - Z_ξ_g-1(σ)=0 or τ - Z_ξ_g-1(σ)=Q_ω_ξ_g-1(σ)+1^m+1 - B. In both cases, we have τ - Z_ξ_g-1(σ)≤ Y_ξ_g(σ) because B > Q_ω_ξ_g(σ) + 1 ^ m+1. * This is shown above. Therefore: min{ Y_ξ_g(σ), τ- Z_ξ_g-1(σ) } = τ- Z_ξ_g-1(σ) (by (i)) = min{ Q_m+1^m+1, Q_ ω_ξ_g-1(σ) + 1 ^ m+1 - B } (by (ii)) = τ_ω_ξ_g(σ)^new - τ_ω_ξ_g(σ) (by Equation (<ref>)) Proof of Equation (<ref>) for ξ_G=f We aim to show that the charging time τ_m^new at the last charging station s_m is τ - Z_f-1(σ). By construction, τ denotes the extra charge required by subpath s_m+1, so: ∑_i=1^mτ_i^new=∑_i=1^m-1τ_i + τ We can replace the values τ_1^new,⋯,τ_m-1^new per Equation (<ref>), which gives: τ_m^new =τ-∑_i=1^m-1 (τ_i^new-τ_i) = τ- ∑_d=1^f-1 min{ Z_d(σ) - Z_d-1(σ), τ- Z_d-1(σ) } = τ- ∑_d=1^f-1 ( min{ Z_d(σ), τ- Z_d-1(σ) + Z_d-1(σ) } - Z_d-1(σ) ), where the second inequality holds because Z_d(σ)=Z_d-1(σ)=0 for all d≤ f-1 such that ω_d(σ) = 0. Recall that κ=maxi ≤ f-1 | τ≥ Z_i(σ). If κ = f - 1, Z_d(σ)≤τ for all d≤ f-1, so τ_m^new = τ - ∑_d=1^f-1(Z_d(σ)-Z_d-1(σ)) = τ - Z_f-1(σ). Otherwise, Z_κ(σ) ≤τ < Z_κ+1(σ) (and in particular τ < Z_f-1(σ)). By separating the sum into d≤κ, d=κ+1 and d≥κ+2, we derive: τ_m^new = τ - ∑_d=1^κ(Z_d(σ)-Z_d-1(σ)) - (τ-Z_κ(σ)) - ∑_d=κ+2^f-10 = 0. Hence, τ_m^new = τ - Z_f-1(σ). This completes the proof of Equation (<ref>). Proof of Equation (<ref>). * Let 1 ≤ d ≤κ≤ f - 1. Define h such that ξ_h ≤ d < ξ_h+1. We have: Z_d(σ⊕s_m+1) = Z_ξ_h(σ⊕s_m+1) = ∑_j=1^ω_ξ_h(σ⊕s_m+1) (b^s_j - τ_j^new) = ∑_j=1^ω_ξ_h(σ) (b^s_j - τ_j^new) = ∑_j=1^ω_ξ_h(σ) (b^s_j - τ_j) - ∑_i=1^h min{ Z_ξ_i(σ) - Z_ξ_i-1(σ), τ- Z_ξ_i-1(σ) } = Z_ξ_h(σ) - ∑_i=1^h ( Z_ξ_i(σ) - Z_ξ_i-1(σ)) = 0 where the third and fourth equalities follow from Equations (<ref>) and (<ref>), the fifth one stems from the fact that τ≥ Z_ξ_i(σ) for all i ≤ h, and the last one follows from telescoping. * Let 1 ≤κ + 1 ≤ d ≤ f - 1. Again define h such that ξ_h ≤ d < ξ_h+1. This implies that κ + 1 ≤ξ_h. Let h' < h be such that ξ_h'≤κ < ξ_h'+1, which implies Z_ξ_h'(σ) ≤τ < Z_ξ_h'+1(σ). We derive: Z_d(σ⊕s_m+1) = Z_ξ_h(σ) - ∑_i=1^h min{ Z_ξ_i(σ) - Z_ξ_i-1(σ), τ- Z_ξ_i-1(σ) } = Z_ξ_h(σ) - ∑_i=1^h' ( Z_ξ_i(σ) - Z_ξ_i-1(σ) ) - (τ- Z_ξ_h'(σ)) - ∑_i=h' + 1^h 0 = Z_ξ_h(σ) - τ * Let f ≤ d ≤ D. Since ω_f(σ⊕ s_m+1) = m, we have Z_f(σ⊕s_m+1) = ∑_j=1^m (b^s_j - τ_j^new) = Q_1^m - Q_1^m+1 - B (applying Lemma <ref>) = min{ Q_1^m, B - Q_m+1^m+1 } This completes the proof of Equation (<ref>), hence of Proposition <ref>. §.§ Proof of Proposition <ref>. We show the following lemma, which elicits the charging cost function shown in Figure <ref>. Note that, in the absence of rebalancing, the extra charging cost would be δ_f ·τ (red line in Figure <ref>); with rebalancing, the extra charging cost is a piece-wise linear, convex function of τ with slopes δ_1,⋯,δ_f. The difference between δ_f ·τ and the function g(·) quantifies the benefits of rebalancing. Let σ∈ be a subpath sequence, and s ∈ be a subpath extension of σ with δ(s) = δ_f for f ∈1,⋯,D. 
We can write σ⊕ s as: σ⊕ s = σ + s + g(τ; Z_1(σ), …, Z_f-1(σ)) where g is an increasing, piece-wise linear, convex function of τ parametrized by Z_1(σ),⋯,Z_f-1(σ): g(τ; z_1, …, z_f-1) = δ_1 ·τ if τ∈[0, z_1] δ_1 ·z_1 + δ_2 ·(τ- z_1) if τ∈[z_1, z_2] … ∑_d=1^f-1 δ_d ·(z_d - z_d-1) + δ_f ·(τ- z_f-1) if τ∈[z_f-1, ∞) Moreover, if z^1_d ≥ z^2_d for all d=1,⋯,f-1, then g(τ; z^1_1, …, z^1_f-1) ≤ g(τ; z^2_1, …, z^2_f-1). §.§.§ Proof of Lemma <ref>. Per Lemma <ref>, we have: σ⊕s = σ + s + ∑_d=1^f-1 δ_d ·( τ_ω_d(σ)^new - τ_ω_d(σ) ) + δ_f ·τ_m^new = σ + s + ∑_d=1^f-1 δ_d ·min{ Z_d(σ) - Z_d-1(σ), τ- Z_d-1(σ) } + δ_f ·τ- Z_f-1(σ) Recall that κ = max i ∈{0, …, f-1} | τ≥ Z_i(σ). We distinguish two cases: * If κ = f-1, then τ≥ Z_f-1(σ) and σ⊕s = σ + s + ∑_d=1^f-1 δ_d ·( Z_d(σ) - Z_d-1(σ) ) + δ_f ·( τ- Z_f-1(σ) ) * If κ<f-1, then Z_κ(σ) ≤τ < Z_κ+1(σ) and σ⊕s = σ + s + ∑_d=1^κ δ_d ·( Z_d(σ) - Z_d-1(σ) ) + δ_κ+1 ·( τ- Z_κ(σ) ) This proves the reduced cost update. We can write the function g as follows, which proves that it is increasing, piece-wise linear and convex function of τ (see Figure <ref> for an illustration): g(τ; z_1, …, z_f-1) = ∑_d=1^f-1δ_d ·min{ z_d - z_d-1, τ - z_d-1} + δ_f·τ - z_f-1 Next, assume that z^1_d ≥ z^2_d for all d=1,⋯,f-1. Define κ^1 = max i ∈{0, …, f-1} | τ≥ z_i^1 and κ^2 = max i ∈{0, …, f-1} | τ≥ z_i^2 (in particular, κ^2 ≥κ^1). Denoting z^1_0 = z^2_0 = 0, we have: g(τ; z^2_1, …, z^2_f-1) - g(τ; z^1_1, …, z^1_f-1) = ∑_d=1^κ^2+1 (δ_d - δ_d-1) ·(τ- z^2_d-1) - ∑_d=1^κ^1+1 (δ_d - δ_d-1) ·(τ- z^1_d-1) = ∑_d=1^κ^1+1 (δ_d - δ_d-1) ·( (τ- z^2_d-1) - (τ- z^1_d-1) ) + ∑_d=κ^1+2^κ^2+1 (δ_d - δ_d-1) ·(τ- z^2_d-1) ≥ 0 This completes the proof of Lemma <ref>. §.§.§ Proof of Proposition <ref>. We verify that ·, · in Proposition <ref> and ·, · in Proposition <ref> satisfy Property <ref> for EVRP-Het. Otherwise, we proceed as in the proof of Proposition <ref> to verify that Properties <ref> and <ref>–<ref> are satisfied. Proof of Property <ref><ref>. Let σ_1 = { s^1_1, …, s^1_m_1}∈ and σ_2 = { s^2_1, …, s^2_m_2}∈ be such that σ_1 σ_2. Let s be a subpath extension of σ_1 and σ_2, such that δ(s) = δ_f for some f ∈1,⋯,D. Suppose σ_2 ⊕ s is a feasible subpath sequence. As in Lemma <ref>, let us define: τ^1 = min{ b^s, ∑_j=1^m_1 b^s^1_j + b^s - B} and τ^2 = min{ b^s, ∑_j=1^m_2 b^s^2_j + b^s - B} By domination, σ_1≥σ_2, hence ∑_j=1^m_1 b^s^1_j≤∑_j=1^m_1 b^s^2_j. This implies that τ^1≤τ^2, i.e., at most as much charge is required when appending subpath s to σ_1 than to σ_2. We show that -Z_d(σ_1 ⊕ s) ≤ -Z_d(σ_2 ⊕ s) for all d ∈1,⋯,D-1, using Lemma <ref> and the facts that Z_d(σ_1) ≥ Z_d(σ_2), -σ_1≤ -σ_2, and τ^1 ≤τ^2. If d ≤ f-1, we have: Z_d(σ_1 ⊕s) = Z_d(σ_1) - τ^1 ≥Z_d(σ_2) - τ^2 = Z_d(σ_2 ⊕s). If d ≥ f, we separate two cases: If ∑_j=1^m_1 b^s^1_j≥ B - b^s: Z_d(σ_1 ⊕ s) = B - b^s ≥min{∑_j=1^m_2 b^s^2_j, B - b^s } = Z_d(σ_2 ⊕ s) If ∑_j=1^m_1 b^s^1_j < B - b^s: Z_d(σ_1 ⊕ s) = ∑_j=1^m_1 b^s^1_j≥∑_j=1^m_2 b^s^2_j≥min{∑_j=1^m_2 b^s^2_j, B - b^s } = Z_d(σ_2 ⊕ s) We next show that σ_1 ⊕ s≤σ_2 ⊕ s using Lemma <ref>: σ_1 ⊕s = σ_1 + s + g(τ^1 ; Z_1(σ_1), …, Z_f-1(σ_1)) ≤σ_2 + s + g(τ^2 ; Z_1(σ_2), …, Z_f-1(σ_2)) = σ_2 ⊕s The other components of Property <ref><ref> are proved as in Proposition <ref>. Proof of Property <ref><ref>. Let σ = { s_1, …, s_m}∈ be a partial subpath sequence. Let s^1, s^2 ∈ be subpath extensions of σ such that s^1 s^2. Let f ∈1,⋯,D be such that δ(s^1) = δ_f. Suppose that σ⊕ s^2 is a feasible subpath sequence. 
As in Lemma <ref>, let us define: τ^1 = min{ b^s^1, ∑_j=1^m b^s_j + b^s^1 - B} and τ^2 = min{ b^s^2, ∑_j=1^m b^s_j + b^s^2 - B} Again, by domination, b^s^1≤ b^s^2, so τ^1≤τ^2, i.e., at most as much charge is required when appending subpath s_1 to σ than s_2. We show that -Z_d(σ_1 ⊕ s) ≤ -Z_d(σ_2 ⊕ s) for all d ∈1,⋯,D-1, using Lemma <ref> and the facts that b^s^1≤ b^s^2 and τ^1≤τ^2. If d≤ f-1, we have: Z_d(σ_1 ⊕s) = Z_d(σ) - τ^1 ≥Z_d(σ) - τ^2 = Z_d(σ_2 ⊕s). If d ∈{f, …, D-1}, we have: Z_d(σ_1 ⊕s) = min{ ∑_j=1^m b^s_j, B - b^s^1 } ≥min{ ∑_j=1^m b^s_j, B - b^s^2 } = Z_d(σ_2 ⊕s) We next show that σ_1 ⊕ s≤σ_2 ⊕ s using Lemma <ref>: σ_1 ⊕s = σ + s^1 + g(τ^1 ; Z_1(σ), …, Z_f-1(σ)) ≤σ + s^2 + g(τ^2 ; Z_1(σ), …, Z_f-1(σ)) = σ_2 ⊕s The other components of Property <ref><ref> are proved as in Proposition <ref>. In Lemma <ref>, g(τ; z_1, …, z_f-1) represents the cost of charging τ units of charge. To ensure that g(τ; z^1_1, …, z^1_f-1) ≤ g(τ; z^2_1, …, z^2_f-1) for all τ, it is sufficient (but not necessary) for the breakpoints {z^1_d}_1,⋯,D to be componentwise larger than {z^2_d}_1,⋯,D. In fact, we can simplify the comparison by merely ensuring that z^1_d ≥ z^2_d for all d∈{1,⋯,f-1}, which reduces the domination comparisons without relying on the values of {δ_d}_1,⋯,D. §.§ Proof of Theorem <ref>. §.§.§ Finite termination. First, FindNonDominatedSubpaths terminates finitely. At each iteration, there are finitely many extensions for each partial subpath s (one for each out-neighbor of s). Letting T > 0 be the constant in Property <ref><ref>. The number of partial subpaths added to is bounded by |_R ∪_D| · (1 + |_T| + … + |_T|^⌊ T /min{t_i,j:(i,j)∈}⌋). This proves that FindNonDominatedSubpaths terminates finitely, and := { s ∈| s is a subpath} is finite. Similarly, FindSubpathSequences terminates finitely. At each iteration, there are finitely many extensions of each subpath sequence σ, one for each subpath in {s ∈|σ = s}. Due to Property <ref><ref>, the number of subpath sequences added to is bounded by |_D| · (1 + || + … + ||^⌊ T /min{t_i,j:(i,j)∈}⌋), because min{t_i,j:(i,j)∈} is also a lower bound of the duration of any non-empty subpath. This proves that FindSubpathSequences terminates finitely. §.§.§ First-level output: set of non-dominated subpaths Let ϕ(·) denote the element of []· that satisfies Property <ref><ref> (the time stamp of a subpath, in our implementation). We show that FindNonDominatedSubpaths returns exactly the set of non-dominated subpaths from . First, we show that ⊇, i.e., any feasible and non-dominated partial subpath must belong to . Assume by contradiction that there exists a non-dominated subpath s ∈∖; let us choose the one with the smallest time stamp ϕ(s), which exists per Property <ref><ref>. We then make use of the following observation: Under Property <ref>, if s is a feasible and non-dominated partial subpath and s = s' ⊕ a, then s' is a feasible and non-dominated partial subpath. The proof of the lemma distinguishes two cases. If s' is infeasible, then s is also infeasible. If s' is feasible but dominated, there exists s̅∈ such that s̅ s'; by Property <ref>, s̅⊕ a is feasible and s̅⊕ a s, which contradicts that s is non-dominated. So, let us define s' such that s=s'⊕ a; per the lemma, we have that s' ∈. By Property <ref><ref>, ϕ(s') < ϕ(s), so our construction implies that s' ∈. Consider and at the point in the algorithm where s' is moved from to . 
Then, s = s' ⊕ a is explored in Step 2 of the algorithm, and added to , and eventually move from to (since it is non-dominated). This is a contradiction, and therefore ⊇. Conversely, we show that ⊆, i.e., any feasible partial subpath s added to is non-dominated. Suppose by contradiction that there exists a partial subpath s ∈∪ and a non-dominated partial subpath s' ∈ such that s' s. As seen earlier, s' is added to at some point of the algorithm and remains in it until it gets added to . Express s' := s”⊕ a; then, we have ϕ(s)≥ϕ(s') (by domination) and ϕ(s')>ϕ(s”) (by Property <ref><ref>). At the iteration where s' is added to , ϕ(s) | s ∈ = ϕ(s”); at the iteration where s is added to , ϕ(s) | s ∈ = ϕ(s). By Property <ref><ref>, ϕ(s) | s ∈ is nondecreasing over the course of the algorithm, so s' is added to before s is added to . This would contradict s ∈, since s' s would either remove s from or prevent s from being added to , and therefore ⊆. §.§.§ Second-level output: set of non-dominated complete subpath sequences. The proof is almost identical to that of the first-level output, with a few modifications. To show that ⊆σ|σ∈, we proceed as for the first-level output by replacing Property <ref><ref> with Property <ref><ref>. To show that ⊇σ|σ∈, we replace Property <ref><ref> with Property <ref><ref> and Lemma <ref> with the following lemma. The distinction is important because, when extending subpaths in the first-level procedure, a single arc can be used between any pair of nodes; however, when extending subpath sequences in the second-level procedure, multiple subpaths can connect the same pair of nodes. Under Property <ref>, if σ is a feasible and non-dominated subpath sequence and σ = σ' ⊕ s, then (i) σ' is a feasible and non-dominated subpath sequence; and (ii) s is a feasible and non-dominated subpath. The proof of the first part is identical to that of Lemma <ref>. The proof of the second part is similar. If s is an infeasible subpath, then σ is an infeasible subpath sequence, a contradiction. If s is a dominated subpath, there exists s̅∈ such that s̅ s; by Property <ref><ref>, σ'⊕s̅∈ and σ'⊕s̅σ'⊕s̅=σ, which contradicts that σ is non-dominated. §.§.§ Finding paths of negative reduced cost paths, if one exists. Assume that a path p ∈⊆ is such that p<0. Let σ∈ be its (complete) subpath sequence. By definition of ·, σ≤p; and by Lemma <ref>, σ≤p since p and σ are complete paths. Hence, σ∈. Assume by contradiction that σ is a dominated subpath sequence; without loss of generality, there exists a non-dominated subpath sequence σ' such that σ' σ. This implies that σ'≤σ via Property <ref><ref> (which also implies Property <ref><ref>), hence, by Lemma <ref>, σ'≤σ<0. In this case, σ'∈ per the above analysis, and σ'∈. This proves that the algorithm returns a path of negative reduced cost. §.§ Proof of Theorem <ref>. Let be a set of paths. We prove the theorem via a set of claims. Claim 1: The set of complete subpath sequences is finite. This follows from similar arguments as those employed in the proof of Theorem <ref>. Indeed, there are finitely many feasible subpaths s ∈ (i.e., subpaths s such that t^s ∈ [0, T] and b^s ∈ [0, B]) because {t(i,j)|(i,j)∈} and {b(i,j)|(i,j)∈} both have positive lower bounds. Similarly, there are finitely many complete subpath sequences σ∈, because the total time of all constituting subpaths lies in [0, T] and {t(i,j)|(i,j)∈} and {b(i,j)|(i,j)∈} are also lower bounds of the set of non-empty subpaths. 
Claim 2: The minimal path minimizes the cost and the reduced cost for any subpath sequence. Consider a any (complete) feasible subpath sequence σ = {s_1, …, s_m}, σ, with charging time sequence τ^⋆_j :j ∈1,⋯,m-1). Recall that the corresponding minimal path σ is defined as the path that minimzies the reduced cost contribution p. In fact, σ does not depend on the dual variables (κ, μ, ν) across column generation iterations. This follows from Lemma <ref>: σ ∈ σ(κ, μ, ν) | p ∈(σ) = ∑_j=1^m s_j(κ, μ, ν) + ∑_j=1^m-1 δ(s_j) ·τ_j | p ∈(σ) = ∑_j=1^m-1 δ(s_j) ·τ_j | p ∈(σ) This proves the following lemma: For a complete subpath sequence σ∈, σ minimizes p, p and c^p out of all paths in sharing the subpath sequence σ, and does not depend on the dual variables (κ, μ, ν). Claim 3: The semi-infinite () formulation admits an equivalent formulation with a finite number of variables. The preceding lemma relates () to a counterpart, referred to as ' (), restricted to the minimal paths corresponding to all subpath sequences. This new formulation makes use of decision variables z^σ for all subpath sequences σ∈. Since there are finitely many subpath sequences and we consider a single minimal path per subpath sequence, ' () has finitely many variables. Both formulations are given below. () = min ∑_p ∈ c^p z^p ∑_p ∈p=j z^p = v^start_j ∀ j ∈_D ∑_p ∈p=j z^p ≥ v^end_j ∀ j ∈_D ∑_p ∈ip z^p = 1 ∀ i ∈_T z^p ∈_+, ∀ p ∈; p ∈ | z^p > 0 finite '() = min ∑_σ∈ c^σ z^σ ∑_σ∈σ=j z^σ = v^start_j ∀ j ∈_D ∑_σ∈σ=j z^σ≥ v^end_j ∀ j ∈_D ∑_σ∈iσ z^σ = 1 ∀ i ∈_T z^σ∈_+ ∀ σ∈ It remains to show that ' () is equivalent to (). Clearly, any feasible solution of ' () is feasible in () with the same objective value, so the () optimum is at most as large as the ' () optimum. Vice versa, consider a feasible solution { z^p | p ∈} of (). We construct {z^σ|σ∈} as follows (this sum is well-defined since the support of is finite): z^σ = ∑_p ∈(σ) z^p, ∀ σ∈ By construction, this solution is feasible in ' (). In particular: ∑_σ∈iσz^σ =∑_σ∈iσ∑_p ∈(σ) z^p=∑_p ∈ip z^p = 1 The other constraints can be verified similarly. Then, using Lemma <ref>: ∑_p ∈ c^p z^p = ∑_σ∈ ∑_p ∈(σ) c^p z^p ≥∑_σ∈ ∑_p ∈(σ) c^σ z^p = ∑_σ∈ c^σ z^σ Therefore, the ' () optimum is at most as large as the () optimum. This proves that ' () is equivalent to (). Claim 4: ColumnGeneration terminates finitely and converges to an optimal solution of (). This directly follows from the facts that Algorithm <ref> only adds path-variables in '() (per Lemma <ref>), that the set of subpath sequences is finite, and that the ' () and () formulations are equivalent. Finally, we propose a cutting-plane interpretation of our column generation algorithm from the duals of () and '(), referred to as () and '() and given as follows: () = max ∑_j ∈_D v_j^start·κ_j + ∑_j ∈_D v_j^end·μ_j + ∑_i ∈_Tν_i ∑_j ∈_Dp = j·κ_j + ∑_j ∈_Dp = j·μ_j + ∑_i ∈_Tγ^p_i ·ν_i ≤ c^p ∀ p ∈ κ_j ∈ ∀ j ∈_D μ_j ∈^+ ∀ j ∈_D ν_i ∈ ∀ i ∈_T '() = max ∑_j ∈_D v_j^start·κ_j + ∑_j ∈_D v_j^end·μ_j + ∑_i ∈_Tν_i ∑_j ∈_Dσ = j·κ_j + ∑_j ∈_Dσ = j·μ_j + ∑_i ∈_Tγ^σ_i ·ν_i ≤inf c^p | p ∈(σ) ∀ σ∈ κ_j ∈ ∀ j ∈_D μ_j ∈^+ ∀ j ∈_D ν_i ∈ ∀ i ∈_T Just as '() is obtained from () by aggregating path variables according to their subpath sequence, '() is obtained from () by aggregating the constraints along subpath sequences. This is again made possible by Lemma <ref>, which implies that the left-hand side of the constraints are identical for all paths sharing the same subpath sequence. 
Moreover, the infimum in the first constraint of '() exists and is attained by some path p ∈(σ), since the space of feasible paths p ∈(σ) is isomorphic to the space of feasible charging sequences, which is a polyhedral set (Equation (<ref>)). In fact, per Lemma <ref>, this infimum is attained by σ and is therefore computed by our pricing algorithm. § PROOFS IN SECTION <REF> §.§.§ Preliminaries In the main text, we defined ng-feasibility for subpaths. In fact, ng-feasibility is merely a function of the node sequence of a subpath (or a path), meaning that all subpaths sharing the same node sequence also share the same ng-feasibility properties. Accordingly, we will say interchangeably that a node sequence is ng-feasible or that a subpath is ng-feasible. Similarly, a subpath and its node sequence share the same forward ng-set, so we define the forward ng-set of node sequence U={n_0, ⋯, n_m} as that of its constituting subpaths: Π(s)=Π(U) = n_r | n_r ∈⋂_ρ = r + 1^m N_n_ρ, r ∈{0, ⋯, m-1}∪{n_m} §.§.§ Proof of Lemma <ref>. Suppose that p ∈(^2). Then p is a feasible path, so p ∈. Additionally, let q = {n_0, …, n_m} be the node sequence of p. Suppose that j < k with n_j = n_k. Since q is ng-feasible with respect to ^2, there exists a ℓ with j < ℓ < k such that n_j ∉ N^2_n_ℓ. Since N^1_n_ℓ⊆ N^2_n_ℓ, n_j ∉ N^1_n_ℓ. This shows that q is ng-feasible with respect to ^1, and that p is ng-feasible with respect to ^1. Therefore, (^2) ⊆(^1), and ((^1)) ≤((^2)). Consider an ng-neighborhood , a path p with node sequence U={n_0, …, n_m}, and an arc extension (n_m,n_m+1) ∈. We have: {n_0, …, n_m+1} is ng-feasible w.r.t. U is ng-feasible w.r.t. , and n_m+1∉Π(U) Proof of Proposition <ref>. (⇐) Let 0 ≤ j < k ≤ m+1 be such that n_j = n_k. If k ≠ m+1, then there exists ℓ with j < ℓ < k with n_j ∉ N_n_ℓ, because U is ng-feasible with respect to . If k = m+1, then n_j = n_m+1 and n_j ∉Π(U), so n_j ∉⋂_ρ = j+1^m N_n_ρ. Therefore, there exists ℓ such that j+1≤ l≤ m (i.e., j < ℓ < m+1) such that n_j ∉ N_n_ℓ. Thus, {n_0, …, n_m+1} is ng-feasible with respect to . (⇒) U is clearly ng-feasible with respect to . Assume by contradiction that n_m+1∈Π(U). There exists r ≤ m-1 such that n_r=n_m+1 and n_r ∈⋂_ρ = r+1^m N_n_ℓ. Hence for j=r and k = m+1, there does not exist any j<ℓ<k such that n_j ∉ N_n_ℓ. This implies that {n_0, …, n_m+1} is not ng-feasible with respect to , leading to a contradiction. §.§.§ Proof of Proposition <ref>. We first prove the extension of domination criteria along subpaths: Let s be an ng-feasible partial subpath, and a = (s, ) be an arc extension such that ∉Π(s). Equations (<ref>), (<ref>) and (<ref>) define the forward ng-set, backward ng-set, and ng-residue of subpath s ⊕ a. Proof of Lemma <ref>. Let s be a subpath with ng-feasible node sequence U = {n_0, …, n_m} and a = (n_m, n_m+1) be an arc extension. Since n_m+1∉Π(s), Proposition <ref> implies that s ⊕ a is ng-feasible. We extend the forward ng-set, backward ng-set, and ng-residue as follows: Π(s ⊕ a) = n_r | n_r ∈⋂_ρ = r + 1^m+1 N_n_ρ, r ∈{0, …, m}∪{ n_m+1} = ( n_r | n_r ∈⋂_ρ = r + 1^m N_n_ρ, r ∈{0, …, m-1}∩ N_n_m+1) ∪{ n_m+1} = (Π(s) ∩ N_n_m+1) ∪{ n_m+1} Ω(s ⊕ a) = ⋂_ρ = 0^m+1 N_n_ρ = ⋂_ρ = 0^m N_n_ρ∩ N_n_m+1 = Π(s)∩ N_n_m+1 Π^-1(s ⊕ a) = { n_0 }∪ n_r | n_r ∈⋂_ρ = 0^r-1 N_n_ρ, r ∈{1, …, m+1} = { n_0 }∪ n_r | n_r ∈⋂_ρ = 0^r-1 N_n_ρ, r ∈{1, …, m}∪{n_m+1} if n_m+1∈⋂_ρ = 0^m N_n_ρ { n_0 }∪ n_r | n_r ∈⋂_ρ = 0^r-1 N_n_ρ, r ∈{1, …, m} otherwise = Π^-1(s) ∪( {n_m+1}∩Ω(s) ) This completes the proof of the lemma. 
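As an operational illustration of the three updates just derived, the minimal sketch below encodes the forward ng-set, the neighborhood intersection Ω, and the backward ng-set as plain Python sets. The identifiers (NgLabel, neighborhoods, start_label, extend, etc.) are ours, and the snippet only mirrors the extension rule of Proposition <ref> and the update rule of the lemma above, not the full labeling machinery.

```python
from dataclasses import dataclass
from typing import Dict, FrozenSet

@dataclass(frozen=True)
class NgLabel:
    last: int                      # last node of the partial subpath
    fwd: FrozenSet[int]            # Pi(s): nodes that may not be appended next
    omega: FrozenSet[int]          # Omega(s): intersection of N_v over the subpath
    bwd: FrozenSet[int]            # Pi^{-1}(s): backward ng-set

def start_label(node: int, neighborhoods: Dict[int, FrozenSet[int]]) -> NgLabel:
    # A one-node subpath {n_0}: Pi = {n_0}, Omega = N_{n_0}, Pi^{-1} = {n_0}.
    return NgLabel(node, frozenset({node}), neighborhoods[node], frozenset({node}))

def can_extend(label: NgLabel, nxt: int) -> bool:
    # Proposition: the extension by arc (last, nxt) stays ng-feasible iff nxt is not in Pi(s).
    return nxt not in label.fwd

def extend(label: NgLabel, nxt: int, neighborhoods: Dict[int, FrozenSet[int]]) -> NgLabel:
    # Lemma: Pi(s + a)      = (Pi(s) ∩ N_nxt) ∪ {nxt}
    #        Omega(s + a)   = Omega(s) ∩ N_nxt
    #        Pi^{-1}(s + a) = Pi^{-1}(s) ∪ ({nxt} ∩ Omega(s))
    assert can_extend(label, nxt)
    N = neighborhoods[nxt]
    return NgLabel(last=nxt,
                   fwd=(label.fwd & N) | {nxt},
                   omega=label.omega & N,
                   bwd=label.bwd | (frozenset({nxt}) & label.omega))

# Tiny usage example on a 4-node toy graph where each neighborhood contains its own node.
if __name__ == "__main__":
    N = {v: frozenset({v, (v + 1) % 4}) for v in range(4)}
    lab = start_label(0, N)
    for v in (1, 2, 3):
        if can_extend(lab, v):
            lab = extend(lab, v, N)
    print(lab.fwd, lab.omega, lab.bwd)
```

In an actual implementation, these sets are typically stored as fixed-width bitmasks indexed within each node's ng-neighborhood, which keeps every update constant-time; the set-based version above is only meant to make the update formulas explicit.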
We then prove the extension of domination criteria along subpath sequences: Let σ be an ng-feasible subpath sequence, and s be an ng-feasible subpath extension. The extended subpath sequence σ⊕ s is ng-feasible if and only if Π(σ) ∩Π^-1(s) ⊆{s}; then, Equation (<ref>) defines its forward ng-set. Proof of Lemma <ref>. Let σ have node sequence {n_0, …, n_m} and s have node sequence {n_m, …, n_M}. Assume that Π(σ) ∩Π^-1(s) is not included in {n_m}, i.e., there exists n∈Π(σ) ∩Π^-1(s) such that n≠ n_m. Let j<m and k>m be such that n_j = n_k = n. Since n_j ∈Π(σ), n_j∈ N_n_ℓ for all l ∈{j+1, …, m}. Similarly, since n_k ∈Π^-1(s), n_k∈ N_n_ℓ for all l ∈{m, …, k-1}. Therefore, n_j=n_k and n_j∈ N_n_ℓ for all l ∈{j+1, …, k-1}, which proves that σ⊕ s is not ng-feasible. Conversely, if σ⊕ s is ng-feasible, then Π(σ) ∩Π^-1(s) ⊆{s}. Let us now assume that Π(σ) ∩Π^-1(s) ⊆{n_m}, and show that σ⊕ s is ng-feasible. We prove by induction over i = 0, 1, …, M-m that {n_0, …, n_m+i} is ng-feasible. For i = 0, we know that {n_0, …, n_m} is ng-feasible because σ is ng-feasible by assumption. Suppose now that {n_0, …, n_m+i} is ng-feasible. We distinguish two cases regarding n_m+i+1: – If n_m+i+1∈ N_n_m∩…∩ N_n_m+i, then n_m+i+1∈Π^-1(s). Since s is an ng-feasible subpath, we know that n_m+i+1≠ n_m+j for all j ∈{0, …, i}. In particular, n_m+i+1≠ n_m. Since by assumption Π(σ) ∩Π^-1(s) ⊆{n_m}, this implies that n_m+i+1∉Π(σ). We derive, using Lemma <ref>: n_m+i+1 ∉Π({n_0, …, n_m}) n_m+i+1 ∉( Π({n_0, …, n_m}) ∩N_n_m+1 ) ∪{n_m+1} = Π({n_0, …, n_m+1}) … n_m+i+1 ∉( Π({n_0, …, n_m+i-1}) ∩N_n_m+i ) ∪{n_m+i} = Π({n_0, …, n_m+i}) – If n_m+i+1∉ N_n_m∩…∩ N_n_m+i, let j∈{0,…,m} such that n_m+i+1∉ N_n_m+j but n_m+i+1∈ N_n_m+j+1∩…∩ N_n_m+i. Since s is an ng-feasible subpath, n_m+i+1≠ n_m+j, n_m+i+1≠ n_m+j+1, …, n_m+i+1≠ n_m+i. Moreover, by Proposition <ref>, n_m+i+1∉Π({n_0, …, n_m+j). We derive, using Lemma <ref>: n_m+i+1 ∉Π({n_0, …, n_m+j}) n_m+i+1 ∉( Π({n_0, …, n_m+j}) ∩N_n_m+j+1 ) ∪{n_m+j+1} = Π({n_0, …, n_m+j+1}) … n_m+i+1 ∉( Π({n_0, …, n_m+i-1}) ∩N_n_m+i ) ∪{n_m+i} = Π({n_0, …, n_m+i}) In both cases, we have that n_m+i+1∉Π({n_0, …, n_m+i}). By Proposition <ref>, this implies that { n_0, …, n_m+i+1} is ng-feasible. This completes the induction, and proves that σ⊕ s is ng-feasible. We next characterize Π(σ⊕ s). By the definition of Π(·): Π(σ⊕ s) = n_r | n_r ∈⋂_ρ = r + 1^M N_n_ρ, r ∈{0, …, m}∪ n_r | n_r ∈⋂_ρ = r + 1^M N_n_ρ, r ∈{m, …, M-1}∪{n_M } = n_r | n_r ∈⋂_ρ = r + 1^M N_n_ρ, r ∈{0, …, m}∪Π(s) (Lemma <ref>) = ( ( n_r | n_r ∈⋂_ρ = r + 1^m N_n_ρ, r ∈{0, …, m-1}∪{ n_m }) ∩⋂_ρ = m^M N_n_ρ) ∪Π(s) = ( Π(σ) ∩Ω(s) ) ∪Π(s) This completes the proof. Proof of Proposition <ref>. We now show that these choices of domination criteria satisfy Properties <ref> and <ref> for (()). The proof for Properties <ref> and <ref> is identical to Proposition <ref>. Proof of Property <ref>. Let s_1, s_2 be partial ng-feasible subpaths such that s_1 s_2. In particular, Π(s_1) ⊆Π(s_2), Ω(s_1) ⊆Ω(s_2), and Π^-1(s_1) ⊆Π^-1(s_2). Let a = (s_1, ) be a common extension of subpaths s_1 and s_2. Suppose that s_2 ⊕ a is ng-feasible with respect to . This implies that ∉Π(s_2) by Proposition <ref>. Since Π(s_1) ⊆Π(s_2), this implies that ∉Π(s_1) and that s_1 ⊕ a is also ng-feasible with respect to , also by Proposition <ref>. 
Moreover, per Lemma <ref>: Π(s_1 ⊕ a) = (Π(s_1) ∩ N_) ∪{}⊆ (Π(s_2) ∩ N_) ∪{} = Π(s_2 ⊕ a) Ω(s_1 ⊕ a) = Ω(s_1) ∩ N_⊆Ω(s_2) ∩ N_ = Ω(s_2 ⊕ a) Π^-1(s_1 ⊕ a) = Π^-1(s_1) ∪ ( {}∩Ω(s_1) ) ⊆Π^-1(s_2) ∪ ( {}∩Ω(s_2) ) = Π^-1(s_2 ⊕ a) All other parts of the proof are identical to that in Proposition <ref> for Property <ref>. Proof of Property <ref><ref>. Let σ_1, σ_2 be ng-feasible subpath sequences such that σ_1 σ_2. In particular, Π(σ_1) ⊆Π(σ_2). Let s be a ng-feasible subpath that extends σ_1 and σ_2. Suppose that σ_2 ⊕ s is ng-feasible with respect to . This implies that Π(σ_2) ∩Π^-1(s) ⊆{s} by Lemma <ref>. Since Π(σ_1) ⊆Π(σ_2), this implies that Π(σ_1) ∩Π^-1(s) ⊆{s}, and that σ_1 ⊕ s is also ng-feasible with respect to , also by Lemma <ref>. Moreover, still using Lemma <ref>, we have: Π(σ_1 ⊕s) = Π(s) ∪(Π(σ_1) ∩Ω(s)) ⊆Π(s) ∪(Π(σ_2) ∩Ω(s)) = Π(σ_2 ⊕s) All other parts of the proof are identical to that in Proposition <ref> for Property <ref><ref>. Proof of Property <ref><ref>. Let s_1, s_2 be partial ng-feasible subpaths such that s_1 s_2. In particular, Π(s_1) ⊆Π(s_2), Ω(s_1) ⊆Ω(s_2), and Π^-1(s_1) ⊆Π^-1(s_2). Let σ be a ng-feasible subpath sequence such that s_1 and s_2 both extend σ. Suppose that σ⊕ s_2 is ng-feasible with respect to . This implies that Π(σ) ∩Π^-1(s_2) ⊆{s_1} by Lemma <ref>. Since Π^-1(s_2) ⊆Π^-1(s_1), Π(σ) ∩Π^-1(s_1) ⊆{s}, and σ⊕ s_1 is also ng-feasible with respect to , also by Lemma <ref>. Moreover, still using Lemma <ref>, we have: Π(σ⊕s_1) = Π(s_1) ∪(Π(σ) ∩Ω(s_1)) ⊆Π(s_2) ∪(Π(σ) ∩Ω(s_2)) = Π(σ⊕s_2) All other parts of the proof are identical to that in Proposition <ref> for Property <ref><ref>. §.§.§ ng-feasibility for ERSP-Het. Proposition <ref> provides domination criteria for ERSP-Het that preserve ng-feasibility, combining the labels for (()) derived in Proposition <ref> and the labels for ng-feasibility derived in Proposition <ref>. Properties <ref>, <ref>, <ref> and <ref> for (()) are satisfied with: s = ( s, t^s, b^s, {i ∈Π(s)}_i ∈_C, {i ∈Ω(s)}_i ∈_C, {i ∈Π^-1(s)}_i ∈_C) σ = ( σ, σ, -σ, { -Z_d(σ) }_1,⋯,D-1, {i ∈Π(σ)}_i ∈_C) An extension s ⊕ a of an ng-feasible partial subpath s is ng-feasible if and only if ∉Π(s), where a = (s, ). Similarly, an extension σ⊕ s of an ng-feasible subpath sequence σ is ng-feasible if and only if Π(σ) ∩Π^-1(s) ⊆{s}. The updates are identical to Propositions <ref> and <ref>. § PROOFS IN SECTION <REF> §.§.§ Preliminaries. The lm-SRI coefficients α_(S, M, )(U) for a node sequence U were introduced by <cit.> through Algorithm <ref>. This procedure gives the same quantity as in Definition <ref>: when n_i ∉ M_q, α is reset to 0; when i ∈ I_ℓ, α tracks frac( ∑_i ∈ I_ℓn_i ∈ S_q w_n_i) and the integer part is added to α. Therefore, we get upon termination: α_(S, M, )(p) = ∑_ℓ=1^r⌊∑_i ∈ I_ℓn_i ∈ S_q w_n_i⌋ Again, all coefficients are a function of the node sequence, so we define them equivalently as a function of a path, a subpath or a node sequence. Moreover, we replace the parametrization in S, M and by a parametrization in the cut index q (which implies S_q, M_q and ^q). Lemma <ref> provides a useful expression of the forward and backward lm-SRI resources: Consider a subpath s with node sequence U(s) = { n_0, …, n_m}, and a cut q with parameters S_q ⊆ M_q and ^q. The forward and backward lm-SRI resources satisfy: s = frac( ∑_i=0^m n_i ∈ S_q( ∏_j=i+1^m n_j ∈ M_q) w^q_n_i) s = frac( ∑_i=0^m n_i ∈ S_q( ∏_j=0^i-1n_j ∈ M_q) w^q_n_i) Proof of Lemma <ref>. Let I_1, …, I_r be defined as in Definition <ref>. 
If n_m ∉ M_q, both quantities in the first equation are equal to 0. If n_m ∈ M_q, note that ∏_j=i+1^m n_j ∈ M_q=1 if and only if n_i+1,⋯,n_m∈ M_q; if in addition n_i∈ S_q, then n_i∈ M_q and therefore i∈ I_r. Therefore: frac ( ∑_i=0^m n_i ∈S_q ( ∏_j=i+1^m n_j ∈M_q ) w^q_n_i ) = frac ( ∑_i=0^m n_i ∈S_q i ∈I_r w^q_n_i ) = frac ( ∑_i ∈I_r n_i ∈S_q w^q_n_i ) = s. We proceed similarly for the backward lm-SRI resource. If n_0 ∉ M_q, both quantities in the second equation are equal to 0. Otherwise: frac ( ∑_i=0^m n_i ∈S_q ( ∏_j=0^i-1 n_j ∈M_q ) w^q_n_i ) = frac ( ∑_i=0^m n_i ∈S_q i ∈I_1 w^q_n_i ) = frac ( ∑_i ∈I_1 n_i ∈S_q w^q_n_i ) = s. §.§.§ Proof of Proposition <ref>. We first show that Equations (<ref>)–(<ref>) define valid updates. We then show that the revised domination criteria given in Equations (<ref>) and (<ref>) satisfy Properties <ref>–<ref>. 1. Equations (<ref>)–(<ref>) define valid updates. Consider a subpath s and an arc extension a such that U(s) = {n_0, …, n_m} and a = (n_m, n_m+1). Let us first prove that Equations (<ref>)–(<ref>) are satisfied: s ⊕a = {n_0, …, n_m+1} = frac ( ∑_i=0^m+1 n_i ∈S_q ( ∏_j=i+1^m+1 n_j ∈M_q ) w^q_n_i ) (by Lemma <ref>) = frac ( n_m+1 ∈M_q ∑_i=0^m n_i ∈S_q ( ∏_j=i+1^m n_j ∈M_q ) w^q_n_i + n_m+1 ∈S_q w^q_n_m+1 ) = frac ( n_m+1 ∈M_q s + n_m+1 ∈S_q w^q_n_m+1 ) = 0 if n_m+1 ∉M_q frac( s + n_m+1 ∈S_q w^q_n_m+1 ) if n_m+1 ∈M_q s ⊕a = {n_0, …, n_m+1} = frac ( ∑_i=0^m+1 n_i ∈S_q ( ∏_j=0^i-1 n_j ∈M_q ) w^q_n_i ) (by Lemma <ref>) = frac ( ∑_i=0^m n_i ∈S_q ( ∏_j=0^i-1 n_j ∈M_q ) w^q_n_i + n_m+1 ∈S_q ( ∏_j=0^m n_j ∈M_q ) w^q_n_m+1 ) = frac ( s + n_m+1 ∈S_q U(s) ⊆M_q w^q_n_m+1 ) As a corollary, we obtain Equation (<ref>) by noting that the reduced cost contribution s ⊕ a is decremented by λ_q for all cuts q such that s ⊕ a hits 1, i.e., if s + w_n_m+1^q ≥ 1 and ∈ S_q. Next, consider a subpath sequence σ and a subpath s such that U(σ) = {n_0, …, n_m} and U(s) = {n_m, …, n_m'}. The following decomposition proves Equation (<ref>): σ⊕s = frac ( ∑_i=0^m' n_i ∈S_q ( ∏_j=i+1^m' n_j ∈M_q ) w^q_n_i ) = frac [ ( ∏_j=m^m' n_j ∈M_q )·∑_i=0^m-1 n_i ∈S_q ( ∏_j=i+1^m-1 n_j ∈M_q ) w^q_n_i + ∑_i=m^m' n_i ∈S_q( ∏_j=i+1^m' n_j ∈M_q ) w^q_n_i ] = frac [ ( ∏_j=m^m' n_j ∈M_q )·∑_i=0^m n_i ∈S_q ( ∏_j=i+1^m n_j ∈M_q ) w^q_n_i + ∑_i=m^m' n_i ∈S_q( ∏_j=i+1^m' n_j ∈M_q ) w^q_n_i ] = frac ( σ U(s) ⊆M_q + s ) The first equality follows from Lemma <ref>. The second one comes from re-arranging the terms. In the third one, the update of the first “m-1” is due to the fact that n_m∉ S_q because n_m∉_T, and the update of the second “m-1” is due to the fact that if the first product is equal to 1 then n_m∈ M_q. The last equality stems from the additivity of the frac(·) function. Turning to Equation (<ref>), define the sets I_1, …, I_M for U(σ⊕ s) as in Definition <ref>. Let us focus on the update in the last term, namely λ_q σ + s≥ 1. Specifically, we prove that: α_q(σ⊕ s)=α_q(σ) + α_q(s) + σ + s≥ 1 We distinguish two cases: * If n_m ∉ M_q, let L be the largest index such that I_L ⊆ U(σ) (0 if none exists). Then I_L+1⊆ U(s) since n_m ∉ M_q. Therefore, we have: α_q(σ⊕s) = ∑_ℓ=1^r ⌊∑_i ∈I_ℓ n_i ∈S_q w_n_i ⌋ = ∑_ℓ=1^L ⌊∑_i ∈I_ℓ n_i ∈S_q w_n_i ⌋+ ∑_ℓ=L+1^r ⌊∑_i ∈I_ℓ n_i ∈S_q w_n_i ⌋ = α_q(σ) + α_q(s) Moreover, since n_m ∉ M_q, we have that σ=s=0, which proves the desired property. * If n_m ∈ M_q, there exists ℓ such that m ∈ I_L. Recall that n_m∉ S_q because n_m∉_T. 
We have: α_q(σ) = ∑_ℓ=1^L-1 ⌊∑_i ∈I_ℓ n_i ∈S_q w_n_i ⌋+ ⌊∑_i ∈I_L: i ≤m n_i ∈S_q w_n_i ⌋ α_q(s) = ⌊∑_i ∈I_L: i ≥m n_i ∈S_q w_n_i ⌋+ ∑_ℓ=L+1^r ⌊∑_i ∈I_ℓ n_i ∈S_q w_n_i ⌋ α_q(σ⊕s) = ∑_ℓ=1^r ⌊∑_i ∈I_ℓ n_i ∈S_q w_n_i ⌋ = ∑_ℓ=1^L-1 ⌊∑_i ∈I_ℓ n_i ∈S_q w_n_i ⌋+ ∑_ℓ=L+1^r ⌊∑_i ∈I_ℓ n_i ∈S_q w_n_i ⌋ + ⌊∑_i ∈I_L: i ≤m n_i ∈S_q w_n_i⌋+ ⌊∑_i ∈I_L: i ≥m n_i ∈S_q w_n_i ⌋ + frac( ∑_i ∈I_L: i ≤m n_i ∈S_q w_n_i) + frac(∑_i ∈I_L: i ≥m n_i ∈S_q w_n_i ) ≥1 = α_q(σ) + α_q(s) + σ + s ≥1 This proves that α_q(σ⊕ s)=α_q(σ) + α_q(s) + σ + s≥ 1.Therefore, the number of decrements of λ_q from U(σ⊕ s) is equal to the number of decrements from U(σ) and U(s), and an extra one if σ + s≥ 1. We conclude that: σ⊕ s = σ + δ·τ + s - ∑_q ∈λ_q σ + s≥ 1 2. The domination criteria given in Equations (<ref>) and (<ref>) satisfy Properties <ref>–<ref>. Property <ref>. Let s_1, s_2 ∈, and let a = (s_1, ) be a common extension. We show that s_1 s_2 implies s_1 ⊕ a s_2 ⊕ a. We partition according to Table <ref>. Note that U(s_1) ⊆ M_q for q∈_2∪_4, so s_1 = s_1 and we denote them as s_1 for convenience. Similarly, U(s_2) ⊆ M_q for _3 and _4, so s_1 = s_2 and we denote them as s_2 for convenience. We introduce a similar partition upon the subpath extension with prime superscipts, e.g.: _1' := q | U(s_1 ⊕ a) ⊈M_q, U(s_2 ⊕ a) ⊈M_q, _2' := q | U(s_1 ⊕ a) ⊆ M_q, U(s_2 ⊕ a) ⊈M_q, _3' := q | U(s_1 ⊕ a) ⊈M_q, U(s_2 ⊕ a) ⊆ M_q, _4' := q | U(s_1 ⊕ a) ⊆ M_q, U(s_2 ⊕ a) ⊆ M_q. By domination, we have c^s_1 = s_1 - ∑_q ∈ λ_q U(s_1) ⊈M_q, U(s_2) ⊈M_q ( s_1 > s_2 + s_1 > s_2 ) - ∑_q ∈ λ_q U(s_1) ⊈M_q, U(s_2) ⊆M_q ( s_1 > s_2, s_1 > s_2, s_1 > s_2 + s_2 - s_1 ≤s_2, s_1 ≤s_2, s_1 ≤s_2 + s_2 - 1 + 1 ) - ∑_q ∈ λ_q U(s_1) ⊆M_q, U(s_2) ⊈M_q ( s_1 > s_2, s_1 > s_2, s_1 + s_1 - 1 > s_2 - s_1 ≤s_2, s_1 ≤s_2, s_1 + s_1 ≤s_2 + 1 ) - ∑_q ∈ λ_q U(s_1) ⊆M_q, U(s_2) ⊆M_q ( s_1 > s_2 ) ≤s_2 (Equation (<ref>)) With these notations, the revised domination criterion satisfies: c^s_1 ⊕a = s_1 ⊕a - ∑_q ∈'_1 λ_q ( s_1 ⊕a > s_2 ⊕a + q ∈'_1a ) - ∑_q ∈_2' λ_q ( 2 q ∈'_2a + q ∈'_2b + q ∈'_2c + q ∈'_2d + q ∈'_2e ) - ∑_q ∈_3' λ_q ( 2 q ∈'_3a + q ∈'_3b + q ∈'_3c + q ∈'_3d + q ∈'_3e ) - ∑_q ∈_4' λ_q q ∈'_4a = s_1 + c(s_1, ) - ∈_T ν_ - ∈_D μ_ - ∑_q ∈'_1 λ_q ( s_1 ⊕a > s_2 ⊕a + q ∈'_1a ) - ∑_q ∈_2' λ_q ( 2 q ∈'_2a + q ∈'_2b + q ∈'_2c + q ∈'_2d + q ∈'_2e ) - ∑_q ∈_3' λ_q ( 2 q ∈'_3a + q ∈'_3b + q ∈'_3c + q ∈'_3d + q ∈'_3e ) - ∑_q ∈_4' λ_q q ∈'_4a - ∑_q ∈ λ_q s_1 + w_^q ≥1 ∈S_q (by Equation (<ref>)) ≤s_2 + c(s_2, ) - ∈_T ν_ - ∈_D μ_ - ∑_q ∈'_1 λ_q ( s_1 ⊕a > s_2 ⊕a + q ∈'_1a ) - ∑_q ∈_2' λ_q ( 2 q ∈'_2a + q ∈'_2b + q ∈'_2c + q ∈'_2d + q ∈'_2e ) - ∑_q ∈_3' λ_q ( 2 q ∈'_3a + q ∈'_3b + q ∈'_3c + q ∈'_3d + q ∈'_3e ) - ∑_q ∈_4' λ_q q ∈'_4a - ∑_q ∈ λ_q s_1 + w_^q ≥1 ∈S_q + ∑_q ∈_1 λ_q ( s_1 > s_2 + q ∈_1a ) + ∑_q ∈_2 λ_q ( 2 q ∈_2a + q ∈_2b + q ∈_2c + q ∈_2d + q ∈_2e ) + ∑_q ∈_3 λ_q ( 2 q ∈_3a + q ∈_3b + q ∈_3c + q ∈_3d + q ∈_3e ) + ∑_q ∈_4 λ_q q ∈_4a (by Equation (<ref>), since s_1 s_2) = s_2 ⊕a - ∑_q ∈'_1 λ_q ( s_1 ⊕a > s_2 ⊕a + q ∈'_1a ) + ∑_q ∈_1 λ_q ( s_1 > s_2 + q ∈_1a ) - ∑_q ∈_2' λ_q ( 2 q ∈'_2a + q ∈'_2b + q ∈'_2c + q ∈'_2d + q ∈'_2e ) + ∑_q ∈_2 λ_q ( 2 q ∈_2a + q ∈_2b + q ∈_2c + q ∈_2d + q ∈_2e ) - ∑_q ∈_3' λ_q ( 2 q ∈'_3a + q ∈'_3b + q ∈'_3c + q ∈'_3d + q ∈'_3e ) + ∑_q ∈_3 λ_q ( 2 q ∈_3a + q ∈_3b + q ∈_3c + q ∈_3d + q ∈_3e ) - ∑_q ∈_4' λ_q q ∈'_4a + ∑_q ∈_4 λ_q q ∈_4a - ∑_q ∈ λ_q s_1 + w_^q ≥1 ∈S_q + ∑_q ∈ λ_q s_2 + w_^q ≥1 ∈S_q (by Equation (<ref>)) = s_2 ⊕a+∑_q∈f(q), where f(q) is defined as the sum of the 10 terms in the second-to-last expression. 
It remains to show that this expression is not greater than s_2 ⊕ a. We further partition into ^A = q | ∉ M_q, ^B = q | ∈ M_q ∖ S_q and ^C = q | ∈ S_q, and show that ∑_q ∈^A f(q) ≤ 0, ∑_q ∈^B f(q) = 0, and ∑_q ∈^C f(q) = 0 . * For q ∈^A, ∉ M_q implies that q ∈'_1, that s_1 ⊕ a = s_2 ⊕ a = 0, and that s_1 ⊕ a = s_1 and s_2 ⊕ a = s_2. Therefore we obtain: ∑_q ∈^A f(q) = - ∑_q ∈^A ∩'_1 λ_q s_1 > s_2 + ∑_q ∈^A ∩_1 λ_q ( s_1 > s_2 + q ∈_1a ) + ∑_q ∈^A ∩_2 λ_q ( 2 q ∈_2a + q ∈_2b + q ∈_2c + q ∈_2d + q ∈_2e ) + ∑_q ∈^A ∩_3 λ_q ( 2 q ∈_3a + q ∈_3b + q ∈_3c + q ∈_3d + q ∈_3e ) + ∑_q ∈^A ∩_4 λ_q q ∈_4a For each q ∈^A such that s_1≤s_2, the first term is equal to zero and all subsequent terms are non-positive (because λ_q ≤ 0 for all q∈). Consider q ∈^A such that s_1 > s_2. If q ∈_1, then the first two terms sum up to a non-positive quantity; if q ∈_2, then q ∈_2a∪_2b∪_2c and the first and third terms sum up to a non-positive quantity; if q ∈_3, then q ∈_3a∪_3b∪_3c and the first and fourth terms sum up to a non-positive quantity; and if q ∈_4, then q ∈_4a and the first and last terms sum up to a non-positive quantity. Leveraging again the fact that λ_q ≤ 0 for all q ∈, this proves that: ∑_q ∈^A f(q) ≤ 0 * For q ∈^B, ∈ M_q ∖ S_q implies that _1' ∩^B = _1∩^B, ⋯, _4' ∩^B = _4∩^B (and the same holds for the sub-partitions), and that s_1 ⊕ a = s_1, s_1 ⊕ a = s_1, s_2 ⊕ a = s_2 and s_2 ⊕ a = s_2. Therefore, the first eight terms of f(q) cancel each other out, and the last two terms are equal to zero, hence: ∑_q ∈^B f(q) = 0 * For q ∈^C, ∈ S_q⊆ M_q implies that _1' ∩^C = _1∩^C, ⋯, _4' ∩^C = _4∩^C. For q ∈'_1 ∩^C, we have s_1 ⊕ a = s_1 and s_2 ⊕ a = s_2 per Equation (<ref>). Therefore: - ∑_q ∈'_1∩^Cλ_q ( s_1 ⊕ a > s_2 ⊕ a) + ∑_q ∈_1∩^Cλ_q ( s_1 > s_2) = 0 Moreover: ( - s_1 + w_^q ≥1 + s_2 + w_^q ≥1 ) = -1 if 1 - s_1≤ w^q_ < 1 - s_2; +1 if 1 - s_2≤ w^q_< 1 - s_1; 0 otherwise. Now, note that, per Equation (<ref>): ( - q ∈'_1a + q ∈_1a) = +1 if 1 - s_1≤ w^q_< 1 - s_2; -1 if 1 - s_2≤ w^q_< 1 - s_1; 0 otherwise. We obtain: ∑_q ∈^C ∩_1 f(q) = 0 Proceeding similarly but omitting details for conciseness, we have: ∑_q ∈^C ∩_4 f(q) = 0 Turning to _2, we have, using Equations (<ref>) and (<ref>): s_1 ⊕ a = s_1 ⊕ a = frac(s_1 + w^q_), s_2 ⊕ a = frac(s_2 + w^q_) and s_2 ⊕ a = s_2. 
We define the following sub-sub-partition, based on the value of w_^q in [0, 1): partition of _2a: _2a1: w_^q ∈ [0, 1 - s_1) _2a2: w_^q ∈ [1 - s_1, 1 + s_2 - s_1) _2a3: w_^q ∈ [1 + s_2 - s_1, 1 - s_2) _2a4: w_^q ∈ [1 - s_2, 1) partition of _2b: _2b1: w_^q ∈ [0, 1 - s_1) _2b2: w_^q ∈ [1 - s_1, 1 - s_2) _2b3: w_^q ∈ [1 - s_2, 1 + s_2 - s_1) _2b4: w_^q ∈ [1 + s_2 - s_1, 1) partition of _2c: _2c1: w_^q ∈ [0, 1 - s_2) _2c2: w_^q ∈ [1 - s_2, 1 - s_1) _2c3: w_^q ∈ [1 - s_1, 1 + s_2 - s_1) _2c4: w_^q ∈ [1 + s_2 - s_1, 1) partition of _2d: _2d1: w_^q ∈ [0, s_2 - s_1) _2d2: w_^q ∈ [s_2 - s_1, 1 - s_1) _2d3: w_^q ∈ [1 - s_1, 1 - s_2) _2d4: w_^q ∈ [1 - s_2, 1) partition of _2e: _2e1: w_^q ∈ [0, s_2 - s_1) _2e2: w_^q ∈ [s_2 - s_1, 1 - s_2) _2e3: w_^q ∈ [1 - s_2, 1 - s_1) _2e4: w_^q ∈ [1 - s_1, 1) partition of _2f: _2f1: w_^q ∈ [0, 1 - s_2) _2f2: w_^q ∈ [1 - s_2, s_2 - s_1) _2f3: w_^q ∈ [s_2 - s_1, 1 - s_1) _2f4: w_^q ∈ [1 - s_1, 1) With this notation, we obtain: 2 ( - q ∈'_2a + q ∈_2a) = 2 q ∈_2a2 + 2 q ∈_2a3 - 2 q ∈_2c2 - 2 q ∈_2e3 ( - q ∈'_2b + q ∈_2b) = q ∈_2b2 + q ∈_2b3 - q ∈_2d2 - q ∈_2f3 ( - q ∈'_2c + q ∈_2c) = q ∈_2c2 + q ∈_2c3 - q ∈_2e2 - q ∈_2a3 ( - q ∈'_2d + q ∈_2d) = q ∈_2d2 + q ∈_2d3 - q ∈_2f2 - q ∈_2b3 ( - q ∈'_2e + q ∈_2e) = q ∈_2e2 + q ∈_2e3 - q ∈_2a2 - q ∈_2c3 We can also re-write: ( - s_1 + w_^q ≥ 1 + s_2 + w_^q ≥ 1) = q ∈_2c2 + q ∈_2e3 + q ∈_2f2 + q ∈_2f3 - q ∈_2a2 - q ∈_2a3 - q ∈_2b2 - q ∈_2d3 We obtain: ∑_q ∈^C ∩_2 f(q) = 0 We proceed similarly but omit details for conciseness, and derive: ∑_q ∈^C ∩_3 f(q) = 0 This completes the proof of the statement. All other parts are identical to the proof in Proposition <ref> for Property <ref>. Property <ref><ref> Let σ_1, σ_2 be subpath sequences and s ∈() be a common subpath extension. We show that σ_1 σ_2 implies σ_1 ⊕ s σ_2 ⊕ s. Define τ_1 := b^s - σ_1 and τ_2 := b^s - σ_1. σ_1 ⊕s - ∑_q ∈ λ_q σ_1 ⊕s > σ_2 ⊕s = σ_1 + δ·τ_1 + s - ∑_q ∈ λ_q σ_1 + s ≥1 - ∑_q ∈ λ_q σ_1 ⊕s > σ_2 ⊕s ≤ σ_2 + δ·τ_2 + s + ∑_q ∈ λ_q σ_1 > σ_2 - ∑_q ∈ λ_q σ_1 + s ≥1 - ∑_q ∈ λ_q σ_1 ⊕s > σ_2 ⊕s = σ_2 ⊕s + ∑_q ∈ λ_q σ_2 + s ≥1 + ∑_q ∈ λ_q σ_1 > σ_2 - ∑_q ∈ λ_q σ_1 + s ≥1 - ∑_q ∈ λ_q σ_1 ⊕s > σ_2 ⊕s, where the first and last equalities follow from Equation (<ref>) and the equality comes from Equation (<ref>) (since σ_1 σ_2). We distinguish two cases: * If U(s) ⊈M_q, then σ_1 ⊕ s = σ_2 ⊕ s = s (Equation (<ref>)). Moreover, if σ_1 + s≥ 1, then σ_2 + s≥ 1 and/or σ_1 > σ_2. Since λ_q ≤ 0, this implies: σ_1 ⊕ s - ∑_q ∈λ_q σ_1 ⊕ s > σ_2 ⊕ s ≤ σ_2 ⊕ s + ∑_q ∈λ_q σ_2 + s≥ 1 + ∑_q ∈λ_q σ_1 > σ_2 - ∑_q ∈λ_q σ_1 + s≥ 1 ≤ σ_2 ⊕ s * If U(s) ⊆ M_q, then s = s (Lemma <ref>) and we denote them as s for convenience. Per Equation (<ref>), we have σ_1 ⊕ s = frac(σ_1 + s) and σ_2 ⊕ s = frac(σ_2 + s). We obtain: σ_2 + s≥ 1 - σ_1 + s≥ 1 = 1 if 1 - σ_2≤s < 1 - σ_1; -1 if 1 - σ_1≤s < 1 - σ_2; 0 otherwise. σ_1 > σ_2 - σ_1 ⊕ s > σ_2 ⊕ s = -1 if 1 - σ_2≤s < 1 - σ_1; 1 if 1 - σ_1≤s < 1 - σ_2; 0 otherwise. We conclude: σ_1 ⊕s - ∑_q ∈ λ_q σ_1 ⊕s > σ_2 ⊕s ≤ σ_2 ⊕s + ∑_q ∈ λ_q σ_2 + s ≥1 + ∑_q ∈ λ_q σ_1 > σ_2 - ∑_q ∈ λ_q σ_1 + s ≥1 - ∑_q ∈ λ_q σ_1 ⊕s > σ_2 ⊕s = σ_2 ⊕s All other parts are identical to the proof in Proposition <ref> for Property <ref><ref>. Property <ref><ref> Let σ be a subpath sequence and s_1, s_2 subpaths that extend σ. We show that s_1 s_2 implies that σ⊕ s_1 σ⊕ s_2. We consider the partition of from Table <ref>. 
Defining τ_1 := b^s_1 - σ and τ_2 := b^s_2 - σ, we have, using Equations (<ref>) and (<ref>): σ⊕s_1 - ∑_q ∈ λ_q σ⊕s_1 > σ⊕s_2 = σ + δ·τ_1 + s_1 - ∑_q ∈ λ_q σ + s_1 ≥1 - ∑_q ∈ λ_q σ⊕s_1 > σ⊕s_2 ≤σ + δ·τ_2 + s_2 + ∑_q ∈_1 λ_q ( + s_1 > s_2 + s_1 > s_2 ) + ∑_q ∈_2 λ_q ( 2 q ∈_2a + q ∈_2b + q ∈_2c + q ∈_2d + q ∈_2e ) + ∑_q ∈_3 λ_q ( 2 q ∈_3a + q ∈_3b + q ∈_3c + q ∈_3d + q ∈_3e ) + ∑_q ∈_4 λ_q q ∈_4a - ∑_q ∈ λ_q σ + s_1 ≥1 - ∑_q ∈ λ_q σ⊕s_1 > σ⊕s_2 = σ⊕s_2 + ∑_q ∈ λ_q σ + s_2 ≥1 + ∑_q ∈_1 λ_q ( + s_1 > s_2 + s_1 > s_2 ) + ∑_q ∈_2 λ_q ( 2 q ∈_2a + q ∈_2b + q ∈_2c + q ∈_2d + q ∈_2e ) + ∑_q ∈_3 λ_q ( 2 q ∈_3a + q ∈_3b + q ∈_3c + q ∈_3d + q ∈_3e ) + ∑_q ∈_4 λ_q q ∈_4a - ∑_q ∈ λ_q σ + s_1 ≥1 - ∑_q ∈ λ_q σ⊕s_1 > σ⊕s_2 = σ⊕s_2 + ∑_q ∈ g(q), where g(q) is defined as the sum of the 7 terms in the second-to-last expression. It remains to show that ∑_q∈g(q)≤ 0. * For q ∈_1, we have σ⊕ s_1 = s_1 and σ⊕ s_2 = s_2 (Equation (<ref>)). Thus: ∑_q ∈_1 g(q) = ∑_q ∈_1 λ_q σ + s_2 ≥1 - ∑_q ∈_1 λ_q σ + s_1 ≥1 + ∑_q ∈_1 λ_q s_1 > s_2 Note that σ + s_2≥ 1-σ + s_1≥ 1 = -1 iff 1-s_1≤σ < 1 - s_2, which implies that s_2 < s_1 and therefore that s_1 > s_2. This directly implies: ∑_q ∈_1 g(q) ≤ 0 * For q ∈_2, we have s_1 = s_1 = s_1 (Lemma <ref>), σ⊕ s_1 = frac(σ + s_1) and σ⊕ s_2 = s_2 (Equation (<ref>)). Hence: ∑_q ∈_2 g(q) = ∑_q ∈_2λ_q σ + s_2≥ 1 + ∑_q ∈_2λ_q ( 2 q ∈_2a + q ∈_2b + q ∈_2c + q ∈_2d + q ∈_2e) - ∑_q ∈_2λ_q σ + s_1≥ 1 - ∑_q ∈_2λ_q frac(σ + s_1) > s_2 Clearly, ∑_q∈_2ag(q)≤ 0. The following conditions are equivalent to σ + s_2≥ 1 - σ + s_1≥ 1 - frac(σ + s_1) > s_2 = -2: σ < 1 - s_2 σ≥ 1 - s_1 σ > 1 + s_2 - s_1 This implies that s_1 > s_2; s_1 > s_2; and s_2 + s_2 < s_1, hence that q ∈_2a. Therefore, σ + s_2≥ 1 - σ + s_1≥ 1 - frac(σ + s_1) > s_2≥ -1 for all q ∈_2b∪_2c∪_2d∪_2e. Therefore: ∑_q ∈_2b∪_2c∪_2d∪_2e g(q) ≤ 0. Finally, for q∈_2f: – If σ + s_2 < 1, then σ + s_1 < 1 (because s_1≤s_2); and σ + s_1≤σ + s_2 + s_2 - 1 ≤s_2 (because s_1≤s_2 + s_2 - 1), so frac(σ + s_1) > s_2 = 0. – If σ + s_2≥ 1 and σ + s_1≥ 1, then σ + s_1≤σ + s_2≤ 1 + s_2 (because s_1≤s_2) so again frac(σ + s_1) > s_2 = 0. We obtain: ∑_q ∈_2f g(q) = 0. This concludes that ∑_q ∈_2 g(q) ≤ 0 * For q ∈_3, we have s_2 = s_2 = s_2, σ⊕ s_1 = s_1, and σ⊕ s_2 = frac(σ + s_2). Hence: ∑_q∈_3g(q)= ∑_q ∈_3λ_q σ + s_2≥ 1 + ∑_q ∈_3λ_q ( 2 q ∈_3a + q ∈_3b + q ∈_3c + q ∈_3d + q ∈_3e) - ∑_q ∈_3λ_q σ + s_1≥ 1 - ∑_q ∈_3λ_q s_1 > frac(σ+s_2) Using similar and symmetric arguments as in the case of q ∈_2, we show that σ + s_2≥ 1 - σ + s_1≥ 1 - s_1 > frac(σ+s_2) is at least equal to -1 for all q ∈_3b∪_3c∪_3d∪_3e, and equal to 0 for all q ∈_3f. Therefore: ∑_q ∈_3 g(q) ≤ 0 * Finally, for q ∈_4, we have s_1 = s_1 = s_1; s_2 = s_2 = s_2; σ⊕ s_1 = frac(σ + s_1) and σ⊕ s_2 = frac(σ + s_2). Hence: ∑_q∈_4g(q)= ∑_q ∈_4λ_q σ + s_2≥ 1 + ∑_q ∈_4λ_q q ∈_4a - ∑_q ∈_4λ_q σ + s_1≥ 1 - ∑_q ∈_4λ_q frac(σ+s_1) > frac(σ+s_2) Assume that σ + s_2 < 1 and σ + s_1≥ 1. Then, the last term is equal to zero because σ + s_1≤ 1 + σ≤ 1 + σ + s_2. This proves that ∑_q ∈_4a g(q) ≤ 0 Next, we show that ∑_q ∈_4b g(q) ≤ 0. Indeed, for any q∈_4b: – If σ + s_2 < 1, then σ + s_1 < 1 and σ + s_1≤σ + s_2 (because s_1≤s_2). – If σ + s_2≥ 1 and σ + s_1≥ 1, then σ + s_1 - 1 ≤σ + s_2 - 1 (because s_1≤s_2). This concludes that ∑_q ∈_4 g(q) ≤ 0 All other parts are identical to the proof of Proposition <ref> for Property <ref><ref>. §.§.§ lm-SRI cuts for ERSP-Het. 
Proposition <ref> provides domination criteria for ERSP-Het that preserve ng-feasibility and ensure consistency with the lm-SRI cuts, combining Proposition <ref>, Proposition <ref>, and Proposition <ref>. Properties <ref>, <ref>, <ref> and <ref> for (()) are satisfied with the domination criteria from Proposition <ref>, after replacing s_1≤s_2 in the definition of s_1 s_2 with Equation (<ref>) and the condition σ_1≤σ_2 in the definition of σ_1 σ_2 with Equation (<ref>). The updates are identical to Propositions <ref>, <ref> and <ref>, except that Equation (<ref>) is replaced by: σ⊕ s = σ + s + ∑_d=1^f-1δ_d · (τ_ω_d^new - τ_ω_d) + δ_f ·τ_m^new - ∑_q ∈λ_q α_q(σ) + α_q(s) ≥ 1 §.§ Proof of Theorem <ref>. We first show that Steps 1–3 of Algorithm <ref> returns an optimal solution to () in a finite number of iterations. Note that in Algorithm <ref>, the ng-neighborhoods ^t used across iterations are nested: for all t, ^t and ^t+1 satisfy N_i^t ⊆ N_i^t+1 for all i ∈, and the inclusion is strict for at least one i. Per Lemma <ref>, (^t) ⊇(^t+1) and ((^t)) ≤((^t+1)). Next, consider a non-elementary path p in the support of the incumbent solution z^t. That path admits a cycle {i, n_0, …, n_m, i} in U(p), with i ∈_T. Then, the addition of i to N_n_0, …, N_n_m results in p no longer being ng-feasible for ^t+1 and hence for any subsequent ng-neighborhood. Therefore, the quantity ∑_i ∈ |N_i^t| takes integer values, is strictly increasing as t increases, and is upper-bounded by ||^2. This proves that there exists some iteration t_1 at which all paths in the support of z^t_1 are elementary, so that z^t_1 is a feasible solution to () with optimal value (). Next, let t_1, t_2, … indicate the iterations in which Step 4 is reached. Since each cut separates z^t_k from the feasible set of the relaxation, the sequence of cuts defines a sequence of nested relaxations with objective values () = ^t_1≤^t_2≤…≤(). Furthermore, the family of lm-SRI cuts such that |S| = 3, S ⊆ M, and w_i = 1/2 is finite. Thus, Algorithm <ref> terminates in a finite number of iterations and its optimum satisfies () ≤≤().
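For concreteness, the neighborhood-augmentation step invoked in Steps 1–3 of Algorithm <ref> can be sketched in a few lines of Python. The data layout (hashable node identifiers, neighborhoods N_i stored as sets containing i itself) and the function names are illustrative assumptions rather than part of the formal development above.

```python
# Illustrative sketch of the ng-neighborhood bookkeeping used above.  Nodes are
# hashable identifiers; neighborhoods[i] is the ng-neighborhood N_i, a set that by
# convention contains i itself.  Names and data layout are assumptions.

def forward_ng_set(path, neighborhoods):
    """Forward ng-set Pi of a node sequence, via Pi(p + v) = (Pi(p) & N_v) | {v}."""
    pi = set()
    for v in path:
        pi = (pi & neighborhoods[v]) | {v}
    return pi

def is_ng_feasible(path, neighborhoods):
    """A sequence is ng-feasible iff no node is revisited while still in the ng-set."""
    pi = set()
    for v in path:
        if v in pi:
            return False
        pi = (pi & neighborhoods[v]) | {v}
    return True

def augment_neighborhoods(paths, neighborhoods):
    """For every cycle {i, n_0, ..., n_m, i} found in a path of the incumbent support,
    add i to N_{n_0}, ..., N_{n_m}.  Returns True if any neighborhood grew."""
    grew = False
    for path in paths:
        last_seen = {}
        for pos, v in enumerate(path):
            if v in last_seen:                        # a cycle on node i = v is closed
                for u in path[last_seen[v] + 1:pos]:  # interior nodes n_0, ..., n_m
                    if v not in neighborhoods[u]:
                        neighborhoods[u].add(v)
                        grew = True
            last_seen[v] = pos
    return grew
```

If the incumbent support contains a non-elementary but ng-feasible path, at least one neighborhood must grow (otherwise the repeated node would already belong to the forward ng-set and the path would not be ng-feasible), so each such call strictly increases ∑_i |N_i|; this mirrors the termination argument above.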
http://arxiv.org/abs/2407.02893v2
20240703081316
An Uncertainty-guided Tiered Self-training Framework for Active Source-free Domain Adaptation in Prostate Segmentation
[ "Zihao Luo", "Xiangde Luo", "Zijun Gao", "Guotai Wang" ]
cs.CV
[ "cs.CV" ]
UGTST for Active Source-free Domain Adaptation Z. Luo et al. ^1School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China ^2Shanghai AI Lab, Shanghai, China ^3School of Mathematical Sciences, Harbin Engineering University, Harbin, China. ^4Department of Computer Science and Engineering, The Chinese University of Hong Kong, Sha Tin, Hong Kong. guotai.wang@uestc.edu.cn An Uncertainty-guided Tiered Self-training Framework for Active Source-free Domain Adaptation in Prostate Segmentation Zihao Luo1,3 Xiangde Luo1,2 Zijun Gao4 Guotai Wang1,2 ======================================================================================================================== Z. Luo and X. Luo contributed equally to this work. § ABSTRACT Deep learning models have exhibited remarkable efficacy in accurately delineating the prostate for diagnosis and treatment of prostate diseases, but challenges persist in achieving robust generalization across different medical centers. Source-free Domain Adaptation (SFDA) is a promising technique to adapt deep segmentation models to address privacy and security concerns while reducing domain shifts between source and target domains. However, recent literature indicates that the performance of SFDA remains far from satisfactory due to unpredictable domain gaps. Annotating a few target domain samples is acceptable, as it can lead to significant performance improvement with a low annotation cost. Nevertheless, due to extremely limited annotation budgets, careful consideration is needed in selecting samples for annotation. Inspired by this, our goal is to develop Active Source-free Domain Adaptation (ASFDA) for medical image segmentation. Specifically, we propose a novel Uncertainty-guided Tiered Self-training (UGTST) framework, consisting of efficient active sample selection via entropy-based primary local peak filtering to aggregate global uncertainty and diversity-aware redundancy filter, coupled with a tiered self-learning strategy, achieves stable domain adaptation. Experimental results on cross-center prostate MRI segmentation datasets revealed that our method yielded marked advancements, with a mere 5% annotation, exhibiting an average Dice score enhancement of 9.78% and 7.58% in two target domains compared with state-of-the-art methods, on par with fully supervised learning. Code is available at: <https://github.com/HiLab-git/UGTST>. § INTRODUCTION Automatic and accurate delineation of the prostate plays an important role in assisting the diagnosis and treatment of prostate diseases. Despite that deep learning models have achieved remarkable performance on this task<cit.>, they often struggle to generalize well when confronted with gaps between training and testing data<cit.>. To tackle this issue, Domain Adaptation (DA) methods emerge as a promising solution<cit.>. Unsupervised Domain Adaptation (UDA) has demonstrated considerable efficacy by leveraging knowledge from labeled source domain data to facilitate segmentation on unlabeled target domain<cit.>. Moreover, given the constraints posed by privacy and security concerns, the unavailability of source domain necessitates extensive exploration of Source-Free Domain Adaptation (SFDA) techniques in medical image segmentation<cit.>. Nonetheless, owing to the unforeseeable domain discrepancies, both UDA and SFDA face challenges in achieving satisfactory results. 
Recently, a few works<cit.> have confirmed that a small amount of labeled images in the target domain can significantly improve the model's generalizability in the Semi-supervised Domain Adaptation (SSDA) scenario. Despite its performance, SSDA still requires a considerable amount of annotations for DA and still needs to access the source domain. In addition, SSDA overlooks the strategic selection of annotated samples and uses random sample selection with a given annotation budget, which may not select the most valuable images for annotation, leading to sub-optimal performance. In this work, we explore using active learning strategies for effectively selecting valuable samples for annotation<cit.>, which is promising to further reduce the annotation cost, leading to active SFDA (ASFDA). Presently, there is widespread exploration of active sample selection methods grounded in uncertainty-guided approaches<cit.>, feature space diversity<cit.>, and their amalgamation<cit.>. However, due to the complex and dense nature of inherent predictions, along with domain gaps leading to unreliable model features or predictions, conventional active learning methods are unsuitable for ASFDA scenarios. Moreover, as active samples are commonly assumed to harbor the most informative and representative data, they ideally should play a dominant role in the training process. However, this aspect has been neglected by current methods<cit.>. To mitigate the aforementioned limitations, we propose a practical active learning method Uncertainty-guided Tiered Self-training (UGTST), tailored for ASFDA scenarios in medical image segmentation. In contrast to traditional active learning methods, which often require multiple rounds and utilize only annotated active samples, our approach involves just one round of inference by the source model on the target domain and utilizes unlabeled data in adaptation. We proposed a novel entropy-based slice-level uncertainty estimation method termed global aleatoric uncertainty aggregation and incorporated a diversity-aware redundancy filter for the active sample selection. In response to active samples being undervalued, we developed a Tiered Self-training (TST) DA strategy, by obtaining assumed stable sets to cooperate with active sample dominated DA. The contributions of this work can be summarized as follows: (1)We present a novel and efficient ASFDA framework called UGTST for prostate segmentation tasks, aiming to improve target domain generalizability through efficient annotation efforts manageable in clinical practice. (2)A global uncertainty estimation method for active sample selection in medical image segmentation is designed, along with a diversity-aware redundancy filter to achieve stable and efficient active sample selection. (3)We proposed a practical DA strategy TST for ASFDA, ensuring dominant learning of active samples while progressively utilizing pseudo-labels of unlabeled images. Our method has achieved better performance on the prostate segmentation task than existing ASFDA approaches and was comparable to fully supervised learning with 5% annotation costs. § METHOD We consider a scenario where a segmentation model trained on a source domain dataset is deployed to a target domain dataset D_t = {(x_i^t)}_i=1^N_t, where D_t is unlabeled at the beginning. 
The objective of ASFDA is, under a controllable small labeling budget M (M ≪ N_t), to select a labeled subset of samples D_at = {(x_i^t, y_i^t)}_i=1^M for one round, and utilize D_t to adjust the pre-trained source model 𝐌_s to achieve good dense predictions on the target domain. Our proposed UGTST is depicted in Fig.<ref>, for the active sample selection stage, given a labeling budget M, we partition the target domain set D_t into an uncertainty candidate set D_tu and an assumed stable set D_ts based on global aleatoric uncertainty aggregation in entropy map. To ensure the diversity of active samples, we further select the active sample set D_ta from the uncertainty candidate set D_tu through diversity-aware redundancy filtering. Then, a tiered self-training strategy was employed for adaptation. §.§ Active Sample Selection via Uncertainty and Diversity To highlight the most valuable and informative samples, the entropy-based uncertainty estimation method is a common approach in active learning<cit.>. However, the source model's limited generalizability leads to highly confident yet unstable predictions on the target domain, making direct computation of entropy maps unreliable. To address this, we adopt a test-time augmentation approach, combining predictions with perturbations from diverse augmentation<cit.>, to diminish confidence in unstable regions and yield more stable predictions<cit.>. For x_t^i ∈ D_t, we design intensity augmentation ℐ and spatial augmentation 𝒯, with K-times random perturbation, the ensemble segmentation result of x_t^i is: p̂^i = 1/K∑_k=1^K(𝒯_k^-1∘𝐌_S(ℐ_k(𝒯_k ∘ x_t^i))) where ℐ_k, 𝒯_k is k-th random intensity and spatial transformation and 𝒯_k^-1 is the corresponding inverse spatial transformation. And for p̂^i ∈ℝ^C × H × W of x_t^i, the entropy map H(p̂^i) ∈ℝ^H × W is calculated as: H(p̂^i) = -∑_c=1^Cp̂^i(c) log(p̂^i(c)) Global Aleatoric Uncertainty Aggregation. As mentioned earlier, the entropy map H(p̂^i) cannot be directly used for active sample selection. Due to the imbalance between foreground and background, taking an average of pixel-level uncertainty across the image will be biased to the background. To identify the uncertain region, we design an adaptive threshold to exclude this portion from the output, aiming to aggregate pixels to obtain an unbiased global uncertainty estimation. Hence, we introduce a novel slice-wise uncertainty estimation method called Global Aleatoric Uncertainty Aggregation (GAUA) specifically tailored for medical image segmentation tasks. The discrete density distribution h^i[n] ∈H̅(p̂^i)_n=1^ℕ is obtained by partitioning the data into bins of size ℕ= 100, arranged from small to large, we can compute the primary local peak value T_i of x_t^i using the discrete difference method, as the self-adaptive threshold to aggregate pixels with relatively high entropy: T_i = *min{h^i[n] | h^i[n]∈H̅(p̂^i)_n=1^ℕ,|Δ h^i[n]|<δ,Δ^2h^i[n]<0} where Δ h^i[n] is the first-order discrete difference of h^i[n], Δ^2h^i[n] is the second-order one. δ is a small adaptive bias for approximation. Then, we compute the mean on pixels with relatively high entropy as the GAUA uncertainty U_i for x_t^i: U_i=∑_n=1^Nh^i[n] ·𝕀(h^i[n]>T_i)/∑_n=1^N 𝕀 (h^i[n]>T_i) where 𝕀 is the indicator function. Then, we divide D_t into two parts: D_tu = {x_t^i | x_t^i∈ D_t,U_i≥U^N_tu_i}; D_ts = D_t ∖ D_tu where U^N_tu_i is the N_tu-th largest value in U_i corresponding to D_t, the capacity N_tu of D_tu is a hyper-parameter for balancing uncertainty and diversity. 
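For concreteness, a minimal sketch of this uncertainty-estimation stage is given below. It simplifies the test-time augmentation of Eq.<ref> to flips, adopts one possible histogram-based reading of the primary-local-peak threshold in Eq.<ref>, and treats the tolerance δ, the bin count, and all function names as illustrative assumptions; the released code should be consulted for the exact procedure.

```python
# Simplified sketch of the GAUA uncertainty estimate for one target slice x.
# TTA is reduced to flips; the adaptive threshold follows one reading of the equations above.
import numpy as np
import torch

@torch.no_grad()
def ensemble_prediction(model, x):
    """x: (1, C_in, H, W) tensor; returns the TTA-averaged softmax (1, C, H, W)."""
    flips = [[], [-1], [-2], [-2, -1]]               # identity + flips as invertible T_k
    probs = []
    for dims in flips:
        xk = torch.flip(x, dims) if dims else x
        pk = torch.softmax(model(xk), dim=1)
        probs.append(torch.flip(pk, dims) if dims else pk)
    return torch.stack(probs).mean(dim=0)

def entropy_map(p, eps=1e-8):
    """Pixel-wise entropy of class probabilities p with shape (1, C, H, W)."""
    p = p.clamp_min(eps)
    return (-(p * p.log()).sum(dim=1)).squeeze(0).cpu().numpy()      # (H, W)

def gaua_uncertainty(ent, n_bins=100, delta=1e-3):
    """Threshold at the primary local peak of the entropy histogram, then average
    the entropy of the pixels above the threshold (assumed reading of the equations above)."""
    hist, edges = np.histogram(ent.ravel(), bins=n_bins, density=True)
    d1 = np.diff(hist)                                # first discrete difference
    d2 = np.diff(hist, n=2)                           # second discrete difference
    peaks = [n for n in range(1, n_bins - 1)
             if abs(d1[n - 1]) < delta * hist.max() and d2[n - 1] < 0]
    t = edges[min(peaks) + 1] if peaks else float(ent.mean())
    high = ent[ent > t]
    return float(high.mean()) if high.size else float(ent.mean())
```

Slices are then ranked by the returned U_i values, and the N_tu most uncertain ones form the candidate set D_tu as described above.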
Diversity-aware Redundancy Filtering. In the uncertainty candidate set D_tu, neighboring slices often have similarly high uncertainties. Labeling them would inevitably introduce redundancy, leading to wasted annotation. To deal with this, we take the feature representation f̅_x_t^i of slice x_t^i from the encoder of 𝐌_S, and we use K-means++<cit.> to cluster D_tu into M clusters, which M is the annotation budget, and select the samples closest to the cluster centroids: D_ta={min_x^tu_i∈ D_tu||f̅_x^tu_i-C_k||^2;k=1,2,...,M} where C_k is the centroid of the k-th cluster. ||·||^2 is the Euclidean distance. f̅_x^tu_i is the feature representation of x^tu_i. Then, annotators are requested to provide manual annotations for selected samples, leading to an annotated subset D_ta={(x^ta_i,y^ta_i)}_i=1^M. §.§ Tiered Self-training for Adaptation To mitigate the impact of noisy pseudo-labels on active sample learning and make active samples dominant in training, we propose a Tiered Self-training(TST) strategy. We first train a stage-1 model 𝐌_t1 initialized with parameters from 𝐌_S on D_ta∪ D_ts, where D_ta with labeled samples, D_ts with pseudo labels. Then, using the trained 𝐌_t1, we regenerate pseudo-labels for the unlabeled subset of target domain dataset D_t∖D_ta with the same strategy in Eq.<ref>. Subsequently, we train a stage-2 model 𝐌_t2 on D_t, progressively achieving domain adaptation across samples with varying degrees of stability. The average of Dice loss and Cross-Entropy loss is used for self-training. § EXPERIMENT AND RESULTS §.§ Experimental Details Dataset. To demonstrate the effectiveness of our UGTST method, we employ publicly available prostate T2-weighted MRI images from various clinical centers to evaluate cross-center DA. We select 60 MRI samples comprising a total of 1544 slices from the NCI-ISBI 2013 dataset<cit.> as the source domain. Additionally, we choose a total of 512 slices from 12 MRI samples acquired from Beth Israel Deaconess Medical Center (BIDMC) and a total of 288 slices from 12 MRI samples obtained from Haokland University Hospital (HK) as two target domains from the PROMISE 12 dataset<cit.>. In the preprocessing stage, we resized all samples to 384×384 in the axial plane and applied min-max normalization to the volume, following previous studies<cit.>. Data from each site were divided into four folds at the case level for cross-validation. We only open the labels of the training set in the target domain during the active sample selection stage, simulating the annotation in clinical practice with a labeling budget of 5%. Implementation Details. We tackled the challenge of large inter-slice spacing by employing slice-by-slice segmentation with 2D CNNs, followed by stacking the results into a 3D volume. Our approach utilizes the widely adopted classic 2D U-Net segmentation network<cit.>, with its encoder and decoder serving as the feature extractor and prediction head, respectively. Experiments were conducted using PyTorch on an NVIDIA RTX 2080Ti GPU. For the source model, we trained a segmentation network on annotated source data with a batch size of 24 for 400 epochs, using SGD optimization with an initial learning rate of 0.01 and polynomial decay with a power of 0.9. During the adaptation phase, training was conducted for 100 epochs with a batch size of 24, using the same SGD with an initial learning rate of 0.001. 
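Complementing these training details, the sample-selection side of the pipeline is equally compact. The sketch below illustrates the diversity-aware redundancy filter of Eq.<ref>, using scikit-learn's KMeans (which defaults to k-means++ initialization) as a stand-in for the clustering step; the array names and shapes are assumptions for illustration, not our exact implementation.

```python
# Sketch of the diversity-aware redundancy filter: cluster the encoder features of the
# candidate slices in D_tu into M groups and keep the slice closest to each centroid.
# `features` is an (N_tu, F) array; `candidate_ids` maps rows back to slice indices.
import numpy as np
from sklearn.cluster import KMeans

def diversity_filter(features, candidate_ids, budget_m, seed=0):
    km = KMeans(n_clusters=budget_m, init="k-means++", n_init=10, random_state=seed)
    labels = km.fit_predict(features)
    selected = []
    for k in range(budget_m):
        members = np.where(labels == k)[0]
        dists = np.linalg.norm(features[members] - km.cluster_centers_[k], axis=1)
        selected.append(candidate_ids[members[dists.argmin()]])
    return selected            # the active set D_ta, to be manually annotated
```

The clustering operates on the encoder features of the source model for the slices in D_tu, so the returned identifiers correspond directly to the active set D_ta that is sent for manual annotation.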
We used data augmentation during training, including random spatial transformations (flips and rotations) and intensity transformations (gamma correction, contrast enhancement, Gaussian blur and noise). The Dice Similarity Coefficient (DSC) and 95% Hausdorff Distance (HD_95), computed on 3D volumes, were used as quantitative evaluation metrics.
§.§ Comparison with State-of-the-art Methods.
We first investigated the performance of three state-of-the-art SFDA methods: 1) DPL<cit.>, 2) FSM<cit.>, and 3) UPL<cit.>. Next, our method was compared with five other sample selection methods under the same annotation budget: 1) Random: randomly select the samples, 2) CTC<cit.>: select the samples closest to the cluster centers, 3) LC<cit.>: select the samples with the lowest prediction confidence, 4) Coreset<cit.>: select samples by solving a set-cover problem, and 5) SALAD<cit.>: an ASFDA method employing an active learning strategy and a guided attention transfer network. The following settings were also compared: 1) Source only: the pre-trained source model, serving as the lower bound; 2) Target only: a model trained solely with annotated images from the target domain; 3) Fine-tune: fine-tuning the source model with full annotations of the target dataset, serving as the upper bound. For a fair comparison, all methods used the same backbone architecture<cit.>, with post-processing that retains the largest connected component in a 3D volume. The quantitative results based on 4-fold cross-validation in the two target domains are shown in Table <ref>. “Source only” and “Target only” achieved average DSC values of 45.08% and 80.59%, respectively, in the BIDMC domain, and 42.00% and 81.21% in the HK domain. The SFDA methods improve upon “Source only”: FSM<cit.> and UPL<cit.> were the best-performing SFDA methods, with average DSC values of 72.17% and 73.59%, respectively. However, a considerable gap from the upper bound remains, underscoring the necessity of ASFDA. In the ASFDA setting with 5% labeled data, Random selection achieved average DSC values of 65.14% and 60.97%, while the corresponding values for the best existing method were 73.68% and 72.12%, respectively. Our method achieved DSC values of 83.46% and 81.17%, a significant improvement that is comparable to the “Fine-tune” upper bound. Fig.<ref> shows qualitative results of the different methods in both target domains. In the central region, where the prostate boundary is prominent, most methods show considerable improvement over “Source only”. However, owing to the effective integration of uncertainty and diversity, only our approach achieves high-accuracy segmentation in areas where the boundary is less distinct, such as the apex and base of the prostate.
§.§ Ablation Study.
To further investigate each component's contribution, we conducted an ablation and sensitivity study on the first fold. The capacity N_tu of the uncertainty candidate set used during active sample selection is a hyper-parameter of our method. We set it to M, 2M, 4M, and 8M, where M = 5%, to investigate how it affects performance, as shown in Fig.<ref>(a). The results from one fold of cross-validation in both domains show that 4M is the best choice to trade off performance and computational overhead.
Further, in Fig.<ref>(b), we validate the effectiveness of our GAUA against other uncertainty estimation methods, including random selection, MC-dropout<cit.>, Least Confidence (LC)<cit.> and highest entropy (Entropy)<cit.>, which follow the typical practice of averaging the uncertainty across all pixels to obtain an image-level uncertainty. Our GAUA achieved the highest performance, and all methods benefited from TST. To demonstrate the necessity of utilizing the source model in the adaptation stage, we also applied several semi-supervised learning (SSL) methods<cit.> in the HK domain, using the 5% annotated data selected by our active sample selection technique, under different training epochs. The results are presented in Fig.<ref>(c). The performance of both stages of UGTST surpasses the existing SSL methods, demonstrating the superiority and efficiency of DA in ASFDA. Next, we further validated the contribution of each component of our method in the BIDMC domain. The baseline uses the source model's predictions as pseudo-labels for adaptation. “Augmentation” means using the ensemble prediction as the pseudo-label for self-training without annotation. When TST is not used, we directly merge the labeled active samples with the pseudo-labeled unlabeled samples for self-training. The results in Table <ref> show marked performance improvements from each component of UGTST, further confirming the effectiveness of our approach.
§ CONCLUSION
This work presented an ASFDA framework for accurate prostate segmentation. In the absence of source domain data, active samples are selected from only one round of predictions by a pre-trained source model on the target domain. We presented a novel uncertainty-based active sample selection method for medical image segmentation, which uses entropy-based primary local peak filtering to aggregate global uncertainty together with a diversity-aware redundancy filter, thus selecting both informative and representative samples for annotation. We then designed the tiered self-training DA strategy, which stabilizes active learning while progressively leveraging pseudo-labels. Our experimental results show that our method achieves performance comparable to fully supervised training with an annotation budget of 5%, which is manageable in clinical practice.
§.§.§ Acknowledgments. This work was supported by the National Natural Science Foundation of China under grant 62271115.
§.§.§ Disclosure of Interests. The authors have no competing interests to declare that are relevant to the content of this article.
http://arxiv.org/abs/2407.02080v1
20240702091710
Flow and clogging behavior of a mixture of particles in a silo
[ "Sukhada C. Bhure", "Pankaj Doshi", "Ashish V. Orpe" ]
cond-mat.soft
[ "cond-mat.soft" ]
APS/123-QED CSIR-National Chemical Laboratory, Pune 411008 India Academy of Scientific and Innovative Research (AcSIR), Ghaziabad 201002 India pankaj.doshi@pfizer.com Pfizer Research and Development, Pfizer Products India Private Limited, Mumbai 400051, India av.orpe@ncl.res.in CSIR-National Chemical Laboratory, Pune 411008 India Academy of Scientific and Innovative Research (AcSIR), Ghaziabad 201002 India § ABSTRACT We investigated the clogging behavior observed during the flow of aspherical particles from a silo in the presence of spherical particles of different sizes and proportions using flow visualization experiments and discrete element method (DEM) simulations. The size of the avalanche, essentially the tendency of clogging, exhibits non-monotonic dependence on the spherical particle volume fraction. For small enough content of spherical particles, the clogging tendency intensifies, whereas it reduces rapidly for high enough spherical particle fractions, with a minimum in between. The non-monotonic behavior is observed to persist over for different spherical particle sizes. The overall behavior is shown to arise due to competing effects between the localized total particle fraction influencing avalanche strength and mean size of the particles exiting the silo, influencing the probability of arch formation. Flow and clogging behavior of a mixture of particles in a silo Ashish V. Orpe July 8, 2024 ================================================================ § INTRODUCTION The phenomena of clogging during the outflow of dry granular material from a silo, while interesting in itself, has also resulted in throwing up another interesting phenomena of unclogging of the clogged silo. While the former has been studied over several years <cit.>, the latter has garnered attention in recent times. It has been shown that the silo unclogging may be forced through air jet impinging or silo vibration <cit.>, by having multiple exit orifices <cit.>, or by placing inserts at suitable locations inside the silo <cit.>. These forcings can either break the stable arch or reduce the probability of arch formation in the first place, which is primarily responsible for clogging. Interestingly, the clogging-unclogging phenomena have also been extended to natural systems like movement of pedestrians or animals through a narrow exit <cit.> or artificial systems like the flow of colloidal particles through an orifice <cit.>. Recently, it has been shown that the flow in a (2-dimensional) silo can be enhanced and clogging tendency be reduced due to presence of other (secondary) particles, which are smaller than the bulk particles, but are present at volume fractions as high as 0.2 - 0.4 <cit.>. The silo was operated in the vibration mode, presumably to well mix two different sized particles. The probability of clog formation was found to be reduced due to the presence of secondary particles which led to the formation of an arch easily breakable due to vibration <cit.>. The presence of these particles also showed an increase in the overall flow rate with an optimal dependence on the particle size <cit.>. Our interest lies in understanding the effect of such secondary, smaller spherical particles on the clogging/flow behavior in the silo, but in small amounts, akin to the presence of trace impurities in the flow. In practice, the impurities are bound to be present in the powder material, the effect of which on the flow or clogging is invaluable. 
Furthermore, we intend to study the flow and clogging behavior of aspherical particles, encountered mostly in practical situations. For simplicity, we consider the secondary particles to be spherical in shape. Apart from practical considerations, the study of the flow of non-spherical particles is fundamentally interesting in its own way. They are known to exhibit clogging characteristics differing from those observed for spherical particles. For instance, the breakdown of the exponential trail in avalanche distributions <cit.>, higher probability to form clogs due to possibility of multiple contact points and approach to spherical particle behavior with increased vertices in a polygon <cit.> are some of the peculiar observations related to non-spherical particle shape. In this work, we focus on understanding the clogging and flow behavior of cylinder-shaped particles in the presence of small amounts of spherical particles in a 3-dimensional silo. We carry out the required study using flow visualization experiments and discrete element method (DEM) simulations. In Sec.II, we describe the experimental system and simulation details followed by results focusing on the avalanche size behavior and relevant characteristics of the system. Toward the end, we provide quantitative reasoning to explain the observed phenomena. § METHODOLOGY §.§ Experimental details Experiments are performed in a silo with dimensions as shown in Fig. <ref>. The side and bottom walls of the silo are made out of acrylic plates glued to each other, while the top is kept open to pour the granular material. The material outflows from an exit slit of fixed width. Two types of particles are used in the experiment. The bulk particles comprise of cylinders made from poly-methyl methacrylate (PMMA), of density 1.2g/cc, with an elliptical cross-section. The length of each cylinder is 3.1mm, while the major and minor diameters are of length 3.2mm and 2.2mm, respectively. The diameter (d_c) of an equivalent sphere volume is, then, calculated as 3.2mm. Glass beads, of density 2.5g/cc and three different diameters (d_s), viz., 0.7, 1 and 2mm are used as secondary spherical particles resulting in particle size ratios (r = d_c/d_s) of 4.6, 3.2 and 1.6 respectively. Images of these particles are shown in Figs. <ref>(b)-<ref>(e). For each experiment, the silo was filled with a mixture of bulk and one of the spherical particles up to a height of 125 d_c, while keeping the exit slit closed. The mixture was prepared by manual mixing of bulk cylindrical particles with predefined spherical particle volume fraction (ϕ) varied in the range 0-0.1. The mixture was, then, poured in the silo by employing distributed filling method. The outflow from the silo was initiated by opening the exit orifice, and the flowing material was collected directly on a weighing scale. Given the size of the orifice with respect to the bulk particle size, the orifice clogged after flowing for some time. The total mass, comprising both the particles, collected on the weighing scale till the occurrence of clogging was termed as the avalanche size (S). After a wait time of 5s, the flow was reinitiated by piercing the exit arch with a pointed object to trigger another avalanche. The procedure was repeated till the fill height reduced to 50d_c, following which the remainder of the material was emptied out, and the silo refilled back to a height of 125d_c to restart the experiments. 
For every spherical particle concentration and size employed, the silo was filled about five times resulting in at least 500 independent avalanche events. The above procedure was repeated for different volume fractions and sizes of spherical beads. §.§ DEM simulations The discrete element method (DEM) simulations were carried out to investigate the clogging of aspherical particles in the presence of secondary spherical particles. The simulations were carried out using an open source program, LIGGGHTS (LAMMPS Improved for General, Granular and Granular Heat Transfer Simulations), for different sphere volume fractions. The non-spherical particle (cylinder with elliptical cross section) was created using the in-built “multisphere” routine of the LIGGGHTS software. The approach involves clumping together predefined number of spheres of specific size. The resulting non-sphericity of the particle is dependent on the number of clumped spheres, their location with respect to each other and the degree of overlap. Over here, we considered 50 spheres to generate a cylinder with elliptical cross section, while maintaining the relative magnitudes of the length, minor diameter, and major diameter the same as for the experimental particle. As seen from Figs. <ref>(b) and <ref>(d), the shape of the particle generated in simulations seems to be reasonably close to the experimental particle [see Fig. <ref>(b)], barring the sharp corners. A further increase in the number of spheres allowed for replicating the corners in a better manner, but required much longer simulation time. However, this did not alter the flow behavior significantly and, hence, was not considered. The secondary spherical particle was simply modeled as a sphere of prescribed size. The silo of smaller size, viz. height 65d_c, width 30d_c, and depth 10d_c, was used in simulation. Reduced dimensions of the silo were used to reduce the simulation time while ensuring the absence of wall effects. Moreover, the number and the size ratio of aspherical particles to spherical particles was kept the same as in experiments for a given sphere volume fraction. Figures <ref>(e) and <ref>(f) show the front view of the silo, respectively for experiments and simulations, comprising a mixture of particles (ϕ = 0.05) for a particle size ratio r = 3.2. The simulation employs Hertzian contact model for calculation of force between two contacting particles. The Hertzian contact model was used in this work due to its ability to better capture the realistic behavior of granular materials, given the dependence of the interaction force on the overlapping area instead of overlapping distance considered in Hookean contact models <cit.>. The Hertzian model, thus, results in a more realistic force evolution and has been used previously quite frequently <cit.>. The contact force comprises of normal (F_n) and tangential (F_t) components, each of which includes two terms given as F_n = (k_nδn - γ_nv_n/2), F_t = - (k_tΔs_t + γ_tv_t/2), where n is the unit vector along the line connecting centers of two particles, v_t and v_n are, respectively, the tangential and normal components of particle velocities. Both, the normal elastic constant (k_n) and tangential elastic constant (k_t) are chosen to be of the order of 10^7 mg/d_α. The values of the normal damping term (γ_n) and tangential damping term (γ_t) are chosen to be of the order of 10^2√(g/d_α). 
Here, d_α represents either d_c for cylindrical particles or d_s for spherical particles, and g represents gravity acting in downward direction. The value Δs_t is the tangential displacement between two particles to satisfy the Coulomb yield criterion given by F_t = μ_sF_n, where μ_s is the coefficient of static friction coefficient. The density ratio between cylindrical bulk particles and spherical particles was maintained the same as in experiments. The integration time step used in the simulation is 10^-4. The silo was filled with mixture up to a height of 65d_c, and the flow in simulations was initiated by opening the exit slit. The total mass of particles flowing out of the silo before the orifice clogged was termed as the avalanche size (S). The flow was reinitiated by removing a few particles in the arch. This procedure was repeated till the fill height reduced to 40d_c. This resulted in about 70 independent avalanche events. However, unlike that in experiments, the silo was not refilled to record more number of avalanche events as that became computationally prohibitive. However, as shown later, about 70 recorded avalanche events per case seems reasonably large enough to capture the observed experimental phenomena. §.§ Parameter calibration The calibration of the simulation parameters to match the experiments can be an uphill task given the range of parameters used and two types of particles and materials employed. For simplicity, except friction coefficients, all the remaining contact model parameters are of the same order of magnitude, typically as used previously <cit.>, but for glass beads <cit.> which has a modulus about an order of magnitude greater than PMMA. Note that these values of the contact model parameters are substantially lower than those relevant to real glass and are chosen so as to reduce the overall computational effort <cit.>. In addition to the coefficient of static friction (μ_s), we have also employed coefficient of rolling friction (μ_r) between the particles. The latter represents the ease with which the particles roll past one another and will be determined by the asphericity of the particles in contact. Typically, for the sphere-sphere contact, the value can be expected to be quite low, while it can be high for the cylinder-cylinder contact. The values of both the friction coefficients were adjusted so as to match the value of static angle of repose measured from simulations to that measured from experiments. The material, either cylinders or spheres, were slowly poured in a rectangular cell to form a static heap. The length, height, and depth of the experimental cell was 150d_c, 65d_c, and 18d_c, respectively, while that of simulation cell was 30d_c, 15d_c, and 10d_c, respectively. The cell dimensions were sufficiently large to prevent any kind of end effects. In experiments, the static image of the heap was captured using a digital camera positioned sideways and orthogonal to the sidewall of the rectangular cell [see Figs. <ref>(a) and <ref>(c)]. In simulations, the angle was measured from the final static position of the particles exported as an image [see Figs. <ref>(b) and <ref>(d)]. In each image, a central, nearly flat free surface region of the heap (about 15d) was analyzed to obtain the angle of repose. Every experimental measurement was repeated about six times to ensure consistency. 
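As an illustration, the repose angle can be extracted from a simulated heap along the lines sketched below, assuming particle centre coordinates exported from LIGGGHTS and projected onto the viewing plane; the column binning, the choice of the fitting window, and the function names are assumptions made for illustration rather than the exact analysis employed here.

```python
# Sketch: estimate the static repose angle from particle centres (x, z) in the viewing
# plane.  The top particle in each vertical column defines the free surface; a straight
# line is fitted over a central window of roughly 15 particle diameters (assumption).
import numpy as np

def angle_of_repose(x, z, d, window=15.0, bins=60):
    edges = np.linspace(x.min(), x.max(), bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    surface = np.full(bins, np.nan)
    for k, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        col = z[(x >= lo) & (x < hi)]
        if col.size:
            surface[k] = col.max()                   # free-surface height in this column
    mid = 0.5 * (x.min() + x.max())                  # assumed centre of the analyzed region
    sel = (np.abs(centers - mid) < 0.5 * window * d) & ~np.isnan(surface)
    slope = np.polyfit(centers[sel], surface[sel], 1)[0]
    return np.degrees(np.arctan(abs(slope)))
```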
The values of friction coefficients reported in Table <ref>, averaged over all spherical particle sizes, correspond to the scenario wherein the angle measured for an experimentally prepared pile varies within half a degree from that measured in a pile from simulations. This close agreement suggests that the finalized simulation parameters seem reasonable enough to qualitatively reproduce experimental observations and were used in carrying out simulations of particles flowing out of silo. § RESULTS & DISCUSSION In the following, we first provide a qualitative understanding from the observations. This is then followed by a quantitative discussion of the clogging behavior in terms of avalanche size variations as observed in experiments and compared with those observed in simulations. Toward the end, we discuss about certain specific measurements from the simulation data, which have been used to explain the experimental observations. Figure <ref> shows images acquired in experiments during an avalanche at different times and for three different volume fractions (ϕ) of spherical particles of diameter d_s = 1 mm, i.e., r = 3.2. The images show particle configurations at different stages of an avalanche in the vicinity of the exit orifice near the front wall of the silo. The video representation of these images for different sphere volume fractions and sizes is available as a supplementary material. In each case, as seen from the respective videos, the overall motion appears intermittent, eventually leading to an arch formation and clogging. The number of small spheres increase with increasing sphere concentration, though the distribution is not uniform. The spheres, appearing as distinct and small clusters, seem to fill in the voids created by the packing of cylinders. The number of clusters increase with increasing packing fraction, though the size of the cluster remains nearly the same. This is suggestive of the upper limit for the void space available for filling due to smaller spherical particles. The formation of the arch and its shape does not seem to depend on the spherical particle fraction, underlining the random nature of the event in all cases. Near identical, qualitative behavior is also observed for other spherical particle fractions and sizes. The simulation counterpart for the earlier discussion on experimental observations is shown in fig. <ref>. The images are acquired from three different view positions to understand the 3-dimensional nature of the flow and clogging. Unlike in fig. <ref>, the images in fig. <ref> are only shown for the final clogged state. The flow of the avalanches leading to clogging, however, can be seen from the videos available as the supplementary material for different particle fractions and sizes. The main qualitative features seen in experiments, i.e., increased number of smaller spheres with increased fractions, formation of clusters are observed in simulations too. There is, however, a hint of the occurrence of segregation near the bottom, not seen clearly in experiments. The corresponding view from the sidewall (second panel in the figure) shows similar configuration of spherical particles and cylinders showing that the behavior is not a localized, but rather a bulk phenomenon. The perspective view in third panel in Fig. <ref> shows the complicated nature of the arch across the depth of the silo. 
The occurrence of the flow even when the particles near the front wall are stationary is a direct consequence of this 3-dimensional nature of the arch formation. While the images in figs. <ref> and <ref> and corresponding videos show the behavior for one spherical particle size, a similar qualitative behavior is also observed for other spherical particle sizes. From all these observations, it can be anticipated that the presence of spherical particles in the voids, individually or in clusters, may influence the flowability of the system and arch forming tendency. However, these can be quantitatively ascertained by measurement of the mean and distribution of the avalanche sizes and their dependence on spherical particle size and fractions as discussed next. The avalanche size represents the amount of material flowing out of silo till the orifice is clogged. The avalanche size (S) in experiments is the total mass of cylindrical and spherical particles collected during the flow before the occurrence of clog. The average value (⟨ S ⟩) is obtained over 500 independent flow (or clogging) events. The variation of normalized average avalanche size (⟨ S ⟩/⟨ S_0⟩) with the spherical particle concentration (ϕ) is shown in fig. <ref>(a). Here, S_0 represents the avalanche size for the base case, i.e., in the absence of spherical particles (ϕ = 0.0). Several interesting features are evident from this figure, which we dwell upon next. The magnitude of the avalanche size is governed by the ability of the flowing particles to form a stable arch. The increased avalanche size represents longer flow duration before clogging takes place, indicating a lesser tendency to form a stable arch and vice versa for decreased avalanche sizes. In the limit of infinite avalanche size (when flow never stops), the tendency of arch formation will be negligible, and, in the limit of no flow or immediate clogging, the tendency will be quite high. It may be intuitively expected that the presence of small spherical particles may lubricate the flow of cylindrical particles and/or may reduce direct contacts between cylinders thereby reducing the tendency of arch formation, resulting in increased avalanche size. However, as seen from Fig. <ref>(a), the observed behavior is exactly the opposite. The presence of small spherical particles actually reduces the avalanche size compared to the base case, essentially aiding the clogging behavior. The decrease in the avalanche size is, however, observed over a limited range of sphere volume fraction, leading to a minimum in the avalanche size at an intermediate sphere volume fraction, which is dependent on the size of spherical particle. Any further addition of spherical particles increases the avalanche size, i.e., inhibits clogging tendency. For higher values of ϕ, the avalanche size seems to increase rapidly, similar to that observed previously <cit.> This can be expected as, with increasing volume fraction of spherical particles in the system, the proportion of these particles within the total number of particles exiting the silo will also increase. Given much larger orifice size relative to the size of spherical particles, they cannot be expected to form an arch. Moreover, their presence in large numbers will also reduce the mean size of particles exiting the silo, thereby reducing the tendency of forming an arch, the result being a larger avalanche size. 
Indeed, the avalanche size will diverge at high enough spherical particle concentrations approaching ϕ = 1.0, wherein there will be primarily spherical particles in the outflow, thereby precluding arch formation and hence clog formation. While the overall (non-monotonic) behavior remains the same for different sizes of spherical particles, certain deviations exist, which are not quite systematic and hence difficult to understand at this time. For instance, the minimum avalanche size spreads over a range of spherical particle fractions for the smallest spherical particle size (r = 4.6) [see Fig. <ref>(a)], while the spread is limited to a narrow range of fractions for the other two size ratios. The equivalent data acquired from the DEM simulations are shown in Fig. <ref>(b) for the same three particle size ratios as in the experiments. While the data for r = 3.2 and r = 1.6 are nearly the same as those obtained in experiments, qualitative differences compared with experiments are seen for the case of the smallest sphere (r = 4.6). The more sustained minimum observed in experiments is not seen in simulations (see Fig. S1 in the supplementary material for a better comparative representation). This disparity in the observations for different spherical particle sizes, in experiments as well as simulations, can possibly be attributed to the lack of exact replication of (i) the shape of the cylindrical particle in simulations, particularly the sharp edges, and (ii) the actual inter-particle interactions. This inadequacy seems to have a different effect with respect to spherical particle concentration and size, the origins of which are not clear at the moment. More importantly, however, the non-monotonic variation of the normalized avalanche size with spherical particle volume fraction is exhibited for all three sphere sizes, suggesting that the physics behind the experimental observations is captured quite well in the simulations. In that case, the more detailed, three-dimensional simulation data can then be used to explain the observations, as discussed later. The distributions of avalanches exhibit exponential behavior for all the cases, as shown in Fig. <ref>. The exponential decay reflects the random nature of discrete avalanche events, as has been shown in previous studies <cit.>. This suggests that the presence of spheres does not influence the inherent random nature of the clogging phenomenon. However, the length scale of the exponential decay is different for experiments and simulations. Near-similar qualitative behavior is obtained for the remaining sphere volume fractions as well as for the different spherical particle sizes. As discussed earlier, the increase or decrease in the avalanche size will be governed by the probability of clogging occurrence, which in turn will depend on the probability of arch formation subject to the local flow conditions prevailing near the silo exit. In a recent work <cit.>, it was shown that clogging occurrences reduce monotonically with increased translational kinetic energy in the system, which was increased by increasing the driving (gravity) force and the orifice width. The authors assumed that for small enough translational kinetic energy, i.e., slow enough flow, the formed arch is able to resist its breakage till the flow eventually stops. The exact reverse happens for fast flows, wherein the arch is unable to prevent its breakage, thereby reducing the chances of clogging. We borrow the same argument here to explain our observations. 
However, the possible drivers for altering the kinetic energy here are the spherical particle concentration and size ratio, while the gravitational force and orifice width are maintained constant. Under these circumstances, the only possible route to an increase or decrease in the kinetic energy is a variation of the packing fraction in the system. We have calculated the packing fraction during the flow in a box of length 10d_c, width 10d_c, and depth 10d_c, located about 3.5d_c above the exit orifice [shown in Fig. <ref>(f)]. The location of the box represents the region closest to the orifice, which can be expected to exert maximum influence, while also being away from the actual arch formation location, which typically lies in the region up to 3d_c above the orifice. The volume fraction in the box was obtained as an average over all the particles across all avalanches and also over all times within each avalanche. It is to be noted that over the entire avalanche duration, the volume fraction measured within the region varied by up to 0.5% of the initial value at the start of the avalanche. Thus, the average value reflects the packing state during the flow for a specified spherical particle concentration and size ratio, which, we believe, should influence the average kinetic energy of the system. We consider both the translational component of the kinetic energy (k_te), which accounts for the flow speed of all particles, and the rotational component (k_re), which predominantly represents the ability of the cylindrical particles to orient themselves appropriately. The latter quantity can be expected to correlate with the average angle of orientation (θ) of the cylindrical particles, calculated with respect to the vertical (or flow) direction. Figure <ref> shows the variation of the average total volume fraction (⟨ϕ_t⟩), the normalized average translational kinetic energy (⟨ k_te⟩ / ⟨ k_te0⟩), the normalized average rotational kinetic energy (⟨ k_re⟩ / ⟨ k_re0⟩), and the average angle of orientation of cylinders with the vertical (⟨θ⟩) with spherical particle volume fraction (ϕ) in the region of interest. Here, k_te0 and k_re0 correspond to the avalanche for the base case, i.e., in the absence of spherical particles (ϕ = 0.0). Overall, the total volume fraction (including spheres as well as cylinders) will vary between the lower limit [⟨ϕ_t⟩ = 0.509, shown as a magenta line in Fig. <ref>(a)] in the absence of spherical particles and the upper limit [⟨ϕ_t⟩ = 0.59, shown as an orange line in Fig. <ref>(a)] in the absence of cylindrical particles (i.e., only spherical particles). This is not surprising, since spheres alone are expected to pack more efficiently than cylinders alone. The average total volume fraction [shown as green squares in Fig. <ref>(a)] increases quickly at lower values of the spherical particle concentration, followed by a gradual increase at higher values. The initial rapid increase in the average total volume fraction (⟨ϕ_t⟩) can be envisioned as the spherical particles filling the available voids between cylinders, thereby improving the packed state. This progressive increase in the value of the volume fraction is expected to gradually decrease the flow velocity, leading to the decrease in the average translational kinetic energy (⟨ k_te⟩ / ⟨ k_te0⟩), as indeed seen in Fig. <ref>(a). 
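Before turning to the rotational component discussed next, a minimal sketch of how the box-averaged quantities defined above—the total volume fraction ⟨ϕ_t⟩, the kinetic-energy components k_te and k_re, and the cylinder orientation angle ⟨θ⟩—might be evaluated from a single DEM snapshot is given below. The array names, the centre-in-box criterion, and the body-frame treatment of the angular velocities are simplifying assumptions for illustration, not the actual analysis code used in this work.

```python
# Minimal per-snapshot measurement sketch.  Assumed inputs: particle centres
# `x` (N,3), velocities `v` (N,3), angular velocities `w` (N,3) in each
# particle's principal body frame, masses `m` (N,), per-particle volumes
# `vol` (N,), principal moments of inertia `I` (N,3), unit axis vectors of
# the particles `u` (N,3), and a boolean mask `is_cyl` marking cylinders.
import numpy as np

def in_box(x, lo, hi):
    """Particles whose centres lie inside the axis-aligned sampling box."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    return np.all((x >= lo) & (x <= hi), axis=1)

def box_measures(x, v, w, m, vol, I, u, is_cyl, lo, hi, vertical=(0.0, 0.0, 1.0)):
    sel = in_box(x, lo, hi)
    box_vol = np.prod(np.asarray(hi, float) - np.asarray(lo, float))
    phi_t = vol[sel].sum() / box_vol                            # total volume fraction
    k_te = 0.5 * np.sum(m[sel] * np.sum(v[sel] ** 2, axis=1))   # translational KE
    # rotational KE, approximating w as expressed in the principal body frame
    k_re = 0.5 * np.sum(np.sum(I[sel] * w[sel] ** 2, axis=1))
    # orientation of cylinder axes with respect to the vertical (flow) direction
    z = np.asarray(vertical, float)
    z /= np.linalg.norm(z)
    c = sel & is_cyl
    cos_t = np.clip(np.abs(u[c] @ z), 0.0, 1.0)
    theta = np.degrees(np.arccos(cos_t))
    return phi_t, k_te, k_re, float(theta.mean()) if theta.size else np.nan
```

Averaging such per-snapshot values over all times within an avalanche, and then over all avalanches, yields the box-averaged curves discussed here.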
Second, the increased volume fraction will impede the ability of the cylindrical particles to rotate and align themselves with the flow direction, thereby reducing the average rotational kinetic energy (⟨ k_re⟩ / ⟨ k_re0⟩) as seen in fig. <ref>(b). The direct consequence seems to be the increase in the average orientation angle (⟨θ⟩) with respect to the vertical (flow) direction [see Fig. <ref>(b)]. The higher the orientation angle of cylinders with respect to vertical, higher would be the resistance to flow, while the minimum resistance can be expected when cylinders align parallel to the flow direction. The combined effect of these three entities (k_te, k_re, and θ) is to make the arch increasingly resistant to the flow, thereby leading to more frequent clogging and, hence, the reduction in avalanche size, in agreement with the previously published work <cit.>. This is observed in Fig. <ref>(b), wherein the decrease in the avalanche size (shown as green curve) coincides with the decrease in both the components of kinetic energy [violet curve in fig. <ref>(a) and blue curve in fig. <ref>(b)] and increase in average orientation angle with respect to vertical [magenta curve in Fig. <ref>(b)]. The minimum avalanche size is obtained for a spherical particle fraction corresponding to minimum in both components of kinetic energy, maximum in the orientation angle as well as the transition between rapid and slow increase in total volume fraction. This transition point represents the changeover from a state of higher clogging occurrences to a state of lower clogging occurrences. Following the transition point, the total volume fraction increases gradually with increase in spherical particle fraction. Given that the total number of voids available are limited and mostly filled up, further increase in ϕ simply adds up the number of spherical particles leading to a gradual change in total volume fraction. In view of the argument in the preceding paragraph, this should lead to further decrease in the value of kinetic energy. On the contrary, both the components of kinetic energy are seen to increase continuously. As already discussed, the relative proportion of small sized spherical particles increases in the material flowing out of the orifice, leading to a progressive decrease in the mean particle diameter (number average of cylindrical and spherical particles) with increasing value of ϕ. The ratio of the orifice width to this mean particle diameter increases progressively, thereby reducing the probability of arch formation, leading to reduced clogging or increased avalanche size and consequently faster flow, and hence larger translational kinetic energy. The faster flow, perhaps, enables the cylinders to rotate more easily and orient themselves with the flow direction, thereby reducing the angle with the vertical and increase in the rotational kinetic energy. The observed non-monotonic dependence of avalanche size on spherical particle fraction is, then, due to competing effects between increased packing influencing the avalanche strength and reduced probability of arch formation with decreased mean particle size in the outflow zone. Nearly similar qualitative behavior is also observed for other two size ratios (not shown). § CONCLUSIONS The flow of cylindrical particles through a 3-dimensional silo is investigated in the presence of spherical particles present in different proportion and of different sizes. 
Flow visualization experiments and discrete element method (DEM) simulations are employed for this study. The clogging behavior is studied for an exit orifice (or slit) of fixed size and is measured in terms of the size of an avalanche emanating from the silo. The presence of spherical particles leads to a non-monotonic variation of the avalanche size. For small enough spherical particle fractions, the avalanche size decreases, i.e., the clogging tendency increases, which is somewhat non-intuitive in nature and in contrast to previous observations <cit.>. However, for large enough spherical particle fractions the avalanche size increases rapidly, i.e., the clogging tendency decreases, in agreement with previous observations <cit.>. Similar qualitative behavior is observed for all the spherical particle sizes used, though with certain quantitative differences arising out of the size differences. The non-monotonic behavior of the clogging tendency is attributed to two effects arising from the addition of spherical particles, viz., an increase in the total particle fraction and a reduced mean particle size exiting the orifice. For small enough spherical particle fractions, the former effect dominates, leading to reduced kinetic energy and increasingly resistive arch formation. At larger fractions, the latter effect dominates, leading to faster flows, increased kinetic energy, and a reduced clogging tendency. It is quite interesting that such a small presence of spheres, which may typically be neglected, can lead to unexpected clogging. Knowledge of the existence of such behavior would be of substantial interest to several industries handling powders in various applications. Fundamentally, the presence of such behavior can spur detailed modeling to understand the flow of bi-disperse granular material through the silo, something which is rarely studied. It would be even more interesting to study the carryover of this phenomenon to particles of different shapes and to cohesive grains encountered in practice. A.V.O. gratefully acknowledges the financial support from the Science and Engineering Research Board, India (Grant No. CRG/2019/000423). S.C.B. acknowledges the Council of Scientific and Industrial Research (CSIR), India, for the CSIR-GATE fellowship. The support and the resources provided by the “PARAM Brahma Facility” under the National Supercomputing Mission, Government of India, at the Indian Institute of Science Education and Research (IISER), Pune, are gratefully acknowledged. The authors also gratefully acknowledge the computational resources provided by the “Einstein cluster facility” at CSIR - National Chemical Laboratory, Pune.
http://arxiv.org/abs/2407.03131v1
20240703141300
MVGT: A Multi-view Graph Transformer Based on Spatial Relations for EEG Emotion Recognition
[ "Yanjie Cui", "Xiaohong Liu", "Jing Liang", "Yamin Fu" ]
cs.NE
[ "cs.NE", "cs.AI", "eess.SP" ]
Beijing University of Posts and Telecommunications, School of Computer Science Beijing, 100876, China {yanjiecui, xiaohongliu, liangjing18, fuyamin}@bupt.edu.cn MVGT: A Multi-view Graph Transformer Based on Spatial Relations for EEG Emotion Recognition Yanjie Cui, Xiaohong Liu^∗*Corresponding author, Jing Liang, Yamin Fu Received XXX; accepted YYY =========================================================================================== § ABSTRACT Electroencephalography (EEG), a medical imaging technique that captures scalp electrical activity of brain structures via electrodes, has been widely used in affective computing. The spatial domain of EEG is rich in affective information. However, few of the existing studies have simultaneously analyzed EEG signals from multiple perspectives of geometric and anatomical structures in spatial domain. In this paper, we propose a multi-view Graph Transformer (MVGT) based on spatial relations, which integrates information from the temporal, frequency and spatial domains, including geometric and anatomical structures, so as to enhance the expressive power of the model comprehensively. We incorporate the spatial information of EEG channels into the model as encoding, thereby improving its ability to perceive the spatial structure of the channels. Meanwhile, experimental results based on publicly available datasets demonstrate that our proposed model outperforms state-of-the-art methods in recent years. In addition, the results also show that the MVGT could extract information from multiple domains and capture inter-channel relationships in EEG emotion recognition tasks effectively. EEG, emotion recognition, graph transformer, structure encoding § INTRODUCTION Affective computing is commonly employed for the analysis of emotional states through Human-Computer Interaction (HCI) systems, which collect multimodal data from subjects, including voice signal, self-report, body gesture and physiological signals. Compared to other modalities, physiological signals have certain advantages. These signals are directly captured from the subjects' mental states, thus prevent subjects from disguising or hiding. The physiological signals commonly used to measure emotions are electroencephalography (EEG), electrocardiography (ECG), electromyography (EMG), and galvanic skin response (GSR), etc., among which EEG is often utilized for analyzing cognitive functions of human brain. Electrical signals from brain neurons are collected using the EEG method, which involves placing dry and noninvasive electrodes on the scalp<cit.>. Nowadays, due to its high temporal resolution, portability, and affordability, this method is widely employed to study brain changes in response to emotional stimuli<cit.>. Traditional EEG features are mainly divided into three kinds, i.e., time domain, frequency domain, and time-frequency domain features. Given the signal-to-noise ratio and substantial fluctuations inherent in EEG signals, frequency domain features are commonly used for EEG-based emotion recognition tasks. The typical approach involves decomposing the raw signals into five frequency bands: δ, θ, α, β, γ. Frequency domain features, such as power spectral density (PSD)<cit.>, differential entropy (DE)<cit.>, differential asymmetry (DASM)<cit.> and rational asymmetry (RASM)<cit.>, are subsequently extracted from each frequency band respectively. The spatial structure of the brain also contains rich emotional information. 
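As a concrete illustration of the band-wise frequency-domain features just listed, the sketch below extracts DE values under the common approximation that a band-passed EEG segment is roughly Gaussian, so that DE = ½ ln(2πeσ²). The band edges, filter order, and 1 s windowing are illustrative assumptions, not a claim about any dataset's official preprocessing pipeline.

```python
# Minimal band-wise differential-entropy (DE) feature extraction sketch.
import numpy as np
from scipy.signal import butter, sosfiltfilt

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 31), "gamma": (31, 50)}   # Hz; requires fs > 100 Hz

def de_features(eeg, fs, win_sec=1.0):
    """eeg: (n_channels, n_samples) -> DE array of shape (n_windows, n_channels, n_bands)."""
    win = int(win_sec * fs)
    n_win = eeg.shape[1] // win
    feats = np.empty((n_win, eeg.shape[0], len(BANDS)))
    for b, (lo, hi) in enumerate(BANDS.values()):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        filtered = sosfiltfilt(sos, eeg, axis=1)
        for t in range(n_win):
            seg = filtered[:, t * win:(t + 1) * win]
            var = seg.var(axis=1) + 1e-12            # guard against log(0)
            feats[t, :, b] = 0.5 * np.log(2 * np.pi * np.e * var)
    return feats
```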
Emotional states may involve distributed circuits rather than considering a single brain region in isolation<cit.>. Asymmetry between the left and right hemispheres can reflect changes in valence and arousal<cit.>. Recent studies have highlighted the importance of utilizing spatial domain information. Li et al.<cit.> introduced recurrent neural networks to learn the asymmetric differences between the left and right hemispheres. Li et al.<cit.> also utilized hierarchical neural networks to learn both regional and global information of spatial-temporal EEG features. Graph Neural Networks (GNN) are emerging as a powerful tool for analyzing spatial information in EEG emotion recognition. Song et al.<cit.> dynamically learned relationships between EEG channels using graph convolutional networks (GCN). Zhong et al.<cit.> incorporated asymmetry of the left and right hemispheres into the adjacency matrix to model graph structure. Li et al.<cit.> also utilized adaptive graph convolutional networks that integrate multi-domain information to learn relationships between channels. Ding et al.<cit.> incorporated lobe information as prior knowledge into the GNN. Jiang et al.<cit.> proposed an elastic graph Transformer to extract emotional information. Although these methods have achieved excellent performance in emotion recognition tasks, they have a common issue: they all rely on GNNs based on neighborhood aggregation schemes which may pose potential risks such as over-smoothing<cit.>, under-reaching<cit.>, and over-squashing<cit.>. Additionally, these methods do not comprehensively consider the geometric and anatomical structure information of the brain. The main contributions of this paper are as follows: ∙ We propose a multi-view graph transformer based on spatial relations (MVGT), fusing information from multiple perspectives including temporal, frequency, and spatial domains. ∙ Our method, based on Graph Transformer, mitigates the potential risks of over-smoothing, under-reaching and over-squashing occurring in traditional GNNs. Additionally, it enhances the model's expressive power in emotion recognition by introducing spatial structural encoding based on geometric and brain lobe information. ∙ Extensive experiments conducted on public datasets SEED and SEED-IV show the effectiveness of our model in emotion classification tasks. § RELATED WORK In this section, we review the related work from the perspectives of EEG-based emotion recognition and graph transformer. §.§ EEG-based emotion recognition EEG signals are inherently noisy and susceptible to channel crosstalk<cit.>. Due to the complexity of EEG signals, it is challenging to isolate clean and independent signals. Therefore, it is crucial to select what form of data to analyze under conditions of high noise. Effective features of EEG signals can reduce noise and facilitate the recognition of cognitive patterns in specific tasks. Experimental evidence suggests that frequency domain features are often associated with behavioral patterns<cit.>, hence they are commonly used in EEG analysis. Along with the development of deep learning, increasingly complex models with rich expressive abilities have emerged and have been extensively utilized in EEG signal analysis. Zheng et al.<cit.> employed deep belief networks to analyze important frequency domain components and effective channels based on the learned parameters. 
Song et al.<cit.> used a graph convolutional method based on Chebyshev polynomials<cit.> to dynamically learn the representations of EEG signals. Zhong et al.<cit.> innovatively incorporated the asymmetric information of the hemispheres as prior knowledge into the adjacency matrix in 3D space and used GCN to dynamically learn the inter-channel correlations. The reasonable combination of the multi-domain information contributes to improving the accuracy in the emotion recognition task. Li et al.<cit.> proposed an adaptive graph convolutional network that integrates the temporal domain, frequency domain, and functional connectivity. Ding et al.<cit.>, inspired by neuroscience research, combined intra-region convolution and inter-region convolution based on brain lobe regions to learn brain cognitive patterns. Jiang et al.<cit.> utilized the advantages of GCN in the spatial domain and Transformer in the temporal domain to improve the accuracy of emotion classification. §.§ Graph Transformer The GNNs used in the above methods are based on neighborhood aggregation schemes. However, classical GNNs based on message passing (MPGNNs) may lead to over-smoothing<cit.>, under-reaching<cit.>, and may also fail to fit long-range signals due to over-squashing<cit.>, which limit the expressive power of the model. Graph transformers (GTs) alleviate such effects as they have a global receptive field<cit.>. However, without sufficiently expressive structural and positional encodings, GTs cannot capture effective graph structures<cit.>. Dwivedi et al.<cit.> utilized eigenvectors of graph Laplacian as position encodings in fully connected Graph Transformers and integrated edge features into the attention mechanism. Building on this, SAN<cit.> used a full Laplacian spectrum to learn the positional encodings for each node. Graphormer<cit.> employed node centrality and node distance metric to implement structural and relative positional encodings, achieving state-of-the-art performance on molecular prediction datasets. In EEG emotion recognition, Li et al.<cit.> innovatively combined a masked autoencoder based on self-supervised learning with a CNN-Transformer hybrid structure, effectively improving classification accuracy. However, this method only used sine-cosine positional encodings, limiting the Transformer's ability to learn spatial information. § PRELIMINARY §.§ Graph Neural Network (GNN) Let G = V, E define a graph, where V = v_1, v_2, ⋯, v_n represents the nodes in the graph, and E = e_1, e_2, ⋯, e_m is the edges between the nodes. The representation of node v_i is denoted as x_i ∈^d. Most existing GNNs<cit.> adopt neighborhood aggregation schemes, iteratively aggregating representations of its first or higher-order neighbors, followed by using backpropagation (BP) to learn task-driven feature encodings. We define the representation of node v_i at the l iteration as h_i^(l) and define h_i^(0) = x_i. The l-th iteration can be represented as: a_i^(l) = AGGREGATE^(l)φ_θ (h_j^(l), e_ji^(l)) : j∈𝒩(v_i) h_i^(l) = UPDATE^(l)h_i^(l-1), a_i^(l) where φ_θ represents a differentiable function used for feature transformation of node and edge information. The 𝒩(v_i) is the set of neighbors of v_i. The AGGREGATE function is used to aggregate the transformed representation using a differentiable, permutation invariant function, (such as mean, sum, max, etc.). The goal of UPDATE function is to integrate the information from neighbors into the node representation. 
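To make the AGGREGATE/UPDATE scheme above concrete, a minimal single message-passing layer with mean aggregation and a linear feature transform is sketched below; the choice of mean pooling, the ReLU update, and the weight names are illustrative simplifications (edge features are omitted).

```python
# One message-passing iteration in the AGGREGATE/UPDATE form, numpy only.
import numpy as np

def mp_layer(H, adj, W_self, W_neigh):
    """H: (n, d) node features; adj: (n, n) 0/1 adjacency without self-loops."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    agg = (adj @ H) / deg                                  # a_i = mean_{j in N(i)} h_j
    return np.maximum(0.0, H @ W_self + agg @ W_neigh)     # h_i = ReLU(update)
```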
For graph classification, the READOUT operation is typically used to obtain a representation of the entire graph, which is then fed into a classifier to determine the graph label. §.§ Graphormer The Transformer<cit.> is undeniably one of the most popular deep neural network architectures today, driving significant advancements in natural language processing and computer vision. With its global receptive field and multi-head attention mechanism, Transformer can extract global semantic correlations between tokens in multiple feature subspaces, effectively enhancing the model's expressive power. From the perspective of GNNs, Transformer can be interpreted as a GNN acting on a fully connected graph. Therefore, it is reasonable and feasible to use Transformer to address tasks on graph data. The ability to properly incorporate the structural information of graphs into the model is the key for leveraging its expressive power. Graphormer<cit.> can go beyond classical MPGNNs in expressive power and achieves state-of-the-art performance on large molecular benchmarks. Graphormer incorporates centrality encoding into the graph data and integrates spatial encoding, edge encoding into the attention mechanism, which can be expressed as: A_ij=h_iW_Qh_jW_K^T/√(d) + b_ϕ(v_i,v_j) + c_ij, where the bias term b_ϕ(v_i,v_j) can adaptively adjust the correlations between v_i and v_j. The c_ij represents the edge encoding on the shortest path. § METHODS In this section, we introduce the methods employed in the EEG emotion recognition task. Firstly, we elaborate on the embedding for temporal information. Secondly, leveraging the spatial geometry and physiological anatomy of the brain, we propose two novel and simple designs of encoding that enable the model to adaptively learn the inter-channel correlations. Finally, we present the detailed implementations of MVGT. §.§ Problem Definition EEG signals can be represented as a two-dimensional matrix with respect to channels and time. Given that channels exhibit spatial structure, they can be structured into fully connected graph data G = V, E, where V denotes the nodes in the graph, representing EEG channels, and E denotes the edges, representing the connections between channels. The features of the nodes are denoted by X = x_1, x_2, ⋯, x_n∈^n × d, where n = | V | represents the number of nodes and d represents the feature dimension. §.§ Temporal Embedding EEG signals have high temporal resolution and contain rich temporal information. Because of the multi-electrode acquisition method, EEG signals can be regarded as multivariate time series. When processing time series, the embedding of temporal information are crucial. EmoGT<cit.> treats the features of different channels at the same time points as tokens and employs an attention mechanism to extract temporal correlations between them. Due to the different anisotropic volume conduction characteristics<cit.> in human brain tissues, there may be temporal delays between different channels, which in turn leads to time-unaligned events at a single moment thus causing performance degradation. MD-AGCN<cit.> utilizes convolutional neural networks to extract temporal information along the time axis from continuous EEG segments, with the receptive field limited by the size of the convolution kernel. Inspired by iTransformer<cit.>, we broaden the receptive field by considering the entire time series as an embedded token rather than a single time point. 
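To illustrate the channel-as-token idea just described, the snippet below flattens each channel's windowed feature sequence and projects it to a single token. The exact windowing and projection used by MVGT are specified next; the shapes and the plain linear map here are assumptions made only for the sketch.

```python
# Illustrative channel-as-token embedding: one token per EEG channel, built
# from that channel's whole windowed feature sequence rather than per time point.
import numpy as np

def channel_tokens(segment, W_proj):
    """segment: (n_channels, T, f) windowed features; W_proj: (T*f, d) projection."""
    n, T, f = segment.shape
    return segment.reshape(n, T * f) @ W_proj      # -> (n_channels, d) tokens
```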
First, following the methods of MD-AGCN and EmoGT, we use overlapping sliding windows of size T to segment EEG signals along the time axis and use these segments as tokens, which are then fed into the attention module in the form of continuous segments. After processing with sliding windows, we obtain X∈^S × n × Tf, where S denotes the number of continuous EEG segments, n is the number of channels, and f denotes the dimension of frequency domain features. According to the universal approximation theorem<cit.>, the feed-forward neural network (FFN), as the basic module of the Transformer encoder, can learn the intrinsic properties to describe a time series and is a superior predictive representation learner compared to self-attention<cit.>. Therefore, using continuous time segments as the input to the FFN may be more effective in extracting the temporal information of each channel independently. §.§ Spatial Encoding The special structure of the brain encompasses rich spatial information. Fully exploiting structural information is beneficial for the recognition and analysis of cognitive patterns in the brain. Therefore, to better identify emotional patterns in emotion classification tasks, we employed two simple but effective methods of spatial encoding: brain region encoding and geometric structure encoding. §.§.§ Brain Region Encoding Neuroscience research demonstrated that the activation of a specific brain region often leads to the concurrent activation of related brain regions responsible for the same high-level cognition<cit.>. In EEG emotion recognition, incorporating relevant neuroscience findings can typically enhance recognition accuracy. RGNN<cit.> integrates the asymmetry of neural activity between the left and right hemispheres as prior knowledge into the adjacency matrix, effectively enhancing recognition accuracy. BiHDM<cit.> improves emotion pattern recognition performance by learning the differences between the left and right hemispheres. LGGNet<cit.> divides EEG channels into different regions and combines local intra-region convolution with global inter-region convolution, achieving good results on the DEAP<cit.> dataset. With reference to the three divisions of LGGNet, we adopt four brain region divisions, which divide the EEG channels into different regions based on a prior knowledge, aiming to incorporate the brain region information into the model. We divide the regions based on the anatomical structure of the brain and implement LOBE scheme. To further investigate the expressive power of brain region encoding, we conduct a detailed division of brain lobes according to the 10-20 system based on electrode positions, employing the GENERAL scheme. Asymmetric EEG activity in the frontal lobe can be utilized for discriminating valence changes<cit.>. The left frontal lobe exhibits a stronger correlation with joy and happy, while the right frontal lobe is more strongly correlated with fear and sadness. Thus we further divide the frontal lobe region into two symmetrical regions to obtain the FRONTAL scheme. According to the symmetry of brain structure<cit.>, we make a finer division of the brain lobe regions, defining the HEMISPHERE scheme. The four modes mentioned above are showed in Fig. <ref>. In terms of specific implementation, we assign each electrode a brain region tag, then project the tags into an embedding space using a learnable projection function, and simply add the embeddings to the node features. 
The encoding of node i is represented as follows: r_i = Embedding(Tag(x_i)), r_i ∈^d, h_i^(0) = x_i W_𝒳 + r_i, where W_𝒳∈^Tf × d is a learnable projection function, and d represents the dimension of the embedding. Through the above encoding method, we integrate the information of the brain's physiological anatomy into the model. §.§.§ Geometric Structure Encoding In the real world, the human reasoning process considers not only the semantic relationships between objects but also their spatial relations. EEG channels have a 3D structure, and the functional connectivity between these channels lack precise definitions. Therefore, we represent the relationships between EEG channels as a fully connected directed graph structure. The Euclidean distance between channels is calculated using their coordinates to learn the spatial correlations between nodes. Firstly, let ϕ(i,j) represents the Euclidean distance between node i and node j, and encode ϕ(i,j) using a set of Gaussian basis functions <cit.>. Let b_k ∈^n × n denotes one of the Gaussian basis functions. The element (i,j) of this function can be expressed as: b_k(i,j) = 𝒢_k α_ijϕ(i, j) + β_ij - μ_k, σ_k, where α_ij, β_ij, μ_k, and σ_k are learnable parameters, and i and j denote the index of the source and target node, respectively. The result of the basis functions can be represented as B = ‖_k=1 ^ K b_k ∈^n × n × K, where ‖ denotes the concatenation operation. All spatial encodings of each node are then summed up along the second dimension and transformed linearly to obtain the geometric structure encoding. h_i^(0) = x_i W_𝒳 + z_i W_𝒵 + r_i, z_i = ∑_j=1^n B_i,j,k, where i denotes the node index, and W_𝒵∈^K × d is a learnable projection function. Additionally, we incorporate the spatial encoding as a bias term into the softmax attention, which will help the model properly capture the spatial correlations. Our proposed spatial encoding matrix is directed, which is inconsistent with the assumption of a symmetric adjacency matrix<cit.>. Using directed connections provides the model with greater expressive power because the correlation between node pairs i, j and j, i may differ. Since we assume nodes are fully connected, we avoid specific assumptions about inter-channel correlations and learn the functional correlations between nodes through encoding. Let l denote the model depth, and i denote the index of multi-head attention. Therefore, the brain functional encoding can be represented as: A^l, i = SoftmaxH^lW_𝒬^l, iH^lW_𝒦^l, i^T/√(d^l) + BW_ℬ^i, where W_𝒬^l, i, W_𝒦^l, i and W_ℬ^i are learnable parameters, and d^l denotes the feature dimension size of the l-th layer. This encoding method integrates temporal, frequency, and spatial domain features into the model, enhancing its expressive power. We compute the attention scores between nodes using embedded vectors, representing the semantic correlations between different nodes from multiple perspectives. Finally, the attention scores are added to the spatial geometric encoding to obtain the correlations between channels. §.§ Implementation Details of MVGT In this section, we describe the overall architecture of the model, including spatial encoding and the Transformer encoder, as illustrated in Fig. <ref>. For better optimization, we first apply GraphNorm<cit.> to normalize the input features between 0 and 1. Subsequently, we perform geometric and regional structure encoding to obtain multi-domain embeddings. 
X' = GraphNorm(X), H^(0) = SpatialEncoding + Proj(X'), We employ a Pre-LN Transformer structure, applying layer normalization (LN) before the multi-head attention (MHA) and the FFN. Recent study suggests that the Pre-LN structure yields more stable gradients and is more favorable for optimizer, enabling faster convergence<cit.> compared to Post-LN. Additionally, we utilize dropout to mitigate overfitting. This process is represented as follows: H'^(l) = MHA(LN(H^(l-1))) + H^(l-1), H^(l) = FFN(LN(H'^(l))) + H'^(l), Inspired by <cit.>, we feed the outputs recursively into the same modules, denoted as recycling in Fig. <ref>. The iterative refinement progressively refines the model's ability to discriminate encoded information and understand emotional patterns, thereby helping the model capture more effective details. § EXPERIMENTS §.§ Datasets For our experiments, we selected the SEED<cit.> and SEED-IV<cit.> datasets to evaluate the effectiveness of our model. These datasets consist of EEG signals recorded from subjects while they watched emotion-eliciting videos. SEED dataset comprises data from 15 subjects who participated in three sessions, each separated by at least one week. Each sessions consists of 15 trials capturing emotional labels, with the emotion labels being positive, negative, and neutral. SEED-IV dataset is constituted by EEG signals from 15 subjects across three separate sessions conducted at different times, using the same device as the SEED dataset. This dataset encompasses four emotion labels: neutral, sad, fear, and happy. In each session, each subject underwent 24 trials. §.§ Settings To prevent potential data leakage that could arise from segment-wise shuffling, we split the training and test sets at the trial level. Following the settings of previous studies<cit.>, we use pre-computed differential entropy (DE) features for the recognition task. For the SEED dataset, we use the first 9 trials of each subject as the training set and the last 6 trials as the test set, as done in previous research. The DE features are computed using five frequency bands extracted from 1s nonoverlapping windows. The model performance is evaluated based on the average accuracy and standard deviation across all subjects over two sessions of EEG data. Similarly, for the SEED-IV dataset, we use the first 16 trials as the training set and the last 8 trials as the test set. The DE features for SEED-IV are calculated using 4s windows. The performance of our model is assessed using data from all three sessions. For input data, we use overlapping sliding windows of size T along the time axis to extract sample fragments, with T being set to 5. During experiments, the hidden dimension is set to 64 and the number of Gaussian basis functions is 32. The number of MHA layers is 4 and the number of attention heads is 2. The iterative refinement process is performed three times. We set the batch size to 32 and the learning rate within the range of 3e-5 to 3e-3. Cross-entropy is used as the loss function, and AdamW<cit.> is employed as the optimizer with a weight decay rate of 0.1. §.§ Baseline Models ∙ DGCNN<cit.>: A dynamic graph neural network method based on Chebyshev polynomials dynamically learns inter-channel relations in emotion recognition. ∙ BiHDM<cit.>: This model employs a pairwise subnetwork to capture the discrepancy between the left and right hemispheres of the brain. ∙ R2G-STNN<cit.>: A model that captures spatial-temporal features from local to global scales for emotion classification. 
∙ RGNN<cit.>: A regularized GNN that learns topological relationships between channels. ∙ MD-AGCN<cit.>: An adaptive GNN that comprehensively considers the temporal domain, frequency domain, and brain functional connectivity. ∙ MV-SSTMA<cit.>: A multi-view masked autoencoder combining CNN and Transformer for emotion recognition. ∙ EmoGT<cit.>: An elastic graph Transformer network that integrates temporal and spatial information. §.§ Results Analysis We compare the classification results on the SEED and SEED-IV datasets with recent state-of-the-art models, as shown in Table <ref>. It is evident that our proposed model significantly outperforms the baseline models under the same experimental settings. In the experiments on the SEED dataset, the model adopting the FRONTAL scheme achieved the best performance, with a classification accuracy of 96.45%. The LOBE scheme also achieved an accuracy of 95.36%, slightly higher than that of the other models. For the SEED-IV dataset, the classification accuracy under the GENERAL scheme was 93.57%, the best performance compared to the baseline models. The MVGT model also achieved commendable results under the other division schemes. Overall, our model achieved the best recognition accuracy compared to the baselines. The results suggest that selecting the specific division scheme relevant to the emotion task could enhance the expressive power of MVGT. Figs. <ref> and <ref> illustrate the confusion matrices of MVGT-F on SEED and MVGT-G on SEED-IV, respectively. The values represent the recognition accuracy of the model for the different emotion classes. For the SEED dataset, our model achieved the highest accuracy in recognizing positive emotions (98.12%), followed by neutral emotions (96.38%), with negative emotions being slightly lower (94.73%). Only 0.33% of positive emotion samples were misclassified as negative, while only 0.76% of negative emotion samples were recognized as positive, indicating the model's effectiveness in distinguishing valence changes. For the SEED-IV dataset, our model performed best in recognizing neutral emotions, with an accuracy of 95.90%, while its performance on happy emotions was slightly lower than on the other three emotions, with an accuracy of 90.76%. This could be attributed to the GENERAL scheme setting, making the model more sensitive to balanced emotions. Our model achieved state-of-the-art performance on both the SEED and SEED-IV datasets, primarily due to our comprehensive consideration of frequency, temporal, and spatial geometric information, combined with prior knowledge from neuroscience. The incorporation of relevant brain region schemes into the model significantly contributed to its success. §.§ Ablation Study To validate the effectiveness of the spatial encodings, we conducted ablation experiments on the SEED and SEED-IV datasets, as presented in Table <ref>. By removing both types of spatial encoding, we repeated the aforementioned experiments under the same experimental settings. On the SEED dataset, the model achieved an accuracy of 93.79% with a standard deviation of 7.15%. Compared to MVGT-F, the accuracy decreased by 2.66% and the standard deviation increased by 2.75% after removing the spatial encodings. For the SEED-IV dataset, the accuracy dropped by 4.08%, resulting in 89.49%, with the standard deviation rising by 1.80% to 10.40% when compared to MVGT-G. 
The experiments demonstrate that incorporating spatial structure information benefits the model performance in emotion recognition tasks. Under experimental settings that consider only geometric structure or brain region structure, the model's classification accuracy improved over the plain model without any spatial encoding. Evidently, when considering both types of spatial structures simultaneously, the model performance significantly surpassed that of the plain model and models using only single spatial information. This indicates the effectiveness of our proposed spatial encodings and confirms that the expressive power of the Graph Transformer relies on the spatial structure and positional encoding. §.§ Visualization of Inter-channel relations To better illustrate the correlations between channels, we visualized the inter-channel relations of MVGT-F on the SEED and MVGT-G on the SEED-IV. Given that the inter-channel relations might vary among different subjects, we calculated the average weights across all subjects. We focused on the last iteration of iterative refinement and selected the 10 strongest connections of channel pairs. Fig. <ref> shows the visualization results, where the rows represent the attention heads and the columns represent the layers of the MHA. The parameters based on the SEED dataset indicate that emotion patterns are reflected in the activities of multiple brain regions. In the first layer of MVGT-F, the channels in the left frontal region had higher participation in the first attention head, while the channels in the right frontal region were more involved in the second head, potentially corresponding to positive and negative emotion patterns<cit.>, respectively. In the second layer of the model, the parietal and occipital regions showed higher involvement, which aligns with the findings on emotion patterns in <cit.>. As the model depth increases, the symmetrical connections in the lateral temporal regions of both hemispheres are enhanced, consistent with previous research by <cit.>. For the SEED-IV dataset, the connections in the frontal, parietal, and occipital regions are the most active, consistent with the findings of <cit.>. In the first attention head of MVGT-G, the strongest correlation was between O1 and PO3, followed by P4 and P2. Other connections were mainly distributed in the temporal and frontal regions. In the second head, the channel pairs (O1, PO5), (CB1, PO7), and (PO5, PO7) contributed the most to emotion recognition. Additionally, the connection between AF3 and FP1 provided important information for emotion processing, which aligns with the conclusions of <cit.>. Overall, our model does not focus solely on the local information of a single brain region but comprehensively considers intra-regional and inter-regional information. This confirms that emotional states result from interactions among widely distributed functional networks in the brain, as discussed by <cit.>. § CONCLUSIONS In this paper, we propose a multi-view Graph Transformer based on spatial relations for EEG emotion recognition. This model integrates information from multiple perspectives, including temporal, frequency and spatial domains. We incorporate spatial geometric encoding and brain region encoding to enhance the Graph Transformer's ability to perceive spatial structures. Additionally, the model adaptively learns inter-channel relationships through an attention mechanism and the encoding of channel geometry. 
Extensive experiments on public emotion recognition datasets demonstrate that our proposed model outperforms other competitive baseline models. Furthermore, analysis of channel correlations indicates that emotional activities in the brain are not confined to a single local region but result from the coordinated action of multiple brain areas. The frontal, parietal, occipital, and lateral temporal lobes all contribute to the emotion recognition tasks in varying degrees. In future work, we will focus on the following aspects: (1) designing more optimal structural encodings, such as data-driven methods for adaptive structural encoding; (2) attempting to combine various handcrafted features and exploring the possibility of extracting effective EEG features through neural networks; (3) investigating emotion recognition methods based on multimodal physiological signals.
http://arxiv.org/abs/2407.02016v1
20240702073820
Integral Representations of Riemann auxiliary function
[ "Juan Arias de Reyna" ]
math.NT
[ "math.NT", "Primary 11M06, Secondary 30D99" ]
Arias de Reyna]J. Arias de Reyna Universidad de Sevilla Facultad de Matemáticas c/Tarfia, sn 41012-Sevilla Spain. [2020]Primary 11M06; Secondary 30D99 arias@us.es, ariasdereyna1947@gmail.com § ABSTRACT We prove that the auxiliary function (s) has the integral representation (s)=-2^s π^se^π i s/4/Γ(s)∫_0^∞ y^s1-e^-π y^2+πω y/1-e^2πω y dy/y, ω=e^π i/4, s>0, valid for σ>0. The function in the integrand 1-e^-π y^2+πω y/1-e^2πω y is entire. Therefore, no residue is added when we move the path of integration. Integral Representations of Riemann auxiliary function. [ July 8, 2024 ======================================================= § INTRODUCTION The auxiliary function of Riemann is defined by the integral (s)=∫_0↙1x^-s e^π i x^2/e^π i x- e^-π i x dx. The position of the zeros of this function is connected with the zeros of the Riemann zeta function <cit.>. In Section <ref> we prove the new integral representation (s)=-2^s π^se^π i s/4/Γ(s)∫_0^∞ y^s1-e^-π y^2+πω y/1-e^2πω y dy/y, ω=e^π i/4, s>0, Its main interest is that it gives (s) as a Mellin transform of an entire functions. Therefore, changing the path of integration in this integral does not add residues. This gives new opportunities to bound (s) without the need to bound a zeta sum. Perhaps useful to prove the Lindelöf hypothesis. In Section <ref> we give a new proof, starting from (<ref>), of the representation integral (s)=ω e^π i s/4sinπ s/2∫_0^+∞y^-se^-π y^2/sinπω y dy, proved in <cit.>. Section <ref> gives a form of (<ref>) for s on the critical line: (12+it)=(1+ie^-π t)e^-2iϑ(t)∫_0^∞e^itlog x/√(x)e^-π i/2(x^2+x)sin(π/2(x^2-x))/sin(π x) dx, where the improper integral is convergent. § FIRST EXPRESSION OF R(S) We start from an integral representation given by Gabcke <cit.>. Namely, (s)=-2^s π^s/2e^π i s/4∫_-∞^∞e^-π x^2H_-s(x√(π))/1+e^-2πω x dx, where ω=e^π i/4 and H_ν(s) denotes the Hermite function as defined in the book by Lebedev <cit.>*Ch. 10. The equivalence of representation (<ref>) with the one in Gabcke <cit.> is shown in Arias de Reyna <cit.> where an alternative proof is also given. For σ= s>0 we have (s)=-2^s π^se^π i s/4/Γ(s)∫_0^∞ y^s-11-e^-π y^2+πω y/1-e^2πω y dy. The function H_ν(s) is an entire function with power series expansion <cit.>*eq.(10.4.3) (except when ν is a nonnegative integer in which case H_ν(z) are the usual Hermite polynomials) H_ν(z)=1/2Γ(-ν)∑_n=0^∞ (-1)^n Γ(n-ν/2)(2z)^n/n!. Hence we have for ν<0 (with easy justification) H_ν(z)=1/2Γ(-ν)∑_n=0^∞∫_0^∞ y^n-ν/2-1e^-y dy (-2z)^n/n!=1/2Γ(-ν)∫_0^∞ y^-ν/2e^-y-2z√(y)dy/y. Changing variables, y by π y^2 H_-s(x√(π))=π^s/2/Γ(s)∫_0^∞ y^se^-π y^2-2π xydy/y, s>0. So, (<ref>) implies that (s)=-2^s π^se^π i s/4/Γ(s)∫_-∞^∞e^-π x^2/1+e^-2πω x(∫_0^∞ y^se^-π y^2-2π xydy/y) dx, σ>0. Since ∫_0^∞ |y^se^-π y^2-2π xy|dy/y≤∫_0^∞ y^σe^-π y^2-2π xydy/y=Γ(σ)/π^σ/2H_-σ(x√(π)). By the asymptotic expansion (see Lebedev <cit.>*(10.6.6) and (10.6.7) for σ fixed and x∈ with |x|→+∞ we have H_-σ(x√(π))∼ (-x√(π))^-σ, x→+∞; -√(π)e^-π iσ/Γ(σ)e^π x^2(-x√(π))^σ-1, x→-∞. So, Γ(σ)/π^σ/2∫_-∞^∞|e^-π x^2/1+e^-2πω xH_-σ(x√(π))| dx<+∞, σ>0. By Fubini's Theorem we can change the order of integration. (s)=-2^s π^se^π i s/4/Γ(s)∫_0^∞ y^se^-π y^2(∫_-∞^∞e^-π x^2-2π xy/1+e^-2πω x dx)dy/y, σ>0. In Siegel's paper about Riemann's nachlass we find ∫_0↖1e^-π i u^2+2π i uy/e^π i u-e^-π i u du= 1/1-e^-2π i y-e^π i y^2/e^π i y-e^-π i y. Putting u=1/2+ω^3 x we obtain ∫_0↖1e^-π i u^2+2π i uy/e^π i u-e^-π i u du= e^π i y∫_-∞^∞e^-π x^2-2πω xy/1+e^-2πω x dx. 
It follows that ∫_-∞^∞e^-π x^2-2πω xy/1+e^-2πω x dx =-e^-π i ye^π i y^2-e^π i y/e^π i y-e^-π i y. Hence, for σ>0 (s)=-2^s π^se^π i s/4/Γ(s)∫_0^∞ y^se^-π y^2(-e^-πω ye^π y^2-e^πω y/e^πω y-e^-πω y)dy/y, σ>0. That is equivalent to (<ref>). The function F(z)=1-e^-π z^2+πω z/1-e^2πω z appearing in (<ref>) is entire. The zeros of the denominator are also zeros of the numerator. For y→+∞ we have |F(y)|∼ e^-π y√(2). The x-ray (see Figure <ref>) shows that the function is relatively small in |(z)|<π/4 and in the opposite quadrant. We see that there is a line of zeros along the lines that separate these quadrants on the other, where the function behaves as e^-π y^2. § SECOND INTEGRAL REPRESENTATION OF R(S) For σ<0 we have (s)=-ω e^-π is/4(1-e^π i s)∫_0^∞ y^-se^-π y^2+πωy/1-e^2πωy dy, σ<0. For σ>0, we have the representation (<ref>). For σ>1 it is easily proved that ∫_0^∞ y^s-11/1-e^2πω y dy=-e^-π i s/4(2π)^-sΓ(s)ζ(s). Therefore, for σ>1, (<ref>) can be written as (s) =ζ(s)+2^s π^se^π i s/4/Γ(s)∫_0^∞ y^s-1e^-π y^2+πω y/1-e^2πω y dy =ζ(s)+χ(s)e^-π i s/4(1+e^π i s)∫_0^∞ y^s-1e^-π y^2+πω y/1-e^2πω y dy, where χ(s) is the function that appears in the functional equation, it is given by χ(s)=(2π)^s/2Γ(s)cos(π s/2). In <cit.> it is proved that ζ(s)=(s)+χ(s)(1-s). Therefore, the above equation is equivalent to (1-s)=-e^-π i s/4(1+e^π i s)∫_0^∞ y^s-1e^-π y^2+πω y/1-e^2πω y dy Therefore, putting 1-s instead of s, we get for σ<0 (s)=-e^-π i(1-s)/4(1-e^-π i s)∫_0^∞ y^-se^-π y^2+πω y/1-e^2πω y dy Taking the complex conjugate of both members yields (s)=-e^π i(1-s)/4(1-e^π i s)∫_0^∞ y^-se^-π y^2+πωy/1-e^2πωy dy. We get (<ref>) by putting s instead of s. The equation (<ref>) is another way to write <cit.>*eq. (15). That is, it is equivalent to saying that for σ<0 we have (s)=ω e^π i s/4sinπ s/2∫_0^∞y^-se^-π y^2/sin(πω y) dy. § R(S) AT THE CRITICAL LINE Let F(z) be the function in the integrand of equation (<ref>), that is, F(z):=1-e^-π z^2+πω z/1-e^2πω z. We need several lemmas on this function. There is an absolute constant C such that for δ>0 |F(δ+ω x)|≤ C/δ, x≥0. By definition F(δ+ω x)=1-e^-π(δ^2+2δω x+ix^2)+πωδ+π i x/1-e^2πδω+2π i x. And |1-e^2πδω+2π i x|≥ e^π√(2)δ-1≥π√(2)δ. |1-e^-π(δ^2+2δω x+ix^2)+πωδ+π i x|≤ 1+e^-πδ^2-π√(2)δ x+πδ/√(2)≤ 1+e^-πδ^2+πδ/√(2)≤ 1+e^π/8. It follows that |F(δ+ω x)|≤1+e^π/8/π√(2)δ≤2/3δ. There is an absolute constant C such that for R>8 and 0<x<R we have |F(R+ix)|≤ C R. In any case we have |e^-π z^2+πω z|≤ 1. To see it, notice that -π z^2+πω z=-π (R^2+2iR x-x^2)+πω (R+ix). So, |e^-π z^2+πω z|=e^-π(R^2-x^2)+π/√(2)(R-x)=e^-π(R-x)(R+x+2^-1/2)≤ 1. Therefore, for |1-e^2πω z|≥1/R we have |F(R+ix)|≤2R. Next, assume that |1-e^2πω z|<1/R, and try to prove that, in this case, we also have |F(R+ix)|≤2R. For |w|<1/4, we have 1/2<|e^w-1/w|<4. If |1-e^w|<ϵ<1/8 we have, for some n∈ |w-2nπ i|=|log(1-(1-e^w))|≤∑_k=1^∞ϵ^k/k<1/4. Therefore, the points where |1-e^w|<ϵ, are of the form w=2nπ i +u, with |u|<2ϵ and n∈. Therefore, for R>8, |1-e^2πω z|<1/R implies 2πω z=2nπ i +u, with |u|<2/R. We will have z=nω+u/2πω, and e^2πω z=e^u. Then |1-e^-π z^2+πω z|=|1-e^-π (i n^2+nu/π+u^2/4π^2 i)+π i n+u/2|=|1-e^-nu-u^2/4π i+u/2| Since z=R+ix=nω+u/2πω, we have R=n/√(2)+u/2πω, then n=√(2)R-√(2)u/2πω. It follows that |-nu-u^2/4π i+u/2|≤|-√(2)u R+u√(2)u/2πω+i u^2/4π+u/2|≤ 2√(2)+4/√(2)π R^2+1/π R^2+1/R<4. And there is a constant such that |(e^z-1)/z|≤ C for |z|≤ 4. Therefore, we have for |1-e^2πω z|<1/R |1-e^-π z^2+πω z/1-e^2πω z|≤ C|-nu-u^2/4π i+u/2|/|1-e^u|≤ C|-nu-u^2/4π i+u/2|/|u|/2≤ 2C |-n-u/4π i+1/2|. 
Therefore, |F(R+ix)|≤ 2C(√(2)R+√(2)/π R+1/2π R+1/2)≤ 4CR. There is an absolute constant C such that |F(R+ix)|≤ Cmin(R,(R-x)^-1), for R≥ 8 and 0≤ x≤ R. The first inequality is true by Lemma <ref>. For the second inequality, notice that R+ix=(R-x)+x(1+i)=δ+iω y with δ=R-x and y=√(2)x. Therefore, by Lemma <ref>, we have |F(R+ix)|≤ C/(R-x). For s=1/2+it with t>0 we have (12+it)=e^π i/4(2π)^1/2+ite^-π t/2/Γ(1/2+it)∫_0^∞e^itlog x/√(x)e^-π i/2(x^2+x)sin(π/2(x^2-x))/sin(π x) dx. Let s=1/2+it, with t>0. By Cauchy's Theorem, for R>0, we have ∫_0^R+iRz^s-1F(z) dz=∫_0^R z^s-1F(z) dz+∫_R^R+iRz^s-1F(z) dz. For the first of these two integrals, by Proposition <ref>, we have lim_R→+∞∫_0^R z^s-1F(z) dz=∫_0^∞ z^s-1F(z) dz=-Γ(s)/(2π)^s e^π i s/4(s). For the second integral, we have the bound |I_2|:=|∫_R^R+iRz^s-1F(z) dz|≤∫_0^R|(R+ix)^-1/2+itF(R+ix)| dx. We have |(R+ix)^-1/2+it| =|exp((-12+it)(12log(R^2+x^2)+iarctan(x/R))|≤ (R^2+x^2)^-1/4 ≤ R^-1/2. By Lemma <ref> we have |I_2|≤ C R^-1/2(∫_0^R-1/Rdx/R-x+∫_R-1/R^R CR dx)≤ C R^-1/2(2log R+C). Therefore, the improper integral converges and ∫_0^∞ (ω x)^s-1F(ω x)ω dx=-Γ(s)/(2π)^s e^π i s/4(s). Notice that (ω x)^s-1=exp((-12+it)(log x+π i4)=e^-π t/4/√(x) e^itlog x-π i/8, and F(ω x)=1-e^-π i x^2+π i x/1-e^2π i x=-e^-π i/2(x^2-x)/e^π i xsin(π/2(x^2-x))/sin(π x)=-e^-π i/2(x^2+x)sin(π/2(x^2-x))/sin(π x). Substituting and reordering, we get (<ref>). For t>0 we have (12+it)=(1+ie^-π t)e^-2iϑ(t)∫_0^∞e^itlog x/√(x)e^-π i/2(x^2+x)sin(π/2(x^2-x))/sin(π x) dx By (<ref>) we only need to show that e^π i/4(2π)^1/2+ite^-π t/2/Γ(1/2+it)=(1+ie^-π t)e^-2iϑ(t). Putting s=1/2+it, we have t=i(1/2-s). The left-hand side is transformed into e^π i/4(2π)^1/2+ite^-π t/2/Γ(1/2+it)=(2π)^s e^π i s/2/Γ(s)=(1+e^π i s)(2π)^s/2Γ(s)cosπ s/2. From Titchmarsh <cit.>*4.17 we get e^-2iϑ(t)=χ(s), and χ(s)=(2π)^s/2Γ(s)cosπ s/2. § TRYING TO BOUND R(S) We have not been able to take advantage of integrals to bound (s). In this section, we expose one of these failed attempts. For s=1/2+it with t∈ and a∈ fixed, we have lim_R→+∞∫_R^R+iaz^s-11-e^-π z^2+πω z/1-e^2πω z dz=0. For |a|<R and z=R+ix with |x|<|a| we have |z^s-1|=|exp((-12+it)(log(R^2+x^2)^1/2+iarctanxR))| = exp(-12log(R^2+x^2)^1/2-tarctanxR)≤ R^-1/2e^π |t|/2. There is some R_0(a) such that for R>R_0(a) |1-e^-π z^2+πω z|=|1-e^-π(R^2+2iRx -x^2)+π1+i/√(2)(R+ix)| ≤ 1+e^-π(R^2-x^2)+π R/√(2)-π x/√(2)≤ 1+e^π a^2+π |a|/√(2)e^-π R^2+π R/√(2)≤ 2, and |1-e^2πω z|=|1-e^π√(2)(1+i)(R+ix)|≥ e^π R√(2)-π x√(2)-1≥ e^-π√(2)|a|e^π R√(2)-1≥ e^π R. Therefore, |∫_R^R+iaz^s-11-e^-π z^2+πω z/1-e^2πω z dz|≤ 2R^-1/2e^π |t|/2 e^-π R|a|. From the Lemma and Cauchy's Theorem it follows directly the next Proposition. For any a∈ we have (s)=-2^s π^se^π i s/4/Γ(s)∫_Γ_a z^s-11-e^-π z^2+πω z/1-e^2πω z dz, where Γ_a is the path composed of the segment [0,a] and the half-line [a,a+∞) parallel to the real axis. For any t∈ and ρ>0 we have (12+it)=e^-π t/2/Γ(1/2+it)e^itlog t(K_1+K_2), where K_1 =ω t^1/2∫_0^ρe^itlog x-itx/√(x)1-e^-it^2x^2/4π+itx/2/1-e^-itx dx =ω e^-itlog t∫_0^ρ te^itlog x-ix/√(x)1-e^-ix^2/4π+ix/2/1-e^-ix dx, K_2=(ωρ t)^1/2e^it(logρ-ρ)∫_0^∞e^π/4 t+itlog(ω+x)-tρω x/√(ω+x)1-e^-ρ^2t^2/4π(ω+x)^2+ρ t/2(i+ω x)/1-e^-ρ t(i+ω x) dx. 
Taking a=ρ tω/2π in Proposition <ref> we get (12+it)=-2^s π^se^π i s/4/Γ(s)(∫_0^ρ tω/2π z^s-11-e^-π z^2+πω z/1-e^2πω z dz+ ∫_ρ tω/2π^ρ tω/2π+∞ z^s-11-e^-π z^2+πω z/1-e^2πω z dz) In the first integral, putting z=tω/2π x with 0<x<ρ we get ∫_0^ρ tω/2π z^s-11-e^-π z^2+πω z/1-e^2πω z dz=∫_0^ρ(t/2π)^-1/2e^-π t/4e^itlogtx/2π-π i/8/√(x)1-e^-it^2x^2/4π+itx/2/1-e^itxtω/2π dx =-(t/2π)^1/2e^-π t/4e^itlogt/2π+π i/8∫_0^ρe^itlog x-itx/√(x)1-e^-it^2x^2/4π+itx/2/1-e^-itx dx In the second integral taking z=ρ t/2π(ω+x) with 0<x<+∞, we get ∫_ρ tω/2π^ρ tω/2π+∞-4mu z^s-11-e^-π z^2+πω z/1-e^2πω z dz= (ρ t/2π)^1/2e^itlogρ t/2π-10mu∫_0^∞e^itlog(ω+x)/√(ω+x)1-e^-ρ^2t^2/4π(ω+x)^2+ρ t/2(i+ω x)/1-e^ρ t(i+ω x) dx =-(ρ t/2π)^1/2e^itlogρ t/2π-iρ t∫_0^∞e^itlog(ω+x)-tρω x/√(ω+x)1-e^-ρ^2t^2/4π(ω+x)^2+ρ t/2(i+ω x)/1-e^-ρ t(i+ω x) dx. Joining this, we get the equality (12+it)=e^-π t/2/Γ(1/2+it)e^itlog t(K_1+K_2), where K_1 and K_2 are given by (<ref>) and (<ref>). By Stirling expansion, we have for t→+∞ e^-π t/2/Γ(1/2+it)=1/√(2π)e^-(itlog t-it)(1+(t^-1)). Therefore, the Lindelöf hypothesis is equivalent to K_1+K_2≪ t^ε. We will prove that K_2=(1), but this does not seem to bring us any closer to Lindelof's hypothesis. For t>0, ρ>1 and ρ t>π we have the bound |K_2|≤2^5/4/|sin(ρ t/2)|(πρ/ρ-1)^1/2. We have |1-e^-ρ t(i+ω x)|≥ 1-e^-ρ t x/√(2), and |1-e^-ρ t(i+ω x)|=|e^ρ i t/2-e^-ρ i t/2-ρ tω x|= |e^ρ i t/2-e^-ρ i t/2+e^-ρ i t/2(1-e^-ρ tω x)| 2|sin(ρ t/2)|-(1-e^-ρ t x/√(2)). From both we derive that in general |1-e^-ρ t(i+ω x)|≥ |sin(ρ t/2)|. Also, |1-e^-ρ^2t^2/4π(ω+x)^2+ρ t/2(i+ω x)|≤ 1+e^-ρ^2t^2/4π√(2)x-ρ^2t^2/4πx^2+ρ t /2√(2)x. The exponent is maximum at x=π-ρ t/√(2) ρ t Assuming ρ t>π we have -ρ^2t^2/4π√(2)x-ρ^2t^2/4πx^2+ρ t /2√(2)x<0 for x>0 and therefore |1-e^-ρ^2t^2/4π(ω+x)^2+ρ t/2(i+ω x)|≤ 2. It follows that |K_2|≤ (ρ t)^1/22/|sin(ρ t/2)|∫_0^∞e^π t/4-tarctan1/1+x√(2)-tρ x/√(2)/√(|ω+x|) dx It is easy to show that for x>0 π/4-arctan1/1+x√(2)-ρ x/√(2)≤1-ρ/√(2)x. Hence assuming ρ>1 and t>0 we get ∫_0^∞e^π t/4-tarctan1/1+x√(2)-tρ x/√(2)/√(|ω+x|) dx≤∫_0^∞e^t(1-ρ) x/√(2)/√(x) dx=2^1/4√(π)/√(t(ρ-1)). 999 AS M. Abramowitz, I. A. Stegun, Handbook of Mathematical Functions with Formulas, Graphs and Mathematical Tables, National Bureau of Standards Appl. Series, vol 55, U. S. Government Printing Office, Washington, DC, 1964. A166 Arias de Reyna, J., Riemann's auxiliary function. Basic results, https://arxiv.org/abs/2406.02403arXiv:2406.02403. A187 Arias de Reyna, J., An integral representation of (s) due to Gabcke, https://arxiv.org/abs/2407.01028arXiv:2407.01028. G https://arxiv.org/abs/1512.01186W. Gabcke, A Parabolic Cylinder Function in the Riemann-Siegel Integral Formula, arXiv: 1512.01186v1, 6pp. (2015). L N. N. Lebedev, Special functions and their applications, Revised ed., translated from the Russian and ed. by Richard A. Silverman. Dover Publ. Inc., New York 1972. T E. C. Titchmarsh The Theory of the Riemann Zeta-function, Second edition. Edited and with a preface by D. R. Heath-Brown. The Clarendon Press, Oxford University Press, New York, 1986.
http://arxiv.org/abs/2407.03125v1
20240703140741
Foundations and Frontiers of Graph Learning Theory
[ "Yu Huang", "Min Zhou", "Menglin Yang", "Zhen Wang", "Muhan Zhang", "Jie Wang", "Hong Xie", "Hao Wang", "Defu Lian", "Enhong Chen" ]
cs.LG
[ "cs.LG", "cs.AI" ]
§ ABSTRACT Recent advancements in graph learning have revolutionized the way to understand and analyze data with complex structures. Notably, Graph Neural Networks (GNNs), i.e. neural network architectures designed for learning graph representations, have become a popular paradigm. With these models being usually characterized by intuition-driven design or highly intricate components, placing them within the theoretical analysis framework to distill the core concepts, helps understand the key principles that drive the functionality better and guide further development. Given this surge in interest, this article provides a comprehensive summary of the theoretical foundations and breakthroughs concerning the approximation and learning behaviors intrinsic to prevalent graph learning models. Encompassing discussions on fundamental aspects such as expressiveness power, generalization, optimization, and unique phenomena such as over-smoothing and over-squashing, this piece delves into the theoretical foundations and frontier driving the evolution of graph learning. In addition, this article also presents several challenges and further initiates discussions on possible solutions. Graph machine learning, graph neural network, learning theory, generalization, expressive power, optimization. Foundations and Frontiers of Graph Learning Theory Yu Huang, Min Zhou, Menglin Yang, Zhen Wang, Muhan Zhang, Jie Wang, Hong Xie, Hao Wang, Defu Lian, and Enhong Chen, Fellow, IEEE Y. Huang, J. Wang, H. Xie, H. Wang, D. Lian and E. Chen are with the University of Science and Technology of China, Hefei, Anhui, 230027. E-mail: hy123123@mail.ustc.edu.cn, {jiewangx, hongx87, wanghao3, liandefu, cheneh}@ustc.edu.cn. M. Zhou is with Huawei and the work is independent to the position or resource in the company. E-mail: zhoum1900@163.com. M. Yang is with Chinese University of Hong Kong. E-mail: menglin.yang@outlook.com. Z. Wang is with Sun Yat-sen University. E-mail: joneswong.ml@gmail.com. M. Zhang is with Peking University. E-mail: muhan@pku.edu.cn. § INTRODUCTION Real-world datasets can be naturally represented as graphs, where nodes represent entities interconnected by edges denoting relationships. Graph related tasks encompass a broad spectrum, spanning node classification <cit.>, link prediction <cit.>, graph classification/regression <cit.>, as well as generation tasks <cit.>. The applications of these tasks extend to diverse areas such as property prediction in molecules <cit.>, traffic analysis <cit.>, social network analysis <cit.>, physics simulations <cit.>, and combinatorial optimization <cit.>. 
Solving these diverse tasks demands a sufficiently rich embedding of the graph or node that captures structural properties as well as attribute information. While graph embeddings have been a widely-studied topic, including spectral embeddings and graph kernels, recently, Graph Neural Networks (GNNs) have emerged as an empirically and broadly successful model class that, as opposed to, e.g., spectral embeddings, allows to adapt the embedding to the task at hand, generalizes to other graphs of the same input type, and incorporates attributes. The objective of graph learning entails the discovery of a function that can approximate the target function based on the available information of a given graph. This process involves several sequential steps. Firstly, it identifies a comprehensive set of functions, such as graph neural networks, capable of representing the target function with sufficient expressiveness. Subsequently, the function that provides the best approximation of the target function is estimated by minimizing a designated loss function (such as noise contrastive estimation loss or cross-entropy). Finally, the estimated function from the previous step is utilized to predict outcomes on test data, resulting test errors that composed of error accumulated in the above steps. In crafting an optimal graph model, three pivotal considerations typically shape the approach: min_F ∈ℱ 𝔼_(G,y) ∼𝒫[ ℓ( G,y, F(G)) ] ≡min_F ∈ℱ ℛ(F). * Expressiveness also known as representation power explores if target functions can be approximated well by a graph model. For functions on graphs, representational power has mainly been studied in terms of graph isomorphism, i.e., studying the graphs that a designed model can distinguish or not. Topics to this question include graph isomorphism testing, subgraph counting, representing invariance/equivariance under permutations, etc. * Generalization asks how well the estimated function is performing according to the population risk, as a function of the number of data points and model properties. To quantify the generalization ability, the generalization error bound is a good measurement. Typically, generalization analyses involve the complexity of the model class, the target function, the data, and the optimization procedure. * Optimization investigates conditions under which training algorithms like gradient descent or training tricks can be shown to be provably effective in terms of faster convergence or good generalization of a given GNN. Possible solutions are training tricks or strategies to ensure the algorithm converges to the global or acceptable local minima as soon as possible. By carefully addressing one or more of the aforementioned aspects mentioned-above, considerable graph learning models have emerged daily, as consolidated in references <cit.>. Since the models evolve to be characterized by highly intricate network architectures to meet the diverse scenario and high performance needs, the importance of comprehending the fundamental principles underlying these models becomes evident. Regarding the rapid growth of theoretical analysis on graph models, there are fragments in the area <cit.>, with overlap in certain subjects covered but different focuses. A holistic landscape of the progress and advancements in graph learning is still vacant. 
For instance, Jegelka <cit.> summarizes a selection of emerging theoretical results on the approximation and generalization properties of message passing GNNs, with an emphasis on the mathematical connections, and is written primarily for mathematicians. Regarding expressiveness, Morris et al. <cit.> provide an extensive review of algorithms and neural architectures based on the Weisfeiler-Leman algorithm, a well-known heuristic for the graph isomorphism problem. Zhang et al. <cit.> further broaden the regime to popular GNN variants through the lens of graph bi-connectivity. In the domain of deep learning, training strategies and hyper-parameter selections play crucial roles. However, current surveys on optimization in deep learning primarily focus on general machine learning or neural networks operating on Euclidean data <cit.>. There is a clear need for a comprehensive summary of optimization strategies tailored specifically to graph structures to enhance the performance and efficiency of graph neural networks and related models. Very recently, Morris et al. <cit.> spotlight future directions in the theory of graph machine learning; their work is the most closely related to this article, but it only provides a brief summary of existing results. To fill the gap, this article delves into the aforementioned three aspects by giving precise mathematical settings and summarizing theoretical tools as well as current progress, offering readers a resource on the design principles for graph-structured data. In practical applications, the ideal graph model exhibits strong expressive power, minimal generalization error, and rapid, stable convergence. In theoretical analysis, however, the three subjects are often explored independently to derive tractable results or insights. For each topic, we first explain its goals and basic concepts, then provide a detailed classification of the methods used together with the relevant theoretical results. Finally, we establish their interconnections, suggesting potential research directions for future graph learning theories. In addition to the fundamental aspects shared by typical graph models, issues such as performance degradation in deeper Graph Neural Networks (GNNs) and the information bottleneck stemming from long-range dependencies, known as over-squashing, are critical phenomena. Addressing and mitigating these challenges is pivotal for improving the efficacy and robustness of GNNs, particularly in tasks necessitating intricate hierarchical representations of graph-structured data. Given that resolving these two issues involves a multifaceted approach, they are detailed in a separate section for a comprehensive review. Different from previous surveys focusing more on mathematical connections, we aim to summarize the key principles that drive the functionality of graph models and to address the unique challenges and requirements posed by graphs in machine learning tasks. We will try to avoid excessive technicalities and make the survey useful for both theorists and practitioners in various fields. The remainder of this work unfolds as follows: Section <ref> gives a summary of fundamental methods in graph learning. Sections <ref> to <ref> detail the theoretical analyses and results on expressive power, generalization, and optimization, respectively. The formulation of and solutions to over-smoothing and over-squashing are documented in Section <ref>. The paper concludes in Section <ref>. 
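To make the risk-minimization objective stated above concrete, the following is a minimal sketch in plain Python/NumPy of its empirical counterpart: the average loss of a candidate graph classifier over a finite sample of (graph, label) pairs. The degree-based toy classifier and the cross-entropy loss here are illustrative assumptions, not a method from the literature.

import numpy as np

def cross_entropy(probs, label):
    # negative log-likelihood of the true class
    return -np.log(probs[label] + 1e-12)

def empirical_risk(model, graphs, labels):
    # finite-sample estimate of R(F) = E[ l(G, y, F(G)) ]
    losses = [cross_entropy(model(G), y) for G, y in zip(graphs, labels)]
    return float(np.mean(losses))

def degree_model(adj):
    # toy classifier: squashes the mean node degree into two class probabilities
    d = adj.sum(axis=1).mean()
    p1 = 1.0 / (1.0 + np.exp(-(d - 1.5)))
    return np.array([1.0 - p1, p1])

triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
print(empirical_risk(degree_model, [triangle, path], [1, 0]))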
§ PRELIMINARY Graph embedding, graph kernels, and GNNs are fundamental approaches for representing and analyzing graph-structured data. While graph embedding and graph kernels have been effective in representing and analyzing graph-structured data, they face challenges in capturing complex graph interactions and may require pre-processing steps. GNNs address these limitations by combining the power of neural networks with the expressive capacity of graphs, enabling end-to-end learning of node and graph representations. Recently, Graph Transformers have emerged as an advanced technique in graph learning, applying self-attention mechanisms to capture long-range dependencies between nodes while incorporating graph structural information. These advancements have opened up new possibilities for understanding graph-structured data in various domains. The following subsections provide a detailed overview of these approaches. §.§ Graph Embedding and Graph Kernels Graph embedding and graph kernels are two fundamental approaches for representing and analyzing graph-structured data. Graph embedding converts an input graph 𝒢 = (V, E) into a low-dimensional vector space, capturing essential graph properties such as connectivity, community structures, or clustering patterns. Graph embedding methods can be categorized into matrix factorization-based methods (e.g., GF <cit.>, GraRep <cit.>), random walk-based methods (e.g., DeepWalk <cit.>, node2vec <cit.>), and deep learning-based methods (e.g., SDNE <cit.>, DNGR <cit.>). While effective in capturing various graph properties, graph embedding techniques may not fully capture complex interactions and dependencies in graphs. Graph kernels, a subset of kernel functions, quantify the similarity between pairs of graphs by computing the inner product between their feature representations in a high-dimensional space. The Weisfeiler-Lehman (WL) subtree kernel <cit.> has been highly influential, inspiring extensions such as the WL Optimal Assignment kernel <cit.> and the WL shortest-path kernel <cit.>. Shortest-path kernels <cit.> measure similarity based on the properties of shortest paths, considering both path lengths and vertex attributes. Random walk kernels <cit.> assess similarity by comparing label sequences of random walks on graphs. Graph kernels provide a comprehensive measure of graph similarity by considering both structure and label information, making them suitable for various graph comparison tasks. Despite their strengths, graph embedding and graph kernels face challenges in capturing complex graph interactions and may require pre-processing steps. GNNs address these limitations by combining the power of neural networks with the expressive capacity of graphs, enabling end-to-end learning of node and graph representations. §.§ Graph Neural Networks GNNs have emerged as a powerful framework for analyzing graph-structured data, leveraging the message-passing mechanism to enable nodes to iteratively update their representations by aggregating information from their neighbors. GNNs can be broadly categorized into three types: spectral GNNs, spatial GNNs, and geometric GNNs based on the way they operate on graph-structured data. Spectral GNNs operate on the spectral representation of the graph, which is obtained by the eigendecomposition of the graph Laplacian matrix <cit.>. Spectral convolutions are defined in the Fourier domain and the graph Fourier transform of a signal x is given by ℱ(x)=𝐔^T x, and the inverse transform is ℱ^-1(x̂)=𝐔x̂. 
Then, the graph convolution of x with a filter 𝐠 is defined as: x *_𝒢𝐠 = 𝐔(𝐔^T x ⊙𝐔^T 𝐠), where ⊙ denotes element-wise multiplication. Several spectral GNN variants have been proposed, including Spectral CNN <cit.>, ChebNet <cit.>, and GCN <cit.>. However, spectral GNNs face challenges in generalizing across different graph structures and suffer from high computational complexity. Spatial GNNs operate directly on the graph structure, leveraging spatial relationships between nodes to perform graph convolutions using a message-passing mechanism. The Message Passing Neural Network (MPNN) <cit.> provides a general framework for spatial GNNs, which is defined as: x_v^(k) = U_k(x_v^(k-1), ∑_u∈𝒩(v) M_k(x_v^(k-1), x_u^(k-1), x^e_vu)). where x_v^(k) represents the embedding of node v at layer k, 𝒩(v) denotes the neighborhood of node v, M_k(·) is the message function that computes the message from node u to node v, U_k(·) is the update function that updates the node embedding based on the aggregated messages, and 𝐱^e_vu represents the edge features between nodes v and u (if available). GraphSAGE <cit.> addresses the inductive and scalability by employing neighborhood sampling: x^(k)_v=ϕ(𝐖^(k)·AGG({x_u^(k-1), ∀ u ∈ S_𝒩(v)})), where S_𝒩(v) represents the sampled neighborhood of node v, AGG(·) is the aggregation function, and ϕ(·) is an activation function. Graph Attention Network (GAT) <cit.> introduces attention mechanisms to learn the relative importance of neighboring nodes: x_v^(k) = ϕ(∑_u∈𝒩(v)∪ vα_vu^(k)𝐖^(k)x_u^(k-1)), where α_vu^(k) represents the attention weight assigned to the edge between nodes v and u at layer k. Graph Isomorphism Network (GIN) <cit.> introduces an adjustable weight parameter to better capture structural information: x_v^(k) = MLP((1+ϵ^(k))x_v^(k-1)+∑_u∈𝒩(v)x_u^(k-1)), where ϵ^(k) is a learnable parameter that adjusts the weight of the central node's own features. Geometric GNNs operate on graphs with additional geometric features, such as node positions and edge lengths or angles, to capture the spatial relationships between nodes and edges in graph-structured data. These additional geometric features are particularly relevant in domains like molecular modeling, where the spatial arrangement of atoms and bonds plays a crucial role in determining the properties and behavior of molecules. By leveraging the geometric information, Geometric GNNs can learn more expressive and informative representations compared to standard GNNs. However, when dealing with geometric graphs, Geometric GNNs must also consider the symmetries and equivariances present in the data. To address these challenges, several Geometric GNN architectures have been proposed. Directional Message Passing Neural Networks (DMPNNs) <cit.> extend the message passing framework by incorporating directional information based on the relative positions of nodes in space. This allows the model to capture the spatial relationships between nodes and learn direction-aware representations. Equivariant Graph Neural Networks (EGNNs) <cit.> are designed to be equivariant to rotations and translations of the input graph. They achieve this by using equivariant message passing operations and representing node features as high-dimensional vectors that transform according to the group of symmetries. GNNs have emerged as a powerful and effective framework for analyzing graph-structured data. 
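As a concrete illustration of the message-passing updates above, the sketch below implements a GIN-style layer in plain NumPy: each node adds (1+ε) times its own feature to the sum of its neighbors' features and passes the result through a two-layer MLP. The random weights, feature sizes, and the sum readout at the end are illustrative assumptions, not a reference implementation of any cited model.

import numpy as np

def gin_layer(adj, X, W1, b1, W2, b2, eps=0.0):
    neigh_sum = adj @ X                      # row v of adj @ X is sum_{u in N(v)} x_u
    h = (1.0 + eps) * X + neigh_sum          # (1 + eps) * x_v plus aggregated neighbors
    h = np.maximum(h @ W1 + b1, 0.0)         # MLP layer 1 (ReLU)
    return h @ W2 + b2                       # MLP layer 2 (linear)

rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)   # 3-node path graph
X = rng.normal(size=(3, 4))                                      # initial node features
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
H = gin_layer(adj, X, W1, b1, W2, b2, eps=0.1)
graph_embedding = H.sum(axis=0)   # injective sum readout for graph-level tasks

Swapping the sum for a degree-normalized mean or an attention-weighted combination recovers, within the same template, GraphSAGE- and GAT-style aggregation respectively.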
However, GNNs also face challenges such as the over-smoothing problem, over-squashing issue, and difficulty in capturing long-range dependencies. Despite these limitations, ongoing research aims to address these challenges and further advance the field of GNNs, enabling their application to a wide range of real-world problems involving graph-structured data. §.§ Graph Transformer Graph Transformers aim to leverage the power of self-attention mechanisms to capture long-term dependencies among nodes while incorporating graph structural information. Existing graph transformer models can be categorized into the following three groups. Designing the architecture of graph Transformer. Depending on the relative positions of GNN and Transformer layers, three typical designs have been proposed: (a) building Transformer blocks on top of GNN blocks, e.g., GraphTrans <cit.>, GraphiT <cit.> and Grover <cit.>, (b) alternately stacking GNN and Transformer blocks, e.g., Mesh Graphormer <cit.>, and (c) running GNN and Transformer blocks in parallel and combining their outputs, e.g., Graph-BERT <cit.>. Improving positional embeddings with graph structural information. For example, Graph Transformer <cit.> proposes to use Laplacian eigenvectors as positional embeddings, which are defined by the eigendecomposition of the graph Laplacian matrix. Other methods, such as Graphormer <cit.> and Graph-BERT <cit.>, propose to use degree centrality and other heuristic measures as additional positional embeddings. Modifying attention mechanisms based on graph priors. The third group of Graph Transformers aims to modify the attention mechanisms by injecting graph priors. One common approach is to mask the attention matrix based on the graph structure, allowing each node to attend only to its neighbors <cit.>. Another approach is to add graph-based bias terms to the attention scores, such as the spatial bias in Graphormer <cit.> and the proximity-enhanced multi-head attention in Gophormer <cit.>. Graph Transformer models have achieved remarkable success in various domains. However, the optimal way to incorporate graph information into Transformers remains an open question, and the choice of architecture may depend on the specific task and dataset at hand. § EXPRESSIVE POWER In deep learning theory, the term “expressive power” is often used interchangeably with function approximation capability. However, defining the expressive power of Graph Neural Networks (GNNs) in graph learning proves challenging due to the intricate nature of graph-related tasks. Some studies, inspired by deep learning methodologies, explore functions that GNNs can effectively approximate. Alternatively, a conventional approach involves assessing the capacity of GNNs to differentiate non-isomorphic graphs, a fundamental challenge in graph learning. Additionally, certain research endeavors connect the expressive power of GNNs to combinatorial problems or the computation of specific graph properties. These investigations are intimately linked and offer valuable insights into understanding the expressive capabilities of GNNs within the context of graph-based tasks. In this section, we will elaborate on the theory of the expressive power of GNNs. The hierarchy of the WL algorithm for graph isomorphism problem is the most intuitive measurement and it is the mainstream approach to describe and compare the expressive power of different GNN models. 
From this point, there are various methods to devise expressive GNNs that are more powerful than 1-WL and we provide their corresponding theoretical results. Finally, we return to the approximation ability that is fundamental for the expressive power of neural networks in deep learning to analyze the universality of GNN. §.§ Notations Before reviewing the the theory of expressive power, we introduce some basic notations. 𝒢=(V,E) denotes a graph where V={v_1,v_2,…,v_n} is the node set and E⊆ V× V is the edge set. 𝐀∈{0,1}^N× N denotes the adjacency matrix and 𝐀̃ = 𝐀 + 𝐈 denotes the adjacency matrix considering self-loops. The Laplacian matrix of an undirected graph is defined as 𝐋=𝐃-𝐀 where 𝐃∈ℝ^N× N is the degree matrix of 𝐀 with 𝐃_ii=∑_j=1^N𝐀_ij. The degree matrix and Laplacian matrix of 𝐀̃ is denoted as 𝐃̃ and 𝐋̃=𝐃̃-𝐀̃ respectively. 𝐀=𝐃̃^-1/2𝐀̃𝐃̃^-1/2 denotes the normalized 𝐀̃. If available, 𝐗 denotes the initial feature matrix of the nodes and x_v^(l) denotes the embedding of node v in l-th layer. 𝒩(v) denotes the neighbors of v. {…} denotes the sets while {{…}} denotes the multi-sets. §.§ Graph isomorphism problem and WL algorithm The graph isomorphism problem involves determining whether two graphs, 𝒢_1 and 𝒢_2, are structurally identical. Formally, two graphs are considered isomorphic if a bijection exists between their nodes such that edges in 𝒢_1 are preserved in 𝒢_2. To deal with the graph isomorphism problem, the Weisfeiler-Lehman (WL) algorithm <cit.> is a well-known heuristic algorithm that can be implemented effectively. Its classical form, 1-dimensional Weisfeiler-Lehman (1-WL), also known as color refinement algorithm assigns a label to each node initially and iteratively updates the label via aggregating information from its neighbors. The procedure of the 1-WL algorithm is given in Algorithm <ref>, where the HASH function plays the most crucial part as it is required to be injective to guarantee that the different neighborhood information can map to different labels. The Figure <ref> provides an illustration of 1-WL in distinguishing non-isomorphic graphs within 2 iterations. Initially, two non-isomorphic graphs G_1 and G_2 are given without node features, thus the embedding of each node is set to be identical. In the first iteration, each node pass the embedding of itself together with the multi-set of embedding of the neighbor nodes through an injective HASH function, which obtain the novel embedding representing the degree of the node. Since the two graph G_1 and G_2 have the same degree distribution, single iteration of 1-WL cannot distinguish them. In the second iteration, the same operation is implemented again but on novel embeddings. This time the two graphs G_1 and G_2 get different node embedding distributions so the algorithm outputs 'non-isomorphic', which means that two non-isomorphic graphs G_1 and G_2 are distinguished by the 1-WL in the second iteration. The 1-WL algorithm terminates in 𝒪(|U|+|V|) iterations and has been shown to effectively distinguish any pair of non-isomorphic random graphs with high probability as the graph size approaches infinity. However, it may struggle to differentiate certain classes of non-isomorphic graphs, like regular graphs of the same order. To address this limitation, more powerful algorithms capable of distinguishing a broader range of non-isomorphic graphs are desired. 
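Since the refinement loop described above is short, it can be stated directly in code. The sketch below (plain Python; graphs given as adjacency lists, no node features) runs color refinement on the disjoint union of two graphs so that both share one palette, uses a dictionary over sorted (color, neighbor-multiset) signatures as the injective HASH, and compares the final color histograms, answering 'possibly isomorphic' or 'non-isomorphic' exactly as the test does.

from collections import Counter

def wl_test(g1, g2):
    # 1-WL colour refinement on the disjoint union of g1 and g2 (adjacency lists),
    # so that both graphs are refined with one shared, injective palette.
    n1 = len(g1)
    adj = g1 + [[u + n1 for u in nbrs] for nbrs in g2]
    n = len(adj)
    colors = [0] * n                               # uniform initial colouring
    for _ in range(n):                             # stabilises within n rounds
        sigs = [(colors[v], tuple(sorted(colors[u] for u in adj[v]))) for v in range(n)]
        palette = {s: i for i, s in enumerate(sorted(set(sigs)))}   # injective HASH
        new_colors = [palette[s] for s in sigs]
        if new_colors == colors:                   # partition is stable
            break
        colors = new_colors
    same = Counter(colors[:n1]) == Counter(colors[n1:])
    return "possibly isomorphic" if same else "non-isomorphic"

# Two disjoint triangles vs. a 6-cycle: both are 2-regular graphs on 6 nodes.
two_triangles = [[1, 2], [0, 2], [0, 1], [4, 5], [3, 5], [3, 4]]
hexagon = [[1, 5], [0, 2], [1, 3], [2, 4], [3, 5], [4, 0]]
print(wl_test(two_triangles, hexagon))             # -> possibly isomorphic

The example pair, two disjoint triangles versus a 6-cycle, is the standard witness that 1-WL, and hence any message passing GNN, cannot separate regular graphs of the same degree and size.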
One such advancement is the k-dimensional Weisfeiler-Lehman (k-WL) algorithm, which extends the capabilities of the 1-WL algorithm by assigning labels to each k-tuple of nodes and the set of k-tuple of nodes is denoted as V^k. In the k-WL algorithm, the i-th neighbor of a k-tuple is defined by replacing the i-th element in the k-tuple with every node in the graph. This approach enhances the expressive power of the algorithm compared to 1-WL, allowing for more robust differentiation between complex graph structures. Additionally, the k-dimensional folklore Weisfeiler-Lehman (k-FWL) algorithm is another extension that shares similarities with k-WL but differs slightly in the aggregation of k-tuples. These advancements in the Weisfeiler-Lehman framework offer improved capabilities for distinguishing non-isomorphic graphs and contribute to enhancing the overall performance and versatility of graph isomorphism testing algorithms. The procedure of the k-WL is given in Algorithm <ref>. Even though the k-WL is more powerful than 1-WL and actually increasing k can obtain more powerful algorithm in distinguishing non-isomorphic graphs, the algorithm has its limitation since there always exists a pair of non-isomorphic graphs for each k such that the k-WL algorithm outputs 'cannot determine'. At the end of the subsection, we provide some useful results about the expressive power of WL algorithm and its variants <cit.> which will be utilized later: * 1-WL and 2-WL have equal expressive power. * k-FWL is equivalent to (k+1)-WL and thus has equal expressive power. * For k≥ 2, (k+1)-WL is strictly more powerful than k-WL which means that there exists a pair of non-isomorphic graphs that (k+1)-WL can distinguish but k-WL can not. This implies that the WL algorithm naturally forms a hierarchy. §.§ Connect GNN with 1-WL It is noticed that the role of neighbor aggregation and update in GNN is analogous to that of the hash function operated on one node and its neighbor in 1-WL. Inspired by this close relationship, Xu et al. <cit.> first study the expressive power of GNNs with a theoretical framework by viewing the message passing on a node as a rooted subtree and representing the set of its neighbor feature by a multiset. In this way, the aggregation function of GNN can be seen as functions defined on multisets. The authors compare the expressive power between MPNN and 1-WL in distinguishing non-isomorphic graphs and conclude MPNN is at most as powerful as the 1-WL, which is formally given as follows: Let 𝒢_1 and 𝒢_2 be any two non-isomorphic graphs. If a message passing GNN maps 𝒢_1 and 𝒢_2 to different embeddings, the 1-WL also decides 𝒢_1 and 𝒢_2 to be non-isomorphic. Since the aggregation and readout function in GNN are not necessarily injective, the Theorem <ref> holds. Further, they prove that if the neighbor aggregation function and graph readout function are injective, the GNN is exactly as powerful as 1-WL. Based on the condition and theory of multisets, they devise a novel GNN architecture named Graph Isomorphism Network (GIN) that is exactly as powerful as the 1-WL. Graph Isomorphism networks(GINs) <cit.>: x_v^(k)=MLP^(k)((1+ϵ^(k))x_v^(k-1)+∑_u∈𝒩(v)x_u^(k-1)), where ϵ is a learnable scalar value. It is noted that the sum aggregators in Equation <ref> over MLP are universal functions over multiset and thus can represent injective functions and be adopted as the injective graph readout function in graph classification as well. Concurrently, Morris et al. 
<cit.> also prove that MPNNs are at most as powerful as the 1-WL. In addition, they state that MPNNs can have the same representation power of 1-WL with proper choosing of the parameter matrices. §.§ GNNs beyond 1-WL Although previous works have built GNNs that are as powerful as 1-WL, they have weaknesses due to the limited expressive power of 1-WL. For example, they cannot distinguish some pairs of graphs with any parameter matrix unless they have identical labels. More severely, they fail to count simple graph substructures such as triangles <cit.> which is of great importance in computational chemistry <cit.> and social network analysis <cit.>. Therefore, many works try to devise GNNs beyond the expressive power of 1-WL. §.§.§ High-order GNNs k-WL based. One straightforward way to build GNNs beyond 1-WL is resorting to k-WL. Morris et al. <cit.> propose k-GNNs based on set k-WL algorithm which is a variant of k-WL. Literally, they consider the subgraphs of k nodes instead of k-tuple of nodes to make the algorithm more efficient and take the graph structure information into account. To be specific, the set containing all k-sets of V is denoted as [V]_k = {S⊆ V| |S| =k} which are the subgraphs of k nodes. In addition, the neighbor of a k-set S is defined as the k-set with exactly one different node i.e. 𝒩_V,k(S)={J ∈ [V]_k | |J∩ S|=k-1}. Although the set k-WL is weaker than k-WL, it has its own advantage that is more powerful than 1-WL and more scalable than k-WL. In k-GNNs, the embedding of the subgraph S of k nodes in layer t is denoted by x_k^(t)(S) and the initial feature assigned to each subgraph induced by S represents the isomorphic type of the corresponding subgraph. Then the embeddings of k-GNNs can be updated by a message passing scheme according to the Equation (<ref>). x_k^(t)(S) = σ(𝐖_1^(t) x_k^(t-1)(S)+∑_U∈𝒩_V,k(S)𝐖_2^(t)x_k^(t-1)(U)). Since the set k-WL is more powerful than 1-WL, the k-GNN is more powerful than MPNNs and has proven to be as powerful as set k-WL with suitable initialization of parameter matrices. The expressive power of the k-GNN is characterized by Theorem <ref> given as follows. Let 𝒢_1 and 𝒢_2 be any two non-isomorphic graphs. If a k-GNN maps 𝒢_1 and 𝒢_2 to different embeddings, the k-set WL also decides 𝒢_1 and 𝒢_2 to be non-isomorphic. Invariant and equivariant layer based. Graph Neural Networks constructed with high-order tensors offer a novel strategy to address the constraints associated with the 1-WL algorithm. Demanding the representation of a graph remain unchanged under permutations of nodes (invariance) and ensuring node representations reflect consistent transformations corresponding to node reordering (equivariance) respectively, invariance and equivariance stand as pivotal tenets in invariant graph learning. We use S_n to denote the symmetry group acting on [n]={1,2,…,n} and ℝ^n^k to denote the set of k-order tensors. For a tensor X∈ℝ^n^k and a permutation σ∈ S_n, we define the permutation on the tensor as (σ· X)_σ(i_1),σ(i_2),…,σ(i_k)=X_i_1,i_2,…,i_k. Then the invariant and equivariant functions can be defined formally as follows. A function f:ℝ^n^k→ℝ is said to be invariant if f(σ· X)=f(X) for every permutation σ∈ S_n and every X ∈ℝ^n^k. 
A function f:ℝ^n^k→ℝ^n^l is said to be equivariant if f(σ· X)=σ· f(X) for every permutation σ∈ S_n and every X ∈ℝ^n^k.[if l≠ k, σ also needs to be mapped to a group representation in the target space] Note that, in graph learning, the X∈ℝ^n^k is the tensor representation of the graph, and each k-tuple (i_1,i_2,…,i_k) can be seen as a hyperedge in the graph. For example, for k=2, the adjacency matrix is a 2-order tensor representation of the graph and X_ij indicates the existence of edge (i,j). When attaching a feature vector of dimension d to each hyperedge, the tensor is represented by X∈ℝ^n^k× d. Since the permutation is only defined on node indices, i.e. , (σ· X)_σ(i_1),σ(i_2),…,σ(i_k),i_k+1)=X_i_1,i_2,…,i_k,i_k+1, the invariant function f:ℝ^n^k× d→ℝ and equivariant function f:ℝ^n^k× d→ℝ^n^l × d follow similar modification in definition. Maron et al. <cit.> provide a full characterization of all linear invariant and equivariant layers acting on a k-order tensor for the first time by solving the fixed-point equation of the permutation matrix group. Specifically, for layers devoid of bias and features, the dimension of the linear invariant and equivariant layer space is precisely given as follows: The space of invariant linear layer L:ℝ^n^k→ℝ and equivariant linear layer L:ℝ^n^k→ℝ^n^l are of dimension b(k) and b(k+l)) respectively, where b(k) is the k-th Bell number that represents the number of ways a set of n elements can be partitioned into non-empty subsets. From Theorem <ref>, it is surprising to find that the dimension of the space is independent of the size of the graph which enables us to apply the same GNN with a given order of linear invariant and equivariant layers to graphs of any size. The Theorem <ref> can be further generalized to the layers with bias and features or multi-node sets and derive similar results. For more detailed information, readers can refer to the original paper. With the formula of all linear invariant and equivariant layers, Maron et al. <cit.> prove that the GNN built by the layers can approximate any message passing network to an arbitrary precision on a compact set, which implies that the proposed model is at least as powerful as MPNN. Further, they proposed a new GNN architecture called k-order invariant graph network F: F=m∘ L_I ∘ L_d ∘ϕ∘…∘ϕ∘ L_1, where L_i:ℝ^n^k_i× a_i→ℝ^n^k_i+1× a_i+1,max_i∈ [d+1]k_i=k are equivariant linear layers, a_i denotes the dimension of feature in the l-th layer, ϕ is an activation function, L_I:ℝ^n^k_d+1× a_d+1→ℝ^a_d+2 is an invariant linear layer and m:ℝ^a_d+2→ℝ^a_d+3 is a multilayer perception. The k-order GNN F is provably able to encode the multisets computed in the k-WL with a suitable weight matrix and thus can implement the k-WL, which leads to the Theorem <ref>. Given two graphs 𝒢_1 and 𝒢_2. If the two graphs can be distinguished by the k-WL, there exists a k-order network F such that F(𝒢_1)≠ F(𝒢_2). On the other direction for every two isomorphic graphs G_1 and G_2 and a k-order network, we have F(𝒢_1)=F(𝒢_2). Theorem <ref> indicates that the k-order GNN is at least as powerful as the k-WL in terms of distinguishing non-isomorphic graphs. However, the k-order GNN is impractical for k≥ 3 because of the 𝒪(n^k) memory cost. Therefore, the authors propose a simple GNN model based on 2-FWL that is as powerful as 3-WL while only utilizing tensors of order 2. The proposed model replaces the equivariant linear layers and activation functions with specific blocks and the architecture is given as follows. 
F=m∘ L_I ∘ B_d ∘ B_d-1∘…∘ B_1 In each block B_i, the authors apply three MLPs that implement matrix multiplication to match the feature then concatenate the embedding to obtain the output tensor. They prove that the matrix multiplication can implement the aggregation used in 2-FWL to boost the expressive power. The technique can be further generalized to k-order GNNs to make them as powerful as (k+1)-WL. The expressive power of the proposed model named PPGN is presented in Theorem <ref>. Given two graphs 𝒢_1 and 𝒢_2. If the two graphs can be distinguished by the 2-FWL (3-WL), there exists a GNN F defined by Equation <ref> such that F(𝒢_1)≠ F(𝒢_2). On the other direction for every two isomorphic graphs 𝒢_1 and 𝒢_2 and the model F defined by Equation <ref>, we have F(𝒢_1)=F(𝒢_2). Besides, it is noteworthy that some works devise powerful GNNs with polynomial layers that are also able to preserve the invariant and equivariant property <cit.>. Recently, Puny et al. <cit.> formalize the equivariant graph polynomial that is a matrix polynomial map equivariant to node permutation. The authors further propose the polynomial hierarchy that alleviate some problems of WL hierarchy and provide a full characterization of all the graph polynomials. Equipping with the polynomial features, the PPGN <cit.> can be strictly more powerful than 3-WL while only costing O(n^2) memory. Local and sparsity-aware high-order GNN. A key limitation of prior approaches of high order GNNs is the heavy computation and memory cost. The methods based on k-WL consider all the tuples or subgraphs of k nodes while linear invariant or equivariant layers are defined in the tensor of k-order, which all require O(n^k) memory. Besides, the model defined by Equation <ref> faces similar challenges due to the multiplication operation of dense matrices. Hence, certain studies propose adaptations of the k-Weisfeiler-Lehman (k-WL) algorithm that focus on a particular category of k-node objects and establish neighborhood relationships locally, aiming to achieve a balance between expressive power and scalability. For instance, Morris et al. <cit.> introduce δ-k-LWL and corresponding δ-k-LGNN that consider the local and global neighbors. They <cit.> further propose (k,s)-LWL and corresponding (k,s)-SpeqNet that only act on k-tuples which induces subgraphs of at most s connected components to reduce the computation cost greatly. The (k,c)(≤)-SETWL and (k,c)(≤)-SETGNN proposed by Zhao et al. <cit.> share a similar idea but they utilize k-sets instead of k-tuples. Besides, Wang et al. <cit.> introduce 𝒩(t,d)-WL and devise corresponding GNN architecture G3N that aggregates the induced subgraphs of k nodes within the d-hop neighborhood of a node to fit real-world tasks better. These methods provide strong theoretical foundations in establishing a unique hierarchy for distinguishing non-isomorphic graphs and exhibit efficient applicability to real-world tasks. §.§.§ Graph property based GNNs Substructure based GNN. In addition to the challenges in distinguishing non-isomorphic graphs, GNNs also encounter obstacles in quantifying simple substructures like triangles and cliques <cit.>, which is of great importance in various real applications such as drug discovery <cit.> and social network studies <cit.>. Therefore, the capability to detect and count substructures serves as an intuitive metric to evaluate the expressive power of GNNs. Chen et al. 
<cit.> initiate the exploration by providing a theoretical framework for studying the expressive power of GNNs via substructure counting. Specifically, they define two types of counting on attribute graphs: containment-count and matching-count, representing the number of subgraphs and induced subgraphs isomorphic to a specified substructure respectively. Let 𝒢^P be a graph that we refer to as a pattern or substructure. We define 𝒞(𝒢,𝒢^P), called the containment-count of 𝒢^P in 𝒢, to be the number of subgraphs of 𝒢 that are isomorphic to 𝒢^P. We define ℳ(𝒢,𝒢^P), called the matching-count of 𝒢^P in 𝒢, to be a number of induced subgraphs of 𝒢 that are isomorphic to 𝒢^P. Since the induced graphs belong to graphs, ℳ(𝒢,𝒢^P)≤𝒞(𝒢,𝒢^P) always holds. With this framework, they analyze the previous GNN architectures and WL algorithm concerning the two substructure counting criteria. The derived results are given below. * 1-WL, 2-WL and 2-IGN cannot perform matching-count of any connected substructures with 3 or more nodes. However, they can perform containment-count of star-shaped substructures. * k-WL and k-IGN is able to perform both matching-count and containment-count of patterns of k nodes. Besides, running T iterations of k-WL cannot perform matching-count of any path substructure of (k+1)2^T or more nodes. Although more general results for k-WL are expected and the work does not devise a novel GNN architecture equipped with substructure counting, it provides a solid foundation to measure the expressive power of GNN by substructure counting. Indeed, the high-order GNNs discussed earlier can enumerate certain substructures <cit.>; however, due to challenges posed by large k values and the incompetency to address more intricate substructures, they turn out to be impractical. We then focus on introducing GNNs that leverage information garnered from substructure counting to improve their expressive capabilities. For instance, Bouritsas et al. <cit.> integrate substructure counting into node and edge features, deriving structural attributes by tallying specific substructures. Their Graph Substructure Networks (GSN) exhibit greater expressiveness than 1-WL, effectively distinguishing non-isomorphic graphs beyond the capabilities of 3-WL. Furthermore, they establish sufficient conditions for universality. Barceló et al. <cit.> extend the work of Bouritsas et al. <cit.> by performing homomorphism counting of substructures. Horn et al. <cit.> employ graph filtration to capture the emergence and disappearance of specific substructures. Toenshoff et al. <cit.> implement random walks to detect small substructures. Bodnar et al. investigate substructure counting on simplex<cit.> and regular cell complexes <cit.> with lifting transformation. However, it is noteworthy that many of the aforementioned studies focus on specific types of substructures and the selection of substructures is often manual and heuristic, which presents challenges in adapting substructure-based GNNs to real-world applications. Besides, we highlight two GNN architectures based on subgraphs in advance for their superior performance in counting substructures. Huang et al. <cit.> propose a GNN architecture where each node-based subgraph is augmented with a pair of node identifiers. This enables the GNN to count all cycles of length up to 6. More Recently, Tahmaesebi et al. 
<cit.> introduce the Recursive Neighborhood Pooling Graph Neural Network (RNP-GNN), which performs recursive pooling on the node-based subgraphs for each node using node marking and neighborhood intersection technique. The authors provide theoretical proof that for any set of substructures, there exists an RNP-GNN that can count them. Recently, Zhou et al. <cit.> propose Distance-Restricted FWL GNNs. By restricting the node pairs of 2-FWL to be only those with distance less than 2, their GNN can provably count up to 6 cycles with the best known complexity. Distance based GNN. It is observed that WL algorithm neglects the distance information for its neighborhood aggregation scheme, thus some works combine WL algorithm or GNN with distance information to enhance the expressive power. Zhang and Chen <cit.> first leverage the shortest path distances between target nodes and other nodes to enhance the link prediction performance of GNNs in their SEAL algorithm. Li et al. <cit.> generalize it into distance encoding (DE) defined by random walks to learn structural representation and overcome the limitation of 1-WL. Specifically, they utilize the shortest path distance and generalized PageRank Scores <cit.> as the measurement of DE, which are further served as extra features or controllers of message aggregation to devise powerful GNN architecture named DE-GNN and DEA-GNN. In addition, the k-hop MPNN <cit.> that aggregates the embedding within k-hop neighborhood of each node simultaneously can be viewed as another type of GNN using the distance information. Feng et al. <cit.> first analyze the expressive power of the k-hop MPNN and derive the following theorem. A k-hop MPNN with suitable parameters is strictly more powerful than 1-hop MPNN when k>1 while the expressive power of a k-hop MPNN is bounded by 3-WL. Further, they improve the expressive power of k-hop MPNN by equipping the message passing with peripheral subgraph information. The proposed KP-GNN is proven to be capable of distinguishing almost all regular graphs with a proper k. Recently, Zhang and Luo <cit.> introduced a novel class of expressive power metrics via graph biconnectivity and show that most existing GNN architectures fail to solve biconnectivity problem. Therefore, they propose a principled and efficient algorithm called the Generalized Distance Weisfeiler-Lehman (GD-WL) to solve the problem. The core step in the algorithm is given as follows. c_v^(l)←HASH((d_uv^(l-1), c_u^(l-1)):u∈ V), where d_uv is an arbitrary distance metric. They further propose two appropriate distance metrics to enable the GD-WL to solve all biconnectivity problems. The main results are shown in Theorem  <ref>. SPD (shortest path distance)-WL is fully expressive for edge-biconnectivity and RD (resistance distance)-WL is fully expressive for vertex-biconnectivity. When using both SPD and RD, the obtained Generalized Distance WL Algorithm(GD-WL) is fully expressive for both edge-biconnectivity and vertex-biconnectivity. Furthermore, the authors compare the SPD-WL and RD-WL to existing WL hierarchy and prove that the expressive power of SPD-WL and RD-WL is bounded by 2-FWL(3-WL), which also indicates that 2-FWL(3-WL) is fully expressive for both edge-biconnectivity and vertex-biconnectivity. Graph spectral based GNN. There are some works incorporating the spectral information into GNNs and the approach is proven to easily break the limitation of 1-WL. 
Balcilar <cit.> analyze the expressive power of GNN from the a spectral perspective and prove that most MPNNs act as low-pass filters which limit their expressive power. To break the limitation and further consider expressive power in terms of other perspectives such as graph isomorphism test and substructure counting, they resort to matrix language (MATLANG) proposed by Geert <cit.> and design graph convolution supports in spectral domain. Besides, Feldman et al. <cit.> utilize spectral feature based on graph Laplacian in an additional pre-coloring phase to improve the expressive power of GNN. Wang and Zhang <cit.> prove that the expressive power of a wide range of spectral GNNs based on k-degree polynomial filters is bounded by k+1 iterations of WL. Besides, it is worth noting that the spectral information can also be utilized to boost the expressive power of graph transformers <cit.> that we will discuss later. §.§.§ Subgraph GNNs Motivated by the observation that non-isomorphic graphs always have non-isomorphic subgraphs, the subgraph GNNs have been popular recently for its solid theoretical guarantee as well as flexible design. Typically, the subgraph GNNs can be categorized into node-based and edge-based. The node-based subgraph GNNs are more common in practice while edge-based subgraph GNNs compute the representation of edge-node pairs additionally. Roughly, they implement three steps: subgraph extraction by a specific policy, message passing to obtain individual representation and graph pooling. Following the procedure, existing works propose subgraph GNNs with a variety of architectures. Cotta et al. <cit.> adopt node removal to generate subgraphs based on graph reconstruction conjecture. Papp and Wattenhofer,  <cit.> prove that node-marking is a more expressive approach than intuitive node removal. Bevilacqua et al. <cit.> represent each graph as a set of subgraphs and study four simple but effective subgraph selection policies: node-deleted subgraphs, edge-deleted subgraphs, and two corresponding variants of ego-networks. Wijesinghe and Wang <cit.> incorporate the local structure captured by overlap subgraphs into a message passing scheme to obtain GNN architecture that is more powerful than 1-WL. You et al. <cit.> add identity information to the center node of each subgraph to break the symmetry while Huang et al. <cit.> implement pairs of node identifiers assigned to the center node and one of its neighborhoods. Zhang et al. <cit.> perform message passing on rooted subgraphs around each node instead of the rooted subtree. Similar to Zhang's work, Zhao et al. <cit.> utilize a base GNN as a kernel to encode the star subgraph of each node to generate multiple subgraph-node embeddings. Thiede et al. <cit.> consider an automorphism group of subgraphs to construct expressive GNNs. Besides the various methods to design powerful subgraph GNNs, there are some works that focus on analyzing the expressive power of subgraph GNNs. Frasca et al. <cit.> provides an extensive analysis of the expressive power of node-based subgraph GNNs from the perspective of symmetry. They observe that the node-based policies define a bijection between nodes and subgraphs thus the expressive power of node-based subgraphs GNNs can be characterized by one single permutation group acting jointly on nodes and subgraphs while previous works often consider two permutation groups defined on nodes and subgraphs separately. 
Besides, The symmetry structure described by the new permutation group is highly consistent with that of IGN. With this observation, they bound the expressive power of node-based subgraphs GNNs as follows. The 3-IGN can implement the node-based policies of subgraph GNNs thus the expressive power of node-based subgraph GNNs is bounded by 3-IGN which is proved to be as powerful as 2-FWL(3-WL). Qian et al. <cit.> introduce a unified theoretical framework for studying various designs of subgraph GNNs via a new variant of 1-WL called k-ordered subgraph WL (k-OSWL). The implementation of k-OSWL is similar to a procedure that performs 1-WL on each of the k-ordered subgraphs and then aggregates all the k-ordered subgraphs to compute the embedding for each node. They analyze the expressive power of the proposed k-OSWL and compare it with the original k-WL. The expressive power of k-OSANs is bounded by (k+1)-WL but it is incomparable to k-WL. Besides, (k+1)-OSANs is strictly more powerful than k-OSANs, which forms a hierarchy. Although the expressive power of k-OSANs is incomparable to WL algorithm with the same order, the hierarchy of k-OSANs suggests that increasing the size of the subgraphs can boost the expressive power. Zhou et al. <cit.> further generalize k-OSWL's running 1-WL on k-ordered subgraph to k,l-WL, which runs k-WL on l-ordered subgraph. They characterize the k,l-WL hierarchy by comparing it with k-WL, and prove that k,l-WL is less expressive than k+l-WL. Zhang et al. <cit.> provide a systematic characterization of the expressive power of node-based subgraph GNNs via a new version of WL hierarchy called Subgraph WL (SWL). Specifically, they categorize any node-based subgraph GNN into one of six equivalence classes according to different strategies used in subgraph generation, equivariant message passing, and final pooling. Among these six equivalence classes, they prove that the node marking SSWL using both the local aggregation and vertex-subgraph pooling achieves the maximal expressive power. To relate the proposed SWL to the existing WL hierarchy and provide a more precise hierarchy, they introduce a localized version of FWL algorithms and utilize the pebbling game framework <cit.> to compare the expressive power of different algorithms. With this framework, they make a strict comparison between different equivalence classes in the SWL and derive a tight expressive power upper bound to the localized FWL and FWL. It is noted that most of the above-mentioned methods and theories only consider the node-based subgraph GNN while the counterparts of the other type of subgraph GNNs, i.e. edge-based subgraph GNNs are rarely explored. Therefore, characterizing the expressive power of edge-based subgraph GNNs and further revealing the relation between the two types of subgraph GNNs may be a possible direction for future study. §.§.§ Non-equivariant GNNs Resorting to some non-equivariant operations can directly break the symmetry of MPNN, thus enhancing the expressive power of GNNs to go beyond 1-WL. For instance, the relational pooling <cit.> inspired by joint exchangeability <cit.> is inherently permutation-invariant for taking an average of all permutations on a graph. Formally, the relational pooling obtains the embedding of the graph 𝒢 with an arbitrary function f as follows. f(𝐀,𝐗)=1/n!∑_σ∈ S_n f(σ·𝐀,σ·𝐗), where σ is the permutation defined on the symmetric group S_n. 
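A direct, if brute-force, rendering of this averaging is sketched below (plain Python/NumPy; the permutation-sensitive base function f is a toy scorer chosen only for illustration). It enumerates all n! node orderings, applies f to each permuted copy of (A, X), and averages, which makes the result invariant by construction.

import itertools
import numpy as np

def relational_pooling(f, A, X):
    # (1/n!) * sum over all permutations sigma of f(sigma.A, sigma.X),
    # where sigma acts on the rows/columns of A and the rows of X.
    n = A.shape[0]
    total, count = 0.0, 0
    for perm in itertools.permutations(range(n)):
        p = list(perm)
        total += f(A[np.ix_(p, p)], X[p])
        count += 1
    return total / count

def f_sensitive(A, X):
    # deliberately permutation-sensitive toy function: weights nodes by their position
    weights = np.arange(1, A.shape[0] + 1)
    return float(weights @ (A @ X).sum(axis=1))

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = np.array([[1.0], [2.0], [3.0]])
out = relational_pooling(f_sensitive, A, X)   # identical for any relabelling of the graph

Because the sum runs over all n! orderings, exact relational pooling is only feasible for very small graphs, which is why the local variant mentioned below is used in practice.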
To improve the expressive power of GNN with the relational pooling, the authors attach each node a permutation-sensitive identifier thus making the method non-equivariant, which can formulated as concatenating a one-hot encoding to the feature. The derived novel GNN architecture called RP-GNN is defined as follows. f(𝐀,𝐗)=1/n!∑_σ∈ S_n f(𝐀,[𝐗,σ·𝐈_n]), where 𝐈_n ∈ℝ^n × n is the identity matrix and we omit the permutation acting on the graph since the GNN is permutation-invariant. Based on the the Equation <ref>, the authors further prove that the RP-GNN is strictly more powerful than the original GNN in terms of distinguishing non-isomorphic graphs, which provides a practical tool to boost the expressive power GNN. Therefore, equipping the GIN with the relational pooling can easily derive a GNN that is more powerful than 1-WL. In addition, the local relational pooling is able to help GNNs count triangles and 3-stars empirically <cit.>. Besides the relational pooling, there are some other intuitive non-equivariant techniques to increase the expressive power of GNN. Papp et al. <cit.> utilize dropout techniques to remove a certain proportion of nodes during the train and test phase. Sato et al. <cit.> and Abbound et al. <cit.> add random features drawn from a standard uniform distribution to the initialization of node features. Sato et al. <cit.> introduces port numbering that is widely used in distributed local algorithms to GNN which we will discuss in the next subsection. Those non-equivariant techniques are easy to implement but their performance cannot always be guaranteed since they do not preserve the permutation equivariance property of GNNs. §.§ Connect GNN with combinatorial problems Besides the graph isomorphism problem, GNNs have been used to solve some NP-hard combinatorial problems in recent years, including minimum dominating set problem and minimum vertex cover problem <cit.>. Since those problems cannot be solved in polynomial time concerning the input size if we assume that P≠ NP, the GNNs are merely able to provide sub-optimal solutions with certain approximation ratios. Therefore, it is also feasible to analyze the expressive power of GNNs by the approximation ratio that they can achieve for those combinatorial problems <cit.>. To better reveal the role of GNNs in combinatorial problems, Sato et al. <cit.> connect GNNs to distributed local algorithm <cit.> that is efficient in solving combinatorial problems and specify two classes of GNNs: multiset-broadcasting GNN (MB-GNN) and set-broadcasting GNN (SB-GNN) as follows. x_v^(k) = f({{x_u^(k-1)|u∈𝒩(v)}}), (MB-GNN) , x_v^(k) = f({x_u^(k-1)|u∈𝒩(v)}), (SB-GNN). According to Definition <ref>, the MB-GNN corresponds to the MPNN while SB-GNN is a special class of MB-GNN that restricts the aggregated embeddings to be a set. To break the symmetry of message passing, the authors introduce port numbering that is widely used in distributed algorithms to GNNs that enables the GNN to send different messages to different neighbors, which obtains a new class of GNN named vector-vector consistent GNN(VV_C-GNN) that is strictly more powerful than previous MB-GNNs. The VV_C-GNNs updates the node feature as follows. x_v^(k) = f(p(u,v),p(v,u),x_u^(k-1)|u∈𝒩(v)) (VV_C-GNN), where p(v,u) is the port number of v that edge (v,u) connects to. Further, the authors propose the most powerful VV_C-GNN called consistent port numbering GNNs(CPNGNNs) that aggregate the features by concatenation. 
The theorem given as follows demonstrates the performance of CPNGNNs in solving combinatorial problems. CPNGNNs can achieve at most Δ+1-approximation for the minimum dominating set problem and at most 2-approximation for the minimum vertex cover problem where Δ is the maximum degree in the input graph. Although the derived approximation ratio is far from optimal, it can be further improved by additional information about the graph. Later, Sato et al. <cit.> proposed a simple but efficient technique to boost the expressive power of GNN by concatenating a random feature sampled i.i.d. form a uniform distribution to the initial feature. Equipped With this slight modification, the authors prove that the GIN can achieve a near-optimal approximation ratio for the minimum dominating set problem and minimum vertex cover problem. §.§ Approximation ability of GNN Having explored the expressive capabilities of GNNs across diverse graph-related tasks thus far, this subsection discusses the approximation theory—an essential framework for describing expressive power within deep learning  <cit.>. Specifically, our attention shifts towards the graph functions that GNNs can effectively approximate to by analyzing current approximation results. Additionally, we illustrate the close relationship between the approximation ability and the ability to distinguish non-isomorphic graphs. Since the graph embedding is always assumed to be invariant to the permutation of nodes, it is natural to ask whether a GNN can approximate any invariant functions to evaluate its expressive power. From this point of view, Maron et al. <cit.> first analyze the expressive power of G-invariant networks formulated in Equation <ref> that are networks with invariant or equivariant linear with respect to arbitrary subgroup G of a symmetric group S_n, in terms of approximating any continuous invariant functions. Formally, the universality of G-invariant networks is given as follows. Let f:ℝ^n→ℝ be a continuous G-invariant function that satisfies f(σ· x) = f(x) for all x∈ℝ^n and σ∈ G≤ S_n, and K⊂ℝ^n a compact set. Then there exists a G-invariant network that can approximate f to an arbitrary precision. The Theorem <ref> indicates that any continuous G-invariant function can be approximated by a G-invariant neural network to an arbitrary precision. In addition, they show that the upper bound of the tensor order for G-invariant polynomial to achieve the universal approximation ability is n(n-1)/2. Since the upper bound is unfeasible for the expensive computational cost, they provide a practical lower bound of the order that is n-1/2 for universality. Alternatively, Keriven and Peyre<cit.> provide a proof of the same result by retaining to a one-layer but similar architecture of Equation <ref> and further extend the result to equivariant case for the first time. Unlike Maron's work consider a fixed n when analyzing universality, they prove that the GNNs with a single set of parameters can approximate any continuous invariant function uniformly well as long as the graph size is bounded by a specific parameter. Furthermore, Barceló et al. <cit.> derive the completely uniform results that are independent of graph size through the lens of logical classifiers. Similarly, Grohe <cit.> provides a precise characterization of the the graph functions that can be computed by a class of polynomial-size bounded depth GNNs via Boolean circuit complexity. In addition, there are some techniques to achieve universality. For instance, Abbound et al. 
<cit.> prove that the random initialization feature can help MPNN approximate any invariant functions defined on graphs with high probability, which is the first universality result for the MPNN. Besides the universal approximation, Loukas <cit.> consider the expressive power of MPNN with respect to Turing universality, which refers to the ability to compute any function that is computable by a Turing machine with the same input. Compared to the universal approximation, the Turing universality is strictly stronger and able to solve graph isomorphism problems. To obtain sufficient conditions for the MPNN to achieve the Turing universality, the author proves that the MPNN is equivalent to the LOCAL model in the distributed algorithm which is a well-studied Turing universal model. With the established equivalence, the sufficient conditions for MPNN to achieve Turing universality are given as follows: * The GNN should be sufficiently wide and deep. * The functions applied in each layer should be sufficiently expressive. * The nodes can uniquely identify each other. It is noted that the last condition can partly account for the effectiveness of unique node identifiers and other approaches that break the symmetry adopted in the previous subsection. Finally, we dip into works that bridge the gap between graph isomorphism testing and function approximation, two primary lenses for evaluating the expressive capabilities of GNNs. Chen et al. <cit.> establishes the theoretical equivalence between these perspectives by introducing GIso-discriminating, a novel concept that extends the discrete graph isomorphism problem to function approximation within a continuous input space. Moreover, they propose a structured framework using sigma-algebra terminology to systematically compare the expressive capacities of various models. Azizian et al. <cit.> also delves into expressive power through these two viewpoints, focusing on three categories of GNNs: MPNN, linear GNN and folklore GNN, encompassing both invariant and equivariant instances. Their analysis not only reaffirms previous universality findings but also identifies the k-folklore GNN as the most powerful among the three architectures, capable of approximating any continuous invariant function but less powerful than (k+1)-WL. §.§ Discussion In this section, we have reviewed the theory of the expressive power of GNNs from multiple perspectives and categorized the methods to devise GNN architectures that are more powerful than 1-WL. Since the hierarchy of the WL algorithm for graph isomorphism problem is the mainstream measurement to characterize the expressive power of different GNN models, we summarize the expressive power of existing GNN architectures in terms of the WL hierarchy and the corresponding techniques in Table <ref>. After that, we spotlight and discuss four possible future directions on the theory of the expressive power of GNNs. Break the limitation of WL algorithm. Although the hierarchy of WL algorithm has been prevalent for characterizing the expressive power of GNN in the past few years, the limitations of it has drawn more and more attention recently. On the one hand, the WL algorithm fails to measure the degree of similarity between non-isomorphic graphs due to the binary output for the graph isomorphism problem. Therefore, a more fine-grained metric for expressive power is expected. 
On the other hand, it is doubtful whether the WL algorithm reflects the true expressive power of GNN models, as it has been demonstrated empirically that GNNs that are more expressive with respect to the WL hierarchy do not necessarily perform better on real-world tasks <cit.>. As mentioned before, the WL algorithm neglects some graph properties, such as distances between nodes, and thus can leave out potentially important structural information, which makes the WL algorithm unsuitable for other graph-related tasks of interest in the real world. Hence, a metric with practical value is desirable. However, both of the above-mentioned points are very challenging from a theoretical perspective, since they require a transition from qualitative to quantitative analysis, and knowledge of graph theory must be incorporated to grasp the essential expressive power across multifarious tasks and graphs. There are still some notable attempts. To derive a more fine-grained metric, approximate isomorphism <cit.>, which quantifies similarity by some graph distance metric, has been put forward, and Boker et al. <cit.> propose continuous extensions of both 1-WL and MPNNs to graphons via evaluating a specific graph metric on graphons, which is capable of subtly representing the ability to capture similar graph structures. Notably, Zhang et al. <cit.> derive a novel expressive measure termed homomorphism expressivity based on substructure counting under homomorphism, which provides a quantitative and practical framework to address both issues. Furthermore, the proposed expressive power hierarchy is closely related to the polynomial expressive power hierarchy proposed by Puny et al. <cit.>, which also mitigates the limitations of the WL hierarchy. Expressive power for node classification and link prediction. Due to the dominance of the WL algorithm for the graph isomorphism problem, the majority of the introduced works focus on the graph classification task. Since node classification and link prediction are two fundamental tasks in graph learning, it is meaningful to analyze the expressive power of GNNs with respect to them in theory. A direct idea inspired by the WL algorithm is to distinguish nodes and potential links that cannot be determined by the 1-WL algorithm, which can be achieved through previously discussed techniques such as node-based subgraph extraction and node identifiers. To further characterize the expressive power of GNNs for node classification and link prediction tasks, one can derive a novel version of WL. For example, Barceló et al. <cit.> propose the relational WL algorithm to study the expressive power of architectures for link prediction over knowledge graphs. Later, Huang et al. <cit.> generalize the results to a wider range of models and design a framework called C-MPNN whose expressive power can be characterized by both the relational WL algorithm and first-order logic. Hu et al. <cit.> discuss the link expressive power of a series of GNNs based on 2-WL, 2-FWL, and their local variants. Zhang et al. <cit.> reveal the fundamental expressivity limitation of combining two node representations obtained by a GNN into a link representation and propose a labeling trick to enhance the link expressive power of GNNs (a minimal sketch of this trick is given at the end of this paragraph). For the node classification task, the similarity between different nodes should be taken into consideration, and a phenomenon called over-smoothing, in which node features become indistinguishable in deep GNNs, has gained much attention. We will introduce it in detail in Section <ref>.
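Returning to the labeling trick mentioned above, the following minimal sketch marks the two target nodes with an extra indicator feature before propagation, so that the encoder can distinguish the queried pair from the rest of the graph; the mean-aggregation encoder and the toy graph are illustrative assumptions, not the architecture of the cited work.

    import numpy as np

    rng = np.random.default_rng(0)

    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    A = np.maximum(A, A.T)                      # ensure the toy graph is undirected
    X = rng.normal(size=(4, 3))

    def embed(X, A, layers=2):
        """Mean-aggregation propagation as a stand-in for a GNN encoder."""
        A_hat = A + np.eye(len(A))
        P = A_hat / A_hat.sum(1, keepdims=True)
        H = X
        for _ in range(layers):
            H = np.tanh(P @ H)
        return H

    def link_score(X, A, u, v, use_labeling_trick=True):
        if use_labeling_trick:
            # Append an indicator column marking the two target nodes.
            flag = np.zeros((len(A), 1)); flag[[u, v]] = 1.0
            X = np.hstack([X, flag])
        H = embed(X, A)
        return float(H[u] @ H[v])               # combine the two node embeddings

    print(link_score(X, A, 0, 3), link_score(X, A, 0, 3, use_labeling_trick=False))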
Expressive power of graph transformer. The graph transformer is a popular topic in graph representation learning in past few years for producing SOTA results on several graph benchmarks, especially the tasks requiring long-range dependency. To delve into the great success of graph transformers, researchers have studied the expressive power of graph transformers extensively. One attempt is to establish the connection between graph transformers and MPNN. Kim et al. <cit.> prove that the graph transformer with appropriate positional encoding can approximate any linear permutation-equivariant operators and thus is not less powerful than k-IGN and k-WL and strictly more powerful than MPNN. Conversely, Cai et al. <cit.> show that by augmenting the input graph with a virtual node connecting to all graph nodes, MPNN can simulate certain type of graph transformer under mild assumptions. Although the connection demonstrates the powerful expressivity of graph transformer, the general transformer architecture does not have an advantage over GNN architecture in terms of expressive power since it is permutation-equivariant and thus fails to distinguish nodes with different positions. More specifically, Zhou et al. <cit.> prove that the k-order graph transformers operating on k-order tensors without the positional encoding to structural information are strictly less expressive than k-WL. Therefore, more efforts are made to study and utilize the additional expressive power brought by positional encoding and structural encoding. For instance, Black et al. <cit.> compare the expressive power of Absolute Positional Encoding(APE) and Relative Positional Encoding(RPE) in terms of distinguishing non-isomorphic graphs by introducing a variant of WL algorithm. With this framework, the authors prove that the two types of positional encoding are equivalent in distinguishing non-isomorphic graphs and further provide an approach to convert the positional encoding to each other while maintaining the expressive power. Similarly, Zhu et al. <cit.> also propose a novel WL algorithm named SEG-WL based on structural encoding to characterize the expressive power of graph transformers. One notable position encoding is the eigenfunction of Laplacian. Equipped with the positional encoding, the graph transformers can go beyond 1-WL <cit.> and further proved to be universal <cit.>. However, such positional encodings are not permutation-equivariant thus recent works attempt to design equivariant Laplacian positional encoding <cit.>. Expressive power of GNNs on specific graphs. Besides the common graph represented by (V,E,X), many graphs arising in real applications have additional properties and constraints thus corresponding GNN architectures are proposed to handle the tasks on them. To provide a theoretical guarantee for applying the architectures on real-world tasks and further improve the performance, the characterization of expressive power of GNNs on the specific graphs is instrumental since the original WL hierarchy does not take the distinction of specific graphs into account. For geometric graphs that are widely used to represent a 3D atomic system, Joshi et al. <cit.> propose a geometric version of WL(GWL) by considering geometric graph isomorphism that requires underlying graphs to be not only topologically isomorphic but also equivalent concerning some symmetry groups of permutation, translation, rotation, and reflection. 
Using the framework, the authors analyze the impact of key factors of geometric GNNs, including depth, tensor order, and body order, on the expressive power, and further derive the equivalence between the geometric graph isomorphism test and the universal approximation ability of geometric GNNs <cit.>. Different from Joshi's work, Beddar et al. <cit.> extend 1-WL to attributed and dynamic graphs and further establish the connection between this novel version of the WL algorithm and unfolding trees. Besides, the expressive power of GNNs on relational graphs has been discussed before. In addition, it is observed that existing works often try to generalize the WL framework to specific graphs, which, as mentioned before, may not be suitable for real applications. Therefore, generalizing other existing measurements such as subgraph counting, or characterizing expressive power via distinct tasks on the corresponding graphs, is worth exploring. § GENERALIZATION Generalization refers to the ability of a hypothesis or a learning algorithm to work well on unseen data, which is one of the most critical properties of machine learning algorithms. To quantitatively analyze the generalization property, the generalization (error) bound provides a theoretical guarantee and has drawn much attention in deep learning. In this paper, we focus on generalization bounds in graph learning and provide a systematic analysis. Specifically, we consider the generalization bounds of GNNs for the graph classification and node classification tasks. Although the dependent and unstructured nature of graph data and the complex design of graph neural networks make it difficult to derive generalization bounds for GNNs precisely, the frameworks and methods developed in deep learning theory can facilitate the computation and provide insightful generalization bounds under some assumptions and simplifications. Consistent with the conventions in deep learning, we classify the literature into four groups: complexity of hypothesis space based, PAC-Bayes based, stability based, and graph neural tangent kernel (GNTK) based, with the GNTK serving as an extension of the neural tangent kernel (NTK). Notably, the primary disparity lies in how the generalization bound on graphs integrates the statistical characteristics of graphs and the learning mechanism of GNNs. Subsequent subsections will detail these categories individually. §.§ Notations and problem formulation Before delving into the concrete methods to derive generalization bounds of GNNs, we first introduce some necessary notations and formulate the problem. Consider a training dataset Z={(x_1,y_1),(x_2,y_2),…,(x_m,y_m)} with m samples, where x_i∈𝒳 is the feature and y_i∈𝒴 is the label. All (x_i,y_i) are observed i.i.d. from an underlying distribution 𝒟 over 𝒳×𝒴. A learning algorithm A then attempts to learn a hypothesis h:𝒳→𝒴 from the training dataset, and the hypothesis space ℋ consists of all possible hypotheses h. For any hypothesis h learned by an algorithm A from the training dataset Z, the empirical risk ℛ_Z(h) and the expected risk ℛ(h) with respect to a loss function ℓ are defined respectively as follows, ℛ_Z(h)=1/m∑_i=1^mℓ(h,(x_i,y_i)), ℛ(h)=𝔼_(x,y)∼𝒟ℓ(h,(x,y)).
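To make these quantities concrete, the following minimal sketch computes the empirical risk of a least-squares hypothesis and a Monte-Carlo estimate of its expected risk under an illustrative linear data model with squared loss (both assumptions made only for this example), so that the generalization gap discussed next can be read off directly.

    import numpy as np

    rng = np.random.default_rng(0)
    w_true = np.array([1.0, -2.0])

    def sample(n):
        X = rng.normal(size=(n, 2))
        y = X @ w_true + 0.1 * rng.normal(size=n)
        return X, y

    def risk(h, X, y):
        return float(np.mean((X @ h - y) ** 2))   # squared loss

    # "Learn" a hypothesis from m = 50 training samples (ordinary least squares).
    X_tr, y_tr = sample(50)
    h = np.linalg.lstsq(X_tr, y_tr, rcond=None)[0]

    emp = risk(h, X_tr, y_tr)                      # empirical risk R_Z(h)
    exp = risk(h, *sample(100_000))                # Monte-Carlo estimate of R(h)
    print(f"empirical {emp:.4f}  expected ~{exp:.4f}  gap ~{exp - emp:.4f}")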
In addition, when A is a randomized algorithm, we consider the hypothesis A_Z learned by A from the training dataset Z and take the expectations of the empirical risk ℛ_Z(A_Z) and the expected risk ℛ(A_Z) with respect to the randomness introduced by A, respectively, as follows, ℛ_Z(A)=𝔼_A[ℛ_Z(A_Z)], ℛ(A)=𝔼_A[ℛ(A_Z)]. Then, the generalization behavior can be analyzed through the generalization gap ℛ(h)-ℛ_Z(h). Besides, it is noted that we only consider in-distribution generalization bounds, since this is the common assumption for most of the introduced methods; we will further cover out-of-distribution generalization in the discussion. §.§ Complexity of hypothesis space based Given that the learning algorithm is situated within a hypothesis space, the complexity of this space plays a crucial role in defining the range of problems the algorithm can address. Therefore, it is common practice to analyze the complexity of the hypothesis space in order to establish a generalization bound, which can be assessed using theoretical measures such as the Vapnik-Chervonenkis dimension (VC-dim) <cit.> and Rademacher complexity <cit.>, along with covering numbers <cit.>. Here we only introduce the first two measures, which have been used to obtain generalization bounds for GNNs, as the covering number often acts as an alternative to Rademacher complexity and is applied to more complex settings in the theory of deep learning. Note that the VC-dim <cit.> for the binary classification task is defined via the growth function. Therefore, we first introduce the definition of the growth function before discussing the generalization guarantee associated with the VC dimension. For any non-negative integer m, the growth function of the hypothesis space ℋ is defined as follows: Π_ℋ(m):=max_x_1,…,x_m∈𝒳|{h(x_1),…,h(x_m):h∈ℋ}|. The growth function represents the maximum number of possible labelings of m data points by ℋ. If Π_ℋ(m)=2^m, which means that every labeling of the m data points can be realized by some hypothesis in the hypothesis space, we say that ℋ shatters the dataset {x_1,…,x_m}⊂𝒳, and the VC-dim of ℋ is defined as the largest such m. From the above definition, the VC-dim is independent of the data distribution and is thus a more universal measure. The generalization bound based on the VC-dim can be obtained by the following theorem. Assume the hypothesis space ℋ has VC-dim D and m is the training set size. Then, for any δ>0, with probability 1-δ, the following inequality holds for any h∈ℋ, ℛ(h)≤ℛ_Z(h)+√(2Dlog(em/D)/m)+√(log(1/δ)/2m). Bounding the VC-dim by O(p^4N^2), where p is the number of parameters of the GNN and N is the number of input nodes, Scarselli et al. <cit.> derive generalization bounds for the node classification task on graphs for the first time. The result suggests that the generalization ability improves with the increasing number of nodes and parameters. Besides, it is worth noting that the bound is identical to that of recurrent neural networks. In contrast to the VC-dimension, Rademacher complexity <cit.> takes the data distribution into account and provides a more nuanced assessment of the richness of the hypothesis space by quantifying how well the hypothesis set can accommodate random noise. The empirical definition of Rademacher complexity is outlined below. Given a function class ℋ and a dataset Z with m samples Z={x_1,…,x_m}, the empirical Rademacher complexity of ℋ is defined by: ℜ̂_m(ℋ) = 𝔼_χ[sup_h∈ℋ1/m∑_i=1^mχ_ih(x_i)], where χ={χ_1,…,χ_m} is a random vector whose entries are uniformly chosen from {-1,+1} and the samples are i.i.d.
generated from a distribution 𝒟. Further, the Rademacher complexity of ℋ is defined as ℜ_m(ℋ) = 𝔼_Z∼𝒟^m[ℜ̂_m(ℋ)]. The (empirical) Rademacher complexity yields generalization bounds for binary classification and regression tasks according to the following theorem. Given a function class ℋ containing functions h:𝒳→[a,b] and a dataset Z with m samples Z={x_1,…,x_m}, for any δ>0 and h ∈ℋ, with probability 1-δ, we have, 𝔼[h(x)]≤1/m∑_i=1^mh(x_i)+2ℜ_m(ℋ)+(b-a)√(ln(1/δ)/2m), 𝔼[h(x)]≤1/m∑_i=1^mh(x_i)+2ℜ̂_m(ℋ)+3(b-a)√(ln(2/δ)/2m). In order to apply the theorem to the graph classification task, where the graph label is obtained by averaging binary predictions over the nodes, Garg et al. <cit.> bound the Rademacher complexity of a GNN by considering the Rademacher complexity of the computation tree of each node, where the GNN update formula follows a mean-field form: x_v^l=ϕ(𝐖_1x_v+𝐖_2ρ (∑_u∈𝒩(v)g(x_u^l-1))). To illustrate how the generalization bound varies with the different parameters, they define a combined parameter 𝒞=C_ρC_gC_ϕB_𝐖, the product of the Lipschitz constants of ρ, g, ϕ and the norm bound B_𝐖 of the weight 𝐖_2. Let d denote the maximum node degree in the graph, r the embedding dimension, L the number of layers, m the size of the training set, and γ the margin parameter in the loss; the dependency can be expressed as follows: O(rd/(γ√(m))) for 𝒞<1/d, O(rdL/(γ√(m))) for 𝒞=1/d, and O(rd√(rL)/(γ√(m))) for 𝒞>1/d. The generalization bound given by Equation (<ref>) is much tighter than its VC-dim counterpart, since the latter has a higher-order dependency on the number of parameters of the neural network and the input size N is at least the maximum degree d. In addition, they find that the bound for GNNs in the equation is comparable to that of RNNs, which indicates that a GNN can be seen as a sequentialized RNN. This observation is consistent with that drawn from the VC-dim based generalization bound. Besides, Lv <cit.> also proves a Rademacher-complexity generalization bound for a GCN with one hidden layer on the node classification task. The derived bound is sharp, with a lower bound matching the upper bound. In contrast to the above-mentioned works, which focus on generalization bounds for GNNs in the inductive setting, there is a growing body of research examining generalization bounds in the more realistic transductive setting. These studies typically leverage transductive Rademacher complexity <cit.> to derive generalization bounds, which take unobserved samples into account. Formally, denoting the sizes of the training and test sets by m and u respectively, the target is to learn a function that generates the best predictions for the labels of the test set based on the features of both the training and test sets and the labels of the training set. Within the transductive framework, Oono and Suzuki <cit.> establish a generalization bound for multi-scale GNNs in node classification tasks, while Esser et al. <cit.> refine the bound for multi-layer GNNs within a planted model to investigate graph and feature alignment. In addition, Deng et al. <cit.> provide a generalization upper bound to guarantee the performance of GNN-based recommendation systems.
Recently, Tang and Liu <cit.> introduce a high-probability generalization bound for GNNs trained with SGD in the transductive setting, which accounts for the comparable performance of shallow and deep models, as well as, to some extent, the effectiveness of techniques like early stopping and DropEdge. §.§ PAC-Bayes based Over recent years, the probably approximately correct (PAC)-Bayesian approach <cit.> has garnered significant attention for providing more realistic and tighter generalization bounds compared to traditional VC-dimension-based and Rademacher-complexity-based bounds. Originally introduced to measure the learnability of a problem, the PAC concept <cit.> assesses whether a learning algorithm can output the correct result with high probability when given a specific number of training examples from an unknown distribution. Since the bounds provided by the classical PAC framework are unsatisfactory due to the large hypothesis space, the PAC-Bayesian approach incorporates the Bayesian view by placing a prior distribution over the hypothesis space to derive tighter generalization bounds for the target machine learning model. In the following discussion, we present the framework developed in PAC-Bayesian theory for analyzing the generalization bounds of GNNs. In this context, we focus on the margin bound, utilizing a multi-class margin loss function with a threshold γ. The empirical margin loss is defined as: ℓ_Z,γ(h)=1/m∑_i=1^m1[h(x_i)[y_i]≤max_j≠ y_ih(x_i)[j]+γ], where Z is the training set with m examples. Further, the corresponding generalization (expected) margin loss is given as: ℓ_𝒟,γ(h)=Pr_(x,y)∼𝒟[h(x)[y]≤max_j≠ yh(x)[j]+γ], where 𝒟 is an unknown distribution from which samples are generated. Assume there is a prior distribution 𝒫 and a posterior distribution 𝒬 over the model parameters θ in the Bayesian view; then the empirical and generalization losses are defined in expectation, denoted as 𝔼_θ∼𝒬[ℓ_Z,γ(h(θ))] and 𝔼_θ∼𝒬[ℓ_𝒟,γ(h(θ))] respectively. The PAC-Bayesian generalization bound of the model can be obtained via the following theorem.  <cit.>: Let 𝒫 and 𝒬 be the prior and posterior distributions over the model with parameters θ, and Z be a dataset with m samples generated i.i.d. from distribution 𝒟. Then, for any δ∈ (0,1) and h ∈ℋ, with probability 1-δ, we have 𝔼_θ∼𝒬[ℓ_𝒟,γ(h(θ))] ≤𝔼_θ∼𝒬[ℓ_Z,γ(h(θ))]+√((𝐊𝐋(𝒬||𝒫)+log(m/δ))/2(m-1)). In Theorem <ref>, the selection of the distribution pair 𝒫 and 𝒬 can be arbitrary. However, choosing disparate distributions may render the computation of the KL-divergence challenging, while oversimplified choices could result in significant empirical and generalization losses. To address this issue, Neyshabur et al. <cit.> offer an effective approach to determine the bound by taking the posterior distribution over parameters to be a known perturbation distribution centered at the learned parameters. The perturbation-based PAC-Bayesian generalization bound can be formally stated as follows: Let h∈ℋ:𝒳→ℝ^K be any model with parameters θ, Z be a dataset with m samples generated i.i.d. from distribution 𝒟, and 𝒫 be a prior distribution on the parameters that is independent of the training data. Then, for any γ,δ>0, any parameters θ and any random perturbation Δθ s.t. Pr_Δθ[max_x∈𝒳|h(θ+Δθ)-h(θ)|_∞< γ/4]> 1/2, with probability at least 1-δ, we have: ℓ_𝒟,0(h(θ)) ≤ℓ_Z,γ(h(θ))+√((2𝐊𝐋(𝒬(θ+Δθ)||𝒫)+log(8m/δ))/2(m-1)).
As suggested by Theorem <ref>, if a model's output remains stable for any input even after a slight parameter perturbation with high probability, the generalization bound can be established by bounding the output change under perturbation. Inspired by this framework, Liao et al. <cit.> first apply PAC-Bayesian approach to GNNs and utilize the above theorem to derive the generalization bounds for GCNs and MPGNNs on graph classification tasks. The perturbation analysis involved controlling the maximum node representation and the maximum change of node representation. Here, we present the generalization bound for GCNs, with the prior distribution 𝒫 and the perturbation distribution modeled as a Gaussian distribution with zero mean and covariance matrix: For any B>0, L>1, d>1, r>1, let f∈ℋ:𝒳×𝒢→ℝ^K be a L-layer GCN and Z be a dataset with m samples that generated i.i.d. from distribution 𝒟. Then for any δ,γ>0, with probability at least 1-δ, we have, ℓ_𝒟,0(h(θ)) ≤ℓ_Z,γ(h(θ)) +𝒪(√(B^2d^L-1L^2rlog(Lr) 𝐖+logml/δ/γ^2m)), where 𝐖 =∏_i=1^l||𝐖_i||_2^2∑_i=1^l||𝐖_i||_F^2/||𝐖_i||_2^2, B is upper bound for the l_2 norm of the feature, L is the number of layers, d is the maximum node degree considering itself, r is the maximum hidden dimension, W_i is the weight matrix for the i-th layer. The bound implies a dependency on the maximum node degree d, maximum hidden dimension r, and spectral norm of weight matrix. Compared to the previous Rademacher complexity bound for MPNN, the PAC-Bayesian generalization bound is more tight with respect to the maximum node degree d and maximum hidden dimension r. Specifically, for maximum node degree the PAC-Bayesian bound scale as 𝒪(d^L-1) while Rademacher complexity bound scale as 𝒪(d^L-1√(log(d^2L-3))). For maximum hidden dimension the PAC-Bayesian bound scale as 𝒪(√(rlog r)) while Rademacher complexity bound scales as 𝒪(r√(log r)). The comparison of dependency on spectral norm of weight is inaccessible theoretically without knowing the actual value. It is also noticeable that the maximum node degree is the only graph statistics factor in the generalization bound, which indicates that the relationship between graph structure and generalization ability may not be fully explored. Following Liao's work, Sales <cit.> further improve the bound by reducing the factor of the exponential term of maximum node degree and utilizing a theorem on random matrix to bound the spectral norm more precisely. Ju et al. <cit.> delves into the relationship between the generalization bound and the graph diffusion matrix, which offers a more detailed representation of the graph structure. By refining the perturbation analysis using Hessians, they achieve a tighter and more precise bound that scales with the largest singular value of the diffusion matrix instead of the previous method based on the maximum node degree. In a separate study, Sun and Lin <cit.> apply the PAC-Bayesian framework to the adversarial robustness setting. They derive adversarially robust generalization bounds for both GCNs and MPNNs in graph classification tasks. This new bound eliminates the exponential dependency on the maximum node degree. §.§ Stability based Besides the perturbation analysis that exerts perturbation to the weight matrix in PAC-Bayesian approach to guarantee the generalization, it is intuitive that the performance of an algorithm with good generalization ability does not degrade much after a slight change in the training data. 
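Before this intuition is formalized in the next paragraph, the following toy sketch probes it empirically for a simple learner (ridge regression, used here only as an illustrative stand-in for a GNN): the learner is retrained with single training points removed, and the largest observed change in the loss serves as a crude proxy for the stability constant introduced below.

    import numpy as np

    rng = np.random.default_rng(0)

    def ridge(X, y, lam=1.0):
        d = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

    def loss(w, X, y):
        return (X @ w - y) ** 2

    X = rng.normal(size=(200, 5))
    y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=200)

    w_full = ridge(X, y)
    # Largest loss change over removed index i and evaluation point: a crude
    # empirical proxy for the uniform-stability constant on this sample.
    beta_hat = 0.0
    for i in range(len(X)):
        w_i = ridge(np.delete(X, i, 0), np.delete(y, i))
        diffs = np.abs(loss(w_full, X, y) - loss(w_i, X, y))
        beta_hat = max(beta_hat, float(diffs.max()))
    print(f"empirical stability proxy: {beta_hat:.2e}")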
The stability is the measurement to quantify the change of the output of an algorithm when the training data is modified. Here we only introduce the uniform stability <cit.> that is most widely-used in deriving the generalization bound. Before giving the formal definition of the uniform stability, we introduce modification operation to the training data in advance. Data Modification <cit.> Let 𝒳 be the input space, 𝒴 be the output space and 𝒵=𝒳×𝒴. For x_i∈𝒳 and y_i∈𝒴⊂ℝ, let Z be a training set with m examples Z={z_1=(x_1,y_1),…,z_m=(x_m,y_m)} and all samples are i.i.d. from 𝒟. Two fundamental modifications to the training set Z are as follows: * Removing i^th data point in the set Z is represented as, Z^\ i=z_1,…,z_i-1,z_i+1,…,z_m, * Replacing i^th data point in the set Z is represented as, Z^i=z_1,…,z_i-1,z_i^',z_i+1,…,z_m. Then the uniform stability for a randomized algorithm can be defined as follows, Let A be a randomized algorithm trained on dataset S and A_Z is the output hypothesis, then A is β_m-uniformly stable with respect to a loss function ℓ, if it satisfies, sup_Z,z|𝔼_A[ℓ(A_Z,z)]-𝔼_A[ℓ(A_Z^\ i,z)]|≤β_m. The request for uniform stability is strict since it holds for every possible training set with m samples. In addition, a randomized algorithm is taken into consideration to analyze the model optimized by some randomized algorithm e.g. stochastic gradient descent (SGD). Then, the generalization gap based on the uniform stability can be derived by the following Theorem. Assume a uniformly stable randomized algorithm (A_Z,β_m) with a bound loss function 0≤ℓ(A_Z,z)≤ M for any Z,z. Then, for any δ>0, with probability 1-δ over choice of an i.i.d size-m training set Z, we have: 𝔼[ℛ(A_Z)-ℛ(A_Z)]≤2β_m+(4mβ_m+M)√(log1/δ/2m). With Theorem <ref>, the generalization bound can be obtained via proving the uniform stability of an algorithm. Besides, to ensure that the generalization gap can converge to 0, it requires β_m to decay faster than 𝒪(1/√(m)) as m→∞. Assuming the randomized algorithm to be a single-layer GCN optimized by SGD, Verma and Zhang <cit.> first analyze the uniform stability of GCN and derives the stability-based bound on the node classification task. They derive the uniform stability constant and corresponding generalization bound of the model, which are given in Theorem <ref> and Theorem <ref>, respectively. Let the loss and activation function be Lipschitz-continuous and smooth functions. Then a single layer GCN model training by SGD algorithm for T iterations is β_m-uniformly stable, where β_m ≤(ηα_ℓα_σ v_ℓ(λ_𝒢^max)^2 ∑_t=1^T(1+η v_ℓ v_σ(λ_𝒢^max)^2)^t-1) / m, where η>0 is the learning rate, α_ℓ,α_σ>0 are Lipschitz constants for the loss function and activation function respectively, v_ℓ,v_σ>0 are the Lipschitz constants for the gradient of loss function and activation function respectively, and λ_G^max is the largest absolute eigenvalue of the graph diffusion matrix. Let the loss and activation function be Lipschitz-continuous and smooth functions, then a single layer GCN model training by SGD algorithm for T iterations is β_m-uniformly stable. 
The generalization bound based on uniform stability is given as follows, 𝔼_SGD[ℛ(A_Z)-ℛ̂(A_Z)] ≤1/m𝒪((λ_𝒢^max)^2T) + (𝒪((λ_𝒢^max)^2T)+M)√(log1/δ/2m) The obtained uniform stability constant and generalization bound make sense since the largest absolute eigenvalue of graph diffusion matrix can be controlled by various normalization methods, which is in accord with the stable training behavior and better performance when adopting a normalized graph diffusion matrix. They also stress the importance of batch-normalization that has similar effect on the training of multi-layer GNNs. However, the bound is incomparable to those of previous for considering a randomized learning algorithm. Later, Zhou and Wang <cit.> extend the work to multi-layer GNNs and further demonstrate that increasing number of layers can enlarge the generalization gap. Different from Zhang <cit.>'s setting, Cong et al. <cit.> consider a transductive setting and study the generalization bound of multi-layer GCN optimized by full batch gradient descent on node classification task by transductive uniform stability. They delve into the Lipschitz continuity, smoothness, and gradient scale to compare the generalization bound of different models. §.§ GNTK based Neural tangent kernel (NTK) <cit.> is a kernel-based method to analyze over-parameterized neural networks trained by gradient descent in the infinite-width limit in deep learning. Du et al. <cit.> generalize the theory to graph learning via combining GNN with graph kernels. Specifically, they consider the graph classification task training on n graphs 𝒢={𝒢_1,𝒢_2,…,𝒢_n}. Let f(θ, 𝒢_i) be the output of the GNN parameterized by θ testing on graph 𝒢_i and F(t)=(f(θ, 𝒢_i))_i=1^n, then the training dynamics by gradient descent with infinitesimally small learning rate, i.e. dθ/dt=-∇ℓ(θ(t)) considering square loss function ℓ(θ)=1/2∑_i=1^n(f(θ,𝒢_i)-y_i)^2 follows the formula dF/dt=-𝐇(t)(F(t)-y), where 𝐇(t)_ij=<∂ f(θ(t),𝒢_i)/∂θ, ∂ f(θ(t),𝒢_j)/∂θ>. Since it is proved that for an over-parameterized neural network, the matrix 𝐇(t) is almost constant regardless of different t, the training process can be viewed as a kernel regression problem. To be further, the matrix 𝐇(0) can converge to a deterministic kernel matrix called NTK if the parameters are randomly initialized by Gaussian distribution. This property facilitates the analysis of the generalization bound of GNN as long as the GNN is converted to its GNTK. Denoted the kernel matrix as 𝐇. They provide the technique to perform the conversion and derive the generalization bound of a single-layer GNN via Rademacher complexity: Given n training data {(𝒢_i,y_i)}_i=1^n drawn i.i.d. from the underlying distribution 𝒟. Then for any loss function ℓ:ℝ×ℝ→ [0,1] that is 1-Lipschitz in the first argument such that ℓ(y,y)=0 and any δ>0, with probability at least 1-δ, the generalization loss of the GNTK predictor can be upper bounded by 𝔼_(G,y)∼𝒟 [ℓ(f_ker(𝒢),y)] ≤𝒪(√(y^T𝐇^-1 y· tr(𝐇))/n+√(log(1/ δ)/n)). It is observed that the generalization bound derived by GNTK depends on the label y and kernel matrix 𝐇̂ which is data-dependent and different from bounds obtained by other methods. The data-dependent generalization bound is directly related to training samples and thus reflects the property of the data generation process. To provide a more concrete bound, they further bound y^T𝐇^-1 y and tr(𝐇) respectively to demonstrate that the GNN can learn the corresponding class of graph labeling function with polynomial number of samples. 
This is the first sample complexity analysis with respect to the generalization bound of GNN. §.§ Discussion This section presents four main methods for analyzing the generalization bound of GNNs. These methods typically integrate analytical tools from deep learning theory with GNN architecture and graph structure to derive the generalization bounds. However, due to the limitations in the methods themselves and the simplified setting that is far from real applications such as GNNs with a single hidden layer, most of the bounds have an enormous gap compared to empirical results and provide little insight into the architecture design and training techniques. Furthermore, the derived bounds can contradict to empirical result to some degree. For example, the complexity of the hypothesis space bound increases as the number of parameters becomes larger and the stability-based bound grows with respect to the iterations of optimization. To handle the problems and further improve the generalization bound of GNNs, researchers can leverage recent advances in deep learning theory such as local Rademacher complexity <cit.>, marginal-likelihood PAC-Bayes <cit.> and ℋ-consistency <cit.>. Besides, it is observed that existing generalization bounds often heavily rely on the number of nodes and maximum node degree as the graph-related term in their final expressions, which is too coarse-grained to capture the complex graph structure information. Therefore, considering additional graph statistics in constructing these bounds could establish a closer link between the generalization bound and the underlying graph structure information, potentially enhancing the understanding of how graph characteristics influence generalization performance. It is also noted that existing research in this area can also be categorized based on the task addressed, with a focus on node classification and graph classification tasks. Table <ref> summarize the generalization error bound of GNNs with respect to the task and the method to derive the bound. Concretely, we only preserve the terms that are related to the input graphs in order to show its dependence on graph properties concisely. Link prediction tasks have been less explored due to challenges related to edge partitioning during training and the complexity of the optional prediction function involving two nodes, which violates the settings that samples are independent. Recent studies on the generalization bound of GNNs have shifted towards transductive learning. The transductive learning acknowledges the presence of unlabeled data, which mirrors real-world scenarios in graph-based learning and allows for the integration of optimization algorithms and training data size into the generalization bound. Intuitively, the transductive learning setting shrinks the hypothesis space by attempting to limit the hypothesis space to the space around optimal hypothesis, which provides a tighter and more practical bound compared to the common VC-dim based and Rademacher complexity generalization bounds that consider the complexity of the whole hypothesis space. Therefore, the transductive generalization gap, defined as the difference between training error and testing error, offers a clearer verification of results through experimentation. 
In addition, methods for deriving generalization bounds established in the inductive setting have been non-trivially extended to their transductive counterparts in deep learning theory; such tools, including transductive PAC-Bayes <cit.> and transductive Rademacher complexity <cit.>, hold promise for analyzing the generalization bounds of GNNs in transductive learning settings in the future. To deepen our understanding of GNNs and graphs, future research efforts could focus on establishing generalization bounds for GNNs that account for the GNN architecture, optimization techniques, and specific graph structures. Regarding GNN architectures, current works predominantly focus on standard architectures like GCNs or MPNNs, often limited to single-layer models or fixed weight matrices that do not take the training process of GNNs into consideration. Future investigations could explore popular GNN designs such as attention mechanisms, skip connections with multiple layers, and learnable weight matrices. In terms of optimization algorithms, while some studies analyze generalization bounds under standard stochastic gradient descent (SGD), the impact on generalization of other training techniques like momentum, adaptive learning rates, gradient clipping, and normalization remains largely unexplored. Another avenue for research involves deriving generalization bounds specific to different types of graphs, such as directed graphs, sparse graphs, heterophilic graphs, and dynamic graphs, by imposing additional constraints tailored to each graph type. This approach could lead to tighter generalization bounds that offer insights into the properties of diverse graph structures. Besides, deriving a sharp generalization bound, with a lower bound matching the upper bound, is also a promising direction for characterizing the generalization ability of GNNs precisely. In the preceding discussion, we have scrutinized the generalization capacity of GNNs by examining generalization bounds in the context of in-distribution generalization, assuming that training and testing graph data are drawn from the same distribution. However, real-world scenarios frequently exhibit distribution shifts between training and test data, leading to a notable decline in model performance. Consequently, out-of-distribution (OOD) generalization emerges as a crucial area for evaluating the generalization ability of GNNs. Although various OOD generalization algorithms with theoretical guarantees have been proposed and successfully applied, systematically reviewing the theory of OOD generalization on graphs poses significant challenges due to the intricate nature of graph-related tasks, the diverse types of distribution shift such as varying graph sizes and distinct feature distributions, and the evolving GNN frameworks inspired by cross-domain knowledge. Despite these complexities, notable theoretical advancements have been made in OOD generalization on graphs. Xu et al. <cit.> investigate the extrapolation capabilities of GNNs trained via gradient descent concerning algorithm alignment within the aforementioned neural tangent kernel (NTK) framework. Ma et al. <cit.> establish generalization bounds of GNNs for node classification over any subgroup of unlabeled nodes under distribution shift using the PAC-Bayesian framework. Additionally, Zhou et al.
<cit.> demonstrate that link prediction based on permutation-equivariant node embeddings obtained through GNNs on graphs of increasing size tends to converge to random guessing, thereby compromising OOD generalization capabilities. For a comprehensive overview of methodologies and strategies pertaining to the OOD problem on graphs, we refer readers to Li et al. <cit.>. § OPTIMIZATION In the previous sections, we discussed generalization and expressive power but largely neglected the training process by which such GNNs are obtained, which involves the optimization of GNNs. The goal of optimization in the training process of GNNs is to find the optimal parameters that minimize the loss on the training samples, which can be expressed as θ^* = argmin_θ L(θ), L(θ)≜1/n∑_i=1^nℓ(f_θ(x_i),y_i).[Here we omit the regularization term for simplicity.] The field of optimization theory in deep learning explores model training procedures, addressing whether the model converges towards optimal solutions and how fast this convergence is. However, compared to the extensive exploration of generalization and expressiveness in GNNs and of their counterparts in deep learning, the optimization theory of GNNs has rarely been explored. This is primarily attributed to the complex training dynamics introduced by graph convolutions and the diverse array of methods and techniques employed to facilitate GNN training. In this section, we review the theory of optimization of GNNs from three aspects. First, we present the works revealing the dynamics of gradient descent in training GNNs, which is the foundation of optimization. Then, we focus on how the training process of GNNs benefits from some useful training techniques, including weight initialization and normalization. Lastly, we introduce graph sampling techniques tailored for variance reduction, devised to enhance the efficiency of GNN training and bolster the scalability of GNN models. §.§ Dynamics of gradient descent in GNN Gradient descent is a widely used optimization algorithm in deep learning that updates the parameters following the negative gradient of the loss function w.r.t. the parameters. Although gradient descent is popular, its dynamics for training GNNs are understudied due to the non-convexity and non-linearity of the graph convolution operation with non-linear activations. Since the graph convolution operation is highly related to the graph structure, the dynamics of gradient descent in GNNs can promote the understanding of the role of the graph structure in the training of GNNs. In this subsection, we review the preliminary attempts to analyze the dynamics of gradient descent in GNNs, most of which follow the optimization theory developed in deep learning. To be specific, they usually consider GNNs in the linearized or NTK regime, or consider shallow GNNs with only one hidden layer. Besides, we only focus on the basic form of gradient descent, that is, θ_t+1 = θ_t - η∇ L(θ_t), where η is the learning rate. Xu et al. <cit.> study the gradient dynamics of GNNs for the first time via linearized GNNs, i.e., GNNs with linear activations, while maintaining the non-linear nature of the dynamics through the non-convex loss function. Since linearized and ReLU GNNs empirically exhibit highly similar training behavior and performance, this setting is meaningful and can provide insights into the training dynamics of practical GNN architectures; a minimal sketch of such a linearized GNN trained by gradient descent is given below.
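The sketch below instantiates this linearized setting with a toy two-layer linear GNN of the form f(𝐀,𝐗,𝐖)=𝐖_q(𝐖_1𝐗𝐀̂), trained by plain gradient descent on a squared loss over two labeled nodes. The toy graph, dimensions, learning rate, and targets are illustrative assumptions; with a small enough step size the loss typically decays roughly geometrically, in line with the linear convergence rate stated next.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy graph: normalized adjacency with self-loops; features are columns of X.
    A = np.array([[0, 1, 1, 0], [1, 0, 1, 1], [1, 1, 0, 0], [0, 1, 0, 0]], float)
    A_hat = A + np.eye(4)
    P = A_hat / A_hat.sum(0, keepdims=True)
    X = rng.normal(size=(3, 4))          # d x n feature matrix
    Y = rng.normal(size=(1, 2))          # targets for the two labeled nodes
    idx = [0, 2]                         # labeled nodes

    W1 = rng.normal(scale=0.5, size=(5, 3))
    Wq = rng.normal(scale=0.5, size=(1, 5))

    M = (X @ P)[:, idx]                  # graph convolution, restricted to labels
    eta = 0.02
    for t in range(401):
        E = Wq @ W1 @ M - Y              # residual of the linear GNN
        if t % 100 == 0:
            print(t, 0.5 * float((E ** 2).sum()))
        gWq = E @ (W1 @ M).T             # gradient w.r.t. the output weights
        gW1 = Wq.T @ E @ M.T             # gradient w.r.t. the first-layer weights
        Wq -= eta * gWq
        W1 -= eta * gW1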
Analyzing the gradient dynamics in the form of gradient flow, the authors prove that a multi-layer linearized GNN trained by gradient descent with the squared loss converges to its global minimum at a linear rate. The main result is given as follows: Let f be an L-layer linear GNN defined as f(𝐀,𝐗,𝐖)=𝐖_q[𝐖_L(…(𝐖_2(𝐖_1𝐗𝐀)𝐀)…)𝐀]. 𝐖_t represents the collection of parameters at time t>0 with initialization 𝐖_0. ℓ(𝐖_t) denotes the training loss of f with parameters 𝐖_t and ℓ^* denotes the global minimum of the training loss. The loss function used here is the squared loss. Then, for any T>0, we have ℓ(𝐖_T)-ℓ^* ≤(ℓ(𝐖_0)-ℓ^*) e^-4 λ_T^(L)ω_min^2(𝐗(𝐀^L)_* ℐ) T, where λ_T^(L) is the smallest eigenvalue of the product of parameter matrices up to time T, that is, λ_T^(L):=inf_[0,T]λ_min((𝐖_t^(1:L))^T𝐖_t^(1:L)) and 𝐖^(1:l):=𝐖_(l)𝐖_(l-1)…𝐖_(1) for any l∈{0,…,L} with 𝐖_t^(1:0):=I. ω_min(·) denotes the smallest singular value of the matrix. (·)_*ℐ represents the sub-matrix composed of the columns indexed by the labeled samples. Theorem <ref> makes the dependence on several factors explicit and further guarantees a linear convergence rate of the linearized GNN to the global minimum as long as ω_min^2(𝐗(𝐀^L)_* ℐ)>0 and λ_T^(L)>0 for T>0, which is empirically verified. Besides, the authors further show that the latter condition can be satisfied by proper initialization. Different from Xu's work, which analyzes the gradient dynamics of GNNs in weight space, Yang et al. <cit.> focus on the evolution of the function learned by a GNN with ReLU activation and an arbitrary number of layers to demonstrate how GNNs utilize graph structure information during training. To be specific, the authors utilize a node-level GNTK to prove that the optimization of GNNs actually performs a kernel-graph alignment in the NTK regime. Specifically, as proved in Section <ref>, when the width goes to infinity the kernel matrix converges to the deterministic kernel matrix at t=0 and remains constant during training. Therefore, the GNTK can be viewed as a constant kernel. To perform the transformation and propagation steps in the form of the NTK in each layer, the optimization incorporates the adjacency matrix into the kernel function in the propagation step, which indicates that the gradient descent optimization of GNNs implicitly utilizes the graph structure information to promote training and performs a kernel-graph alignment. Lin et al. <cit.> also study the training dynamics of GNNs with ReLU activation and Gaussian initialization in the NTK regime and further take the graph information into consideration by introducing a novel measure named the graph disparity coefficient, which quantifies the dissimilarity between the graph features and the graph structure, i.e., the graph Laplacian. The authors provide a high-probability convergence guarantee for over-parameterized GNNs trained by gradient descent, demonstrating that GNNs converge to the global minimum with high probability as the width of the GNN increases. Besides, the number of iterations required to reach the global minimum is O(τ^2poly(D,L,N)), where τ, D, L, and N are the graph disparity coefficient, the width of the GNN, the number of layers, and the number of nodes respectively, which is in accord with the intuition that a small graph disparity coefficient, corresponding to high consistency between graph features and graph structure, accelerates convergence. A few works also delve into the training dynamics of GNNs with shallow architectures. Zhang et al.
<cit.> introduce a learning algorithm characterized by specific tensor initialization and accelerated gradient descent techniques, aiming to facilitate the convergence of one-hidden-layer GNNs to their global minima with zero generalization error. This approach is framed within the context of model estimation, wherein the objective is to reconstruct the parameters of an unknown model sharing an identical architecture. Demonstrated to exhibit linear convergence rates for both regression and binary classification tasks on graphs, this algorithm outperforms conventional gradient descent methods in terms of speed. In a similar vein to Zhang's framework, Awatshi et al. <cit.> postulate that labels stem from an undisclosed one-hidden-layer GNN, employing a more generalized Gaussian initialization scheme for weight matrices and input features. Noting the unsuitability of the prevalent NTK regime, known for its extremely slow convergence in highly over-parameterized neural networks and inconsistent behavior with real-world GNN training dynamics, the researchers leverage dual activation approaches <cit.> to surpass NTK constraints. They establish that a single-hidden-layer message-passing GNN employing ReLU activation, Gaussian initialization, and optimized through gradient descent can converge to an expected loss of ϵ with respect to the squared loss function in O(1/ϵ^2log(1/ϵ)) iterations. Besides gradient descent strategies, Yadati et al. <cit.> present a convex programming framework, which offers a verifiable equivalence to the training process of a two-layer GCN with ReLU activation. This connection bridges the chasm between the non-convex optimization characteristic of GNN training with nonlinear activations and the well-established convex optimization paradigms, marking a pioneering integration of these disparate optimization theories. §.§ Training tricks While the gradient descent algorithm is employed to update GNNs parameters iteratively towards optimal values that minimize the loss function, the practical training process of GNNs can encounter overwhelming challenges like vanishing/exploding gradients and over-smoothing. These issues may impede convergence speed or diminish model performance. As a result, several training strategies have been proposed to mitigate these challenges, some with broad applicability in deep learning and others tailored specifically for graph-related tasks. In this section, we will introduce two categories of training techniques: weight initialization and normalization methods. Weight initialization. The weight initialization is crucial to avoid vanishing/exploding gradient problems in deep learning. To achieve the goal, some well-known initialization methods such as Kaiming initialization <cit.>, Xavier initialization <cit.> and, Lecun initialization <cit.> have been devised to regulate variance consistency across layers during both forward and backward propagation. The forward variance var(x^(l)) and backward variance var(∂ Loss/∂ x^(l)) for l-th layer are computed by the mean of the corresponding variance of each node, that is var(x_i^(l)) and var(∂ Loss/∂ x_i^(l)) respectively, where the loss function is standard cross entropy loss. Although initially tailored for fully connected neural networks and CNNs, these approaches surprisingly exhibit efficacy when applied to more intricate graph convolution layers entailing a message-passing scheme and diverse graph structures. Inspired by this unforeseen adaptability, Li et al. 
<cit.> delve into analyzing the forward and backward variances across layers for message-passing GNNs. They further derive explicit expressions for these variances by deconstructing the computation graph into distinct message propagation paths and subsequent weight propagation paths. The expressions of var(x_i^(l)) and var(∂ Loss/∂ x_i^(l)) for node i in the l-th layer are given as follows. var(x_i^(l))=(∏_k_1=0^l-1 m_1^(k_1)/2^l)(∏_k_2=0^l-1var(𝐖^(k_2)))([𝐀^(l) x^0]_i^2), var(∂ Loss/∂ x_i^(l))= (∏_k_1=l+1^L-1 m_2^(k_1)(C-1)/2^(L-l)N^2C) (∏_k_2=l+1^L-1var(w^(k_2)))([𝐀^(L-l)1]_i^2), where var(w^(k)) is the variance of the distribution from which the entries of the k-th layer weight matrix are sampled, m_1^(k) and m_2^(k) are the input and output dimensions of the k-th layer weight matrix, C is the output dimension of the last layer, N is the number of nodes, x^0 is an N-dimensional vector obtained by averaging the elements of each node's input feature, 𝐀 is the normalized adjacency matrix of the graph with self-loops, and [·]_i^2 is the square of the i-th element of the vector. From the above two equations, it is observed that the newly derived variance expressions for GNNs additionally take the graph structure information and the message-passing scheme into consideration. Besides, the variance of nodes within the same layer differs due to the variable receptive field, which is a significant difference from the setting in FNNs and CNNs, where neurons share the same variance at each layer. Based on the analysis, the authors propose a novel initialization method for GNNs named Virgo that stabilizes the variance of each node across layers during the forward and backward propagation respectively. Empirically, Virgo initialization surpasses the performance of other initialization methods developed for FNNs and CNNs in most cases when tested on multiple GNN architectures and graph learning datasets. Normalization. Normalization methods play a vital role in enhancing optimization efficiency and model performance in deep learning practice. These methods typically normalize features, weights, or gradients to impart specific statistical characteristics aligned with diverse objectives, thereby bolstering optimization stability. Following the introduction of BatchNorm <cit.>, numerous normalization techniques have emerged across various domains, with extensive research dedicated to scrutinizing their effectiveness. Nevertheless, minimal attention has been directed towards normalization methods customized for GNNs or towards evaluating the theoretical suitability of normalization techniques from other domains. Notably, Cai et al. <cit.> first compare the performance of three well-known normalization methods in deep learning, namely BatchNorm <cit.>, LayerNorm <cit.>, and InstanceNorm <cit.>, when adapted to GNNs. The authors show that InstanceNorm helps GNNs converge faster and achieve better performance than BatchNorm, while LayerNorm has little effect. Different from previous analyses that attribute the success of these normalization methods to the scale operation <cit.>, the authors focus on the shift operation in the normalization process and further prove that the shift operation in InstanceNorm can be viewed as a preconditioning of the graph diffusion matrix that reduces its condition number, thus bringing about smoother optimization and accelerating the convergence of training (a minimal sketch of this graph-wise shift is given below). However, BatchNorm is less effective since different batches of graph data can have varying statistics, which fail to approximate the statistics of all samples precisely.
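The following minimal sketch shows the graph-wise (InstanceNorm-style) shift-and-scale discussed above, applied to each graph in a batch separately. The α parameter anticipates the GraphNorm modification described next, with α=1 recovering the standard full shift; the batch contents are purely illustrative.

    import numpy as np

    def graph_norm(H, graph_id, alpha=1.0, eps=1e-5):
        """Per-graph normalization of node features: subtract alpha times the
        graph mean and divide by the standard deviation of the shifted features
        (alpha = 1 gives the standard InstanceNorm-style shift)."""
        out = np.empty_like(H)
        for g in np.unique(graph_id):
            mask = graph_id == g
            shifted = H[mask] - alpha * H[mask].mean(0, keepdims=True)
            out[mask] = shifted / (shifted.std(0, keepdims=True) + eps)
        return out

    rng = np.random.default_rng(0)
    H = rng.normal(loc=3.0, size=(6, 4))        # node embeddings of a small batch
    graph_id = np.array([0, 0, 0, 1, 1, 1])     # two graphs with three nodes each
    print(graph_norm(H, graph_id).mean(axis=0)) # per-feature means close to zero

Because the statistics are computed per graph rather than per batch, the result does not depend on which graphs happen to be grouped together, which is exactly the failure mode of BatchNorm noted above.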
Besides, the authors demonstrate that the standard shift operation can degrade the expressive power of the GNN architecture for losing graph structure information such as the degree of nodes. Therefore, they devise a novel normalization method named GraphNorm that automatically adjusts the step of shift operation to preserve the graph structure information. Besides Cai's work, there are some other graph-specific normalization methods aiming to deal with problems in graph representation learning such as over-smoothing and varying graph size <cit.>. Very Recently, Eliasof et al. <cit.> introduce GRANOLA that adaptively perform normalization on node feature according to the input graph via attaching random feature that we mention in Section <ref> and then passing through an additional GNN, which not only enhances the performance of GNNs across various graph benchmarks and architectures, but also increases the expressive power of the GNN model. §.§ Sampling methods performing variance reduction In recent years, training GNNs on large graph datasets has attracted much attention for growing size of graph data. However, original GNNs fail to be applied to large graphs directly since the training process of GNNs requires the adjacency matrix and feature matrix of the entire graph and intermediate node embeddings computed by the exponentially growing receptive field of each node with respect to the depth of GNNs, which consumes much memory and can result in extremely low convergence rate in large graphs. Therefore, it is necessary to utilize sampling for training GNNs efficiently and improving the scalability of GNNs <cit.>. Generally, current research efforts in this domain are typically classified into four categories: node-wise sampling, layer-wise sampling, (sub)graph-wise sampling, and sampling methodologies tailored for heterogeneous graphs. For a more detailed and comprehensive survey of sampling methods in GNNs, readers are recommended to consult Liu et al. <cit.>. While in this section, we mainly focus on the different approaches to perform variance reduction in various sampling methods that can guarantee the quality of sampling in GNNs from the theoretical perspective. Minimizing the variance is a widely-used optimization objective for an unbiased sampler since the sampling methods in GNNs only select part of nodes, which inevitably introduce variance and bias that can cause performance degradation and low convergence speed. Historical activation.The historical activation method leverages past node embeddings as an approximation of true neighbor node embeddings for variance reduction in aggregation. This approach sidesteps the need for recursively computing neighbor node embeddings, thereby enhancing training efficiency. Chen et al. <cit.> introduce VR-GNN, which initially employs this technique in node-wise sampling as a control variate. They subsequently conduct a variance analysis to compare the variances of the control variate with neighbor sampling techniques utilized in GraphSAGE <cit.>. 
The expression of these two variances is given in the following equation: Var[CV_u^(l)] = C∑_v_1∈𝒩(u)∑_v_2∈𝒩(u)(𝐀_uv_1Δ x_v_1^(l)-𝐀_uv_2Δ x_v_2^(l))^2 Var[NS_u^(l)] = C ∑_v_1∈𝒩(u)∑_v_2∈𝒩(u)(𝐀_uv_1x_v_1^(l)-𝐀_uv_2x_v_2^(l))^2 where Var[CV_u^(l)] and Var[NS_u^(l)] denote the variance of the control variate and of neighbor sampling respectively, C is a constant that depends on the input graph, and Δ x_v^(l) = x_v^(l) - x̄_v^(l) denotes the difference between the real activation x_v^(l) and the historical activation x̄_v^(l), which is small when the parameters of the GNN do not change much. Since Δ x_v^(l) is often significantly smaller than x_v^(l), the historical activation does help reduce the variance. Besides, the authors prove that training GNNs with the control variate by SGD converges to a local minimum with a convergence rate of 𝒪(1/√(T)) regardless of sampling, which indicates that the variance of the approximation can eventually vanish. Similarly, Cong et al. <cit.> utilize historical node embeddings from the preceding layer to diminish variance during forward propagation, employing a layer-wise sampling approach. While the historical activation method proves effective in variance reduction, it necessitates extra memory usage, particularly in node-wise sampling techniques where each node's receptive field continues to grow exponentially. Importance sampling. Importance sampling is a commonly employed technique for variance reduction, used to approximate the expectation over a distribution p with respect to a distribution q in a Monte Carlo estimator by reweighting samples. This process can be formulated as follows. 𝔼_p[f(x)]=∫ f(x)p(x)dx=𝔼_q[f(x)(p(x)/q(x))], where q(x) and p(x) have the same support and p(x)/q(x) is called the importance function. Chen et al. <cit.> propose FastGCN, which first utilizes importance sampling in GNNs to reduce the variance by viewing the forward propagation of a GNN as an integral transform of the embedding function under a sampling distribution, which is given in the following equation: x_v^(l+1)=ϕ(∫𝐀_vux_u^(l)𝐖^(l)dP(u)), where P is the sampling distribution. Then the authors sample a fixed number of nodes in each layer independently to approximate the embedding function under a specific importance distribution. The total average of the sample approximation is given in the following equation. G_st = 1/st∑_i=1^s∑_j=1^t( 𝐀_v_iu_j x_u_j^'(dP(u)/dQ(u)|_u_j)), where s,t are the numbers of sampled nodes in the l+1-th and l-th layer respectively, v_i,u_j are the sampled nodes in the l+1-th and l-th layer respectively, and x_u^' equals x_u^(l)𝐖^(l). To minimize the variance of Equation <ref>, the optimal distribution Q should satisfy dQ(u)=b_u|x_u^'|dP(u)/∫ b_u|x_u^'|dP(u), where b_u equals [∫𝐀_vu^2dP(v)]^1/2. However, |x_u^'| is expensive to compute since it changes constantly during training; thus the authors instead use a sampling probability proportional to ||𝐀_:,u||^2, which is fixed during training. Different from Chen's work, which samples nodes in each layer independently, Huang et al. <cit.> adopt a top-down layer-wise sampling manner in which the nodes sampled in the lower layer rely on the nodes sampled in the upper layer, which takes the connection between layers into account.
The approximation of x_v_i^(l+1) using adaptive layer-wise importance sampling is given in the following equation: ϕ(N(v_i)𝔼_q(u_j|v_1,…,v_n)[p(u_j|v_i)/q(u_j|v_1,…,v_n)]x_u_j^(l)𝐖^(l)), where N(v_i) equals ∑_j=1^n𝐀_v_iu_j, p(u_j|v_i) is the probability of sampling u_j given v_i, which equals 𝐀_v_iu_j/N(v_i), and q(u_j|v_1,v_2,…,v_n) is the probability of sampling u_j when all the nodes in the upper layer are given. Then Equation <ref> can be approximated via a Monte-Carlo estimator and the variance is given as follows. Var = 1/n^'𝔼_q(u_j)[(p(u_j|v_i)|x_u_j^(l)|-μ_q(v_i)q(u_j))^2/q^2(u_j)], where q(u_j) denotes q(u_j|v_1,v_2,…,v_n) and μ_q(v_i) is the expectation of the estimator. The optimal distribution that minimizes the variance is given in the following equation: q^*(u_j) = p(u_j|v_i)|x_u_j^(l)|/∑_j=1^np(u_j|v_i)|x_u_j^(l)|. However, computing the optimal distribution in Equation <ref> is impractical since x_u_j^(l) is inaccessible in a top-down sampling manner. Therefore, the authors replace x_u_j^(l) with a learnable function g(x_u_j) and further add the variance to the loss function to explicitly reduce it during training. Alternatively, Liu et al. <cit.> propose a bandit sampling method that optimizes the variance from the perspective of bandits to handle the uncomputable term. The approximation ratio of the variance of the obtained bandit sampler is proved to approach 3 asymptotically. Zou et al. <cit.> also consider layer-dependent importance sampling in a top-down manner, while the sampling is only performed among the union of the neighborhoods of nodes in the upper layer to maintain the density of the adjacency matrix between layers. Instead of sampling nodes in each layer, Zeng et al. <cit.> sample subgraphs of the original training graph to build mini-batches for training GNNs efficiently. To be specific, the subgraphs are constructed via samplers for nodes and edges respectively, with probability distributions derived by importance sampling aiming to minimize the variance. Notably, the optimal sampling probability of an edge e=(u,v) is given by p_e = m||∑_l𝐛_e^(l)||/∑_e^'||∑_l𝐛_e^'^(l)||, where 𝐛_e^(l)=𝐀_vux_u^(l-1)+𝐀_uvx_v^(l-1) and m is the edge budget. To alleviate the computational burden, a simplified probability distribution proportional to 1/deg(u)+1/deg(v) is used, which only depends on the graph topology and can be explained by the intuition that two connected nodes with few neighbors tend to have a large influence on each other and should be sampled into the same subgraph with high probability. Different from previous works, Cong et al. <cit.> apply gradient-based adaptive importance sampling to reduce the stochastic gradient variance during the training of GNNs optimized by SGD. To be specific, the authors utilize the estimated gradient norm to calculate the importance distribution and sample nodes with minimal variance. §.§ Discussion In this section, we have reviewed the theory of the optimization of GNNs in terms of the dynamics of gradient descent in training GNNs, the training tricks, as well as the sampling methods for training GNNs efficiently. Besides the theoretical results, we also introduce some practical methods developed to improve the training process of GNNs. Although GNNs can be properly trained currently, many challenges in delving into the theory of optimization still remain.
One major challenge is to study the dynamics of gradient descent for training general GNNs, since existing works either consider shallow GNNs or analyze the optimization of deep GNNs in the linearized or NTK regime, which is quite different from realistic GNN architectures and the actual training behavior of gradient descent. Furthermore, characterizing the dynamics of other optimization algorithms, such as SGD and Adam, for training GNNs is worth exploring. Besides the GNN architectures and optimization algorithms, it is necessary to incorporate the structural information of graphs into the analysis of the training process of GNNs. It remains a mystery how exactly the graph structure influences the loss landscape of GNNs. For training tricks in GNNs, additional training tricks can be taken into account, and a more careful convergence analysis that derives concrete convergence rates for different methods is desirable. In addition, since the mentioned training tricks are often utilized jointly during the training of GNNs, it is meaningful to analyze the interplay of different training tricks or to study the effectiveness of applying one individual training trick separately, in order to provide a more solid theoretical guarantee. Besides, the connection between optimization and the other two perspectives, namely the generalization ability and the expressive power of GNNs, is not well understood. From this point, some works have studied the generalization bound of GNNs under specific optimization algorithms <cit.> and the extra expressive power obtained from some training tricks such as skip connections and normalization <cit.>. § GNNS FOR LONG-RANGE AND HIGH-ORDER INTERACTIONS In practical applications, it is uncommon for researchers to implement GNNs with more than 4 layers, in contrast to CNNs, where a larger number of layers is typically used to capture long-range dependence. This discrepancy is primarily attributed to the challenges faced by deep GNNs, such as over-smoothing <cit.> and over-squashing <cit.>, which result in considerable performance degradation. These issues have attracted significant attention in the graph learning community, as they impede the effective deployment of GNN architectures. Thus, this section will delve into the theoretical background of the above-mentioned phenomena and the solutions proposed to address them. §.§ Over-smoothing We first introduce the definition of over-smoothing: the features of nodes in a graph tend to converge and lose their distinctions as the number of layers in a GNN increases <cit.>. This convergence diminishes the informative content carried by the nodes' features, resulting in a significant decline in GNN performance. This phenomenon contradicts the common belief that increasing the number of layers continuously enhances performance; understanding it provides insights into the underlying mechanisms of GNNs and motivates strategies to overcome the issue and build robust GNN models. §.§.§ Measurements Currently, the main measurements for quantifying over-smoothing are the Dirichlet energy <cit.> and MAD <cit.>, which are based on the similarity between different nodes. The definitions of the two measurements in the l-th layer are given as follows. Dirichlet energy on graphs: ℰ(𝐗^(l))=1/N∑_i=1^N∑_j∈𝒩_i||x_i^(l)-x_j^(l)||_2^2. Mean average distance (MAD) on graphs: μ(𝐗^(l))=1/N∑_i=1^N∑_j∈𝒩_i(1-x_i^(l)^⊤ x_j^(l)/||x_i^(l)|| ||x_j^(l)||).
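To make these two measurements concrete, the short NumPy sketch below evaluates both quantities on a toy cycle graph while features are propagated by the parameter-free symmetric-normalized adjacency; the toy graph, feature dimension, and function names are illustrative choices of ours, not taken from the cited works. Both quantities shrink towards zero as the number of propagation steps grows, which is exactly the over-smoothing behavior discussed next.

```python
# Dirichlet energy and MAD on a toy graph under repeated normalized propagation.
import numpy as np

def dirichlet_energy(X, A):
    """E(X) = 1/N * sum_i sum_{j in N(i)} ||x_i - x_j||^2."""
    N = X.shape[0]
    diff = X[:, None, :] - X[None, :, :]      # (N, N, d) pairwise differences
    sq = (diff ** 2).sum(-1)                  # squared Euclidean distances
    return (A * sq).sum() / N                 # keep only neighboring pairs

def mean_average_distance(X, A):
    """MAD(X) = 1/N * sum_i sum_{j in N(i)} (1 - cos(x_i, x_j))."""
    N = X.shape[0]
    norms = np.linalg.norm(X, axis=1, keepdims=True) + 1e-12
    cos = (X @ X.T) / (norms @ norms.T)
    return (A * (1.0 - cos)).sum() / N

# Toy example: a 4-node cycle with random features.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(4)                         # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(1)))
P = D_inv_sqrt @ A_hat @ D_inv_sqrt           # symmetric normalized adjacency

X = np.random.randn(4, 8)
for layer in range(10):
    print(layer, dirichlet_energy(X, A), mean_average_distance(X, A))
    X = P @ X                                 # linear propagation, no weights
```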
According to the above definitions, the Dirichlet energy can be interpreted as the average norm of the gradients, while the MAD is based on the average cosine distance between neighboring nodes. A recent work by Rusch et al. <cit.> introduces a refined definition of over-smoothing that is more rigorous and manageable. According to their definition, over-smoothing is characterized as the exponential decline of node similarity towards zero as the number of layers increases, and the Dirichlet energy emerges as a superior metric to MAD, which has been empirically supported. §.§.§ Theory The cause of over-smoothing has been illustrated by studying the role of the graph convolution layer in transforming node features. Li et al. <cit.> provide an explanation of over-smoothing for the first time by demonstrating that the graph convolution is essentially a special form of Laplacian smoothing. To be specific, the graph convolution can be obtained by replacing the normalized Laplacian 𝐃̃^-1𝐋̃ with the symmetric normalized Laplacian 𝐃̃^-1/2𝐋̃𝐃̃^-1/2 in the standard Laplacian smoothing (𝐈-𝐃̃^-1𝐋̃)𝐗. Starting from the observation that Laplacian smoothing updates the feature of each node by aggregating the features of its neighborhood (including itself), making features within the same cluster similar, they further prove that the embeddings of nodes within a connected component will converge to the same value after applying Laplacian smoothing many times. Suppose that a graph has no bipartite component and has k connected components {C_i}_i=1^k. Let 1^(i)∈ℝ^n be the vector that indicates whether a node is in component C_i, i.e. 1^(i)_j=1 if node j is in component C_i and 0 otherwise. Then for any w∈ℝ^n and α∈(0,1], lim_m →∞(𝐈-α𝐃^-1𝐋)^m w= [1^(1), 1^(2), …, 1^(k)]θ_1, lim_m →∞(𝐈-α𝐃^-1/2𝐋𝐃^-1/2)^m w= 𝐃^-1/2[1^(1), 1^(2), …, 1^(k)]θ_2, where θ_1∈ℝ^k, θ_2∈ℝ^k. Theorem <ref> can be directly applied to graphs with self-loops that have no bipartite component, which implies that the features do converge to linear combinations of {1^(i)}_i=1^k or {𝐃^-1/21^(i)}_i=1^k, thus becoming indistinguishable and causing over-smoothing. Different from Li's work, Xu et al. <cit.> connect the influence distribution that a node spreads through the message passing scheme to a random walk <cit.>. Since the random walk distribution ultimately converges to its limit distribution, which only depends on the graph structure, the representations of different nodes after multiple GCN layers carry little local information, resulting in over-smoothing. On the other hand, Oono and Suzuki <cit.> represent the propagation of GCN as a dynamical system to characterize the asymptotic behavior of GCNs as the number of layers goes to infinity. The authors provide the following theorem to elaborate on over-smoothing. Consider an undirected graph augmented with self-loops. For any input feature 𝐗^(0) of the graph, the output 𝐗^(l) of an l-layer non-linear GCN activated by ReLU satisfies d_ℳ(𝐗^(l))≤(sλ)^l d_ℳ(𝐗^(0)), where s is an upper bound on the maximum singular value of the weight matrices, λ is the largest absolute value of the non-one eigenvalues of the augmented normalized adjacency matrix, and d_ℳ(𝐗) is the distance between 𝐗 and an invariant space that corresponds to the eigenspace associated with the eigenvalue 1. In particular, d_ℳ(𝐗^(l)) exponentially converges to 0 if sλ<1.
Theorem <ref> shows that the output of a GCN converges exponentially to an invariant space if the weights satisfy the condition determined by the augmented normalized adjacency matrix (or augmented normalized Laplacian). Since the invariant space is the subspace spanned by the eigenvectors corresponding to eigenvalue 1, which only carry information about the connected components and node degrees, the GCN retains essentially no information beyond node degrees and connected components, and thus suffers from over-smoothing. It is surprising to find that the result is essentially identical to that for linearized GNNs <cit.>, which indicates that the non-linear ReLU activation is largely irrelevant to over-smoothing. Besides, from the perspective of graph signal processing, the invariant space corresponds to the lowest frequency of the graph Laplacian, which agrees with the statement of NT <cit.> that the graph convolution layer is essentially a low-pass filter. §.§.§ Solutions In this subsection, we briefly introduce the methods proposed for alleviating and overcoming over-smoothing. Normalization and regularization. Normalization and regularization methods are directly based on the definition and measurements of over-smoothing <cit.>. Both impose additional constraints derived from the measurements on GNNs: normalization methods achieve the goal by normalizing the feature embeddings, while regularization methods enforce the constraints via regularization terms. For example, NodeNorm <cit.> normalizes the features to keep the total pairwise squared distance (TPSD) constant across layers, while Zhou et al. <cit.> regularize the Dirichlet energy to stay within a suitable range for each layer. Skip connection. Skip connections <cit.> have been found effective in solving the problems of gradient explosion and gradient vanishing in deep CNNs. Motivated by this, some works add skip connections to GNNs in an attempt to alleviate over-smoothing in deep GNNs <cit.>. They preserve a fraction of the initial and intermediate features in the final embedding during neighborhood aggregation. For example, JKNet <cit.> combines all previous embeddings in the last layer, while GCNII <cit.> adds a skip connection in each layer. Intuitively, the features in shallow layers are more distinguishable, so the method is reasonable. Physics inspired equations. Recently, some works resort to ordinary differential equations (ODEs) or partial differential equations (PDEs) derived from physics to cope with over-smoothing <cit.>. To be specific, they utilize physical equations such as diffusion and gradient flows to represent dynamics on graphs that differ from the original message passing scheme, and then discretize the equations to generate novel GNN architectures that are better able to resist over-smoothing. For example, Rusch et al. <cit.> propose GraphCON based on graph-coupled oscillators and prove that the zero-Dirichlet-energy steady states are not stable in the system, which prevents over-smoothing. Graph rewiring. Graph rewiring serves as a method to mitigate over-smoothing by adjusting graph topologies. DropEdge <cit.> is a straightforward technique for graph rewiring. By randomly removing edges and decreasing node connections, this approach alleviates over-smoothing effects. Additionally, the authors offer a theoretical insight into the technique, demonstrating that DropEdge can decelerate the convergence of GNNs towards the constant space outlined in the previous section, thereby reducing information loss.
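As a small illustration of this kind of rewiring, the sketch below implements DropEdge-style random edge removal on a dense adjacency matrix; the dense representation, the drop rate, and the per-epoch resampling loop are simplifying assumptions of ours rather than the reference implementation.

```python
# Minimal DropEdge-style rewiring on a dense 0/1 adjacency matrix.
import numpy as np

def drop_edge(A, p, seed=None):
    """Randomly remove a fraction p of the undirected edges."""
    rng = np.random.default_rng(seed)
    A = A.copy()
    r, c = np.triu_indices_from(A, k=1)     # each undirected edge counted once
    mask = A[r, c] > 0
    er, ec = r[mask], c[mask]
    n_drop = int(round(p * er.size))
    idx = rng.choice(er.size, size=n_drop, replace=False)
    A[er[idx], ec[idx]] = 0                 # remove both directions of each edge
    A[ec[idx], er[idx]] = 0
    return A

# A fresh rewired graph is typically resampled at every training epoch.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
for epoch in range(3):
    A_drop = drop_edge(A, p=0.3, seed=epoch)
    A_hat = A_drop + np.eye(len(A))         # keep self-loops
    d = A_hat.sum(1)
    P = A_hat / np.sqrt(np.outer(d, d))     # renormalized propagation matrix
    # ... run one training epoch of the GNN with propagation matrix P ...
```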
Besides, Chen et al. <cit.> and Hasanzadeh et al. <cit.> propose strategies to modify the graph topology adaptively. §.§ Over-squashing In contrast to over-smoothing, which hinders the performance of deep GNNs, over-squashing presents challenges in effectively learning long-range interactions. Specifically, as the number of GNN layers increases, the receptive field of a node expands exponentially. This leads to an excessive amount of information being compressed into a fixed-length feature vector, resulting in an information bottleneck known as over-squashing. Consequently, during message passing, long-range information gets distorted, impacting the performance of GNNs on tasks involving large graphs and long-range dependencies. Measurements. One direct measurement to quantify over-squashing is the Jacobian ∂ x_j^(r)/∂ x_i, which analyzes the impact of the input feature of node i on the feature of node j at distance r <cit.>. Topping et al. <cit.> further prove an upper bound on the Jacobian in the following equation: |∂ x_j^(r+1)/∂ x_i|≤ (α_1α_2)^r+1(𝐀^r+1)_ji, where α_1 and α_2 are upper bounds on the gradient norms of the update function and the aggregation function, respectively. In Equation <ref>, the norm of the Jacobian is controlled by the corresponding entry of the power of the augmented normalized adjacency matrix, which suggests that the information from distant nodes decays exponentially during message passing, thus resulting in over-squashing. Later, the result is generalized to any pair of nodes and the norm of the Jacobian is bounded more precisely <cit.>. Solutions. The above discussion highlights the importance of amplifying the information flow from distant nodes to alleviate over-squashing in graphs, and the aforementioned graph rewiring is an effective way to achieve this goal. There are two main types of graph rewiring methods: spatial graph rewiring <cit.> and spectral graph rewiring <cit.>. Spatial graph rewiring usually connects one node to another node within its receptive field, while spectral graph rewiring often optimizes some metric that measures the connectivity of the graph. Therefore, spatial graph rewiring methods modify the edges in a more local manner than spectral graph rewiring methods. For example, Topping et al. <cit.> propose the Balanced Forman curvature of edges to quantify the over-squashing between nodes at distance 2 and further address the negatively curved edges that are susceptible to over-squashing. On the other hand, the quantities that spectral graph rewiring methods optimize include the spectral gap <cit.>, commute time <cit.> and total effective resistance <cit.>, which are defined globally and closely related to the well-known Cheeger constant measuring the connectedness of the whole graph. It is also worth noting that the graph transformer <cit.> can be viewed as an extreme case of spatial graph rewiring since it considers fully-connected graphs. However, the interplay between these two categories and the theoretical superiority of one over the other remain unexplored, making this a promising direction for future research. Besides the graph rewiring methods, there are some other methods to capture the long-range dependencies between nodes and thus alleviate over-squashing. One effective approach is the physics-inspired GNNs mentioned in the over-smoothing section, which change the dynamics of message passing <cit.>. Since these methods essentially change the information flow and alter the connectivity of the graph, they actually perform graph rewiring implicitly.
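The Jacobian bound above can be made tangible with a few lines of NumPy: for a path graph with self-loops and symmetric normalization, and taking α_1 = α_2 = 1 for simplicity (an assumption of ours), the entry of 𝐀^r between the two endpoints, and hence the sensitivity bound, shrinks rapidly as the distance r grows.

```python
# Decay of the bound entry (A_hat^r)_{ji} between the endpoints of a path graph,
# a toy illustration of over-squashing; all modelling choices here are ours.
import numpy as np

for n in range(3, 13):                       # path graphs of increasing length
    A = np.zeros((n, n))
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = 1.0
    A_hat = A + np.eye(n)                    # add self-loops
    d = A_hat.sum(1)
    P = A_hat / np.sqrt(np.outer(d, d))      # augmented normalized adjacency
    entry = np.linalg.matrix_power(P, n - 1)[n - 1, 0]
    print(f"distance {n - 1:2d}: bound entry {entry:.2e}")
    # the entry decays roughly geometrically with the distance between the endpoints
```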
In addition, other methods handle the over-squashing problem from their own perspectives, such as graph imbalance learning <cit.>, expressive power <cit.> and training-free reservoir computing models <cit.>. For a more detailed and comprehensive survey of over-squashing, we highly recommend readers refer to Akansha et al. <cit.> and Shi et al. <cit.>. §.§ Discussion In this section, we have reviewed the theory of and solutions to the over-smoothing and over-squashing phenomena in GNNs, which prevent GNN architectures from going deeper and thus from capturing high-order and long-range interactions between nodes. Next, we outline some open questions for over-smoothing and over-squashing. Trade-offs between over-smoothing and over-squashing. In recent years, an increasing number of works based on graph rewiring focus on handling both over-smoothing and over-squashing <cit.>, and the trade-off between over-smoothing and over-squashing has been emphasized. Intuitively, the graph rewiring methods that alleviate over-squashing often introduce additional edges into the original graph, which exerts a smoothing effect on the graph and thus poses a risk of over-smoothing. To provide a more theoretical explanation, the trade-off can be analyzed from the perspectives of spatial and spectral methods. Nguyen et al. <cit.> utilize Ollivier-Ricci curvature to establish a geometric connection between over-smoothing and over-squashing in a unified framework. To be specific, positive graph curvature is associated with over-smoothing, while negative graph curvature is related to over-squashing. Different from Nguyen's analysis, which is based on the theory of spatial methods, Giraldo et al. <cit.> study the spectral gap of linearized GNNs and reveal the relationship between over-smoothing and over-squashing. Besides, it is noted that the trade-off between over-smoothing and over-squashing can be viewed as a compromise between the locality and the connectivity of the graph, which provides a novel perspective to explore the interplay of spatial and spectral graph rewiring methods. Connection with heterophily. The heterophily problem has drawn much attention recently, since the majority of GNNs, which follow the homophily assumption that nodes with similar features or identical labels tend to connect to each other, fail on heterophilic graphs. Although heterophily and over-smoothing seem to be two independent problems, some works aimed at solving over-smoothing also perform well on heterophilic graphs empirically <cit.>, and vice versa <cit.>. Therefore, it remains an open question whether the two problems can be solved simultaneously from a theoretical perspective. Yan et al. <cit.> establish the connection between the over-smoothing and heterophily problems for the first time by studying the behavior of linear SGC on random graphs. Specifically, they analyze the change of node representations after message passing in terms of two quantities called the relative degree and the homophily level, respectively. Parallel to Yan's work, Bodnar et al. <cit.> utilize a heat diffusion PDE to explain the susceptibility of GNNs to over-smoothing and heterophily. Furthermore, the authors analyze the problems more precisely from a topological perspective based on (cellular) sheaf theory <cit.>. Besides, it is noted that both works prove that GNNs with signed messages can handle the two problems simultaneously. § CONCLUSION This survey attempts to furnish a comprehensive overview of theoretical foundations and advancements in graph learning.
Given its status as a vibrant research domain intertwined with a diverse array of mathematical linkages, it proves unfeasible to encompass all existing works within the scope of this study. The selection covers three main topics, namely expressive power, generalization ability, and optimization techniques. Additionally, the long-range and high-order interactions of GNNs, which have recently become a popular topic, are also elaborated. In each section, we introduce the necessary preliminaries, systematically elaborate the theoretical findings, and discuss the limitations as well as future directions. In particular, our approach tailors the exposition of each topic with respect to its developmental vein and orientation. The expressive power of GNNs often correlates with their ability to distinguish non-isomorphic graphs. We first delve into the theory that connects GNNs with the extensively studied Weisfeiler-Lehman (WL) algorithm, exploring various research frontiers like strategies to transcend the constraints of the 1-WL and discussing novel architectures such as graph transformers and geometric GNNs. Regarding generalization, the literature is structured around the tools utilized for deriving the generalization bound, elucidating pivotal findings and insights, outlining constraints, and highlighting emerging patterns. In the realm of algorithm optimization processes, we first discuss the gradient dynamics of GNNs and then present theoretical analyses of several training tricks as well as sampling methods. Despite significant research efforts and notable strides forward, persistent challenges remain in the theoretical analysis of graph models, notably stemming from the intricate nature of inter-node relationships and the convolutional layers or units inherent to models for graph-structured data. Analytical simplifications applied to either graph properties or model architectures lead to findings that may lack practical relevance across diverse real-world applications. While expressiveness, generalization, and optimization are conventionally addressed in isolation, there is increasing interest in exploring their interplay to enhance the efficacy of graph neural networks. Key questions arise, such as how optimization strategies like gradient descent and skip connections influence both generalization and expressive power. Additionally, understanding the interconnections between generalization capacity and expressive power, particularly in the context of high-order GNNs <cit.>, remains a compelling area for further investigation. Besides, since real-world graphs have various forms and can be extremely complex, theoretical results that are able to reflect the intricate structure and intrinsic properties of graphs are expected. Against the backdrop of groundbreaking advancements in foundational models of computer vision and natural language processing, establishing theoretical frameworks for leveraging large language models on graphs and potentially formulating graph-based foundational models emerges as a critical research frontier. By embracing the challenges and opportunities that lie ahead, we believe more research endeavors will bring continued advancements and transformative impact to the graph learning community. § ACKNOWLEDGMENT [ < g r a p h i c s > ]Yu Huang received his B.S. degree in Data Science and Big Data Technology from the University of Science and Technology of China (USTC), China, in 2022. He is currently pursuing a Ph.D.
degree in the Department of Computer Science and Technology at University of Science and Technology of China(USTC), China. His research interests focus on graph representation learning and deep learning theory. [ < g r a p h i c s > ]Min Zhou (Member, IEEE) is currently a Principal Research Engineer of Huawei, Shenzhen, China. She received the B.S. degree in Automation from the University of Science and Technology of China, and the Ph.D. degree from Industrial Systems Engineering and Management Department, National University of Singapore, respectively. Her interests include pattern mining and machine learning, and their applications in sequence and graph data. Her several works related to graph learning and mining were published at top conferences, including KDD, ICML, NeuRIPS, WWW, ICDE, and SIGIR. [ < g r a p h i c s > ]Menglin Yang earned his Ph.D. from The Chinese University of Hong Kong, supervised by Prof. Irwin King. His research interests include hyperbolic graph learning and machine learning. Besides, he also focuse on real-world applications, including recommender systems, knowledge graph, drug processing. His several works related to hyperbolic graph representation learning were published at recent top conferences, including KDD, WSDM, WWW. [ < g r a p h i c s > ]Zhen Wang received the Ph.D. degree from the Department of Computer Science at Sun Yatsen University (SYSU) in 2017. He is an associate professor at Sun Yat-sen University. Before that he worked as a senior algorithmic expert at DAMO Academy. He has published several papers at peer-reviewed conferences, such as ICML and KDD, and has received research awards, such as KDD ADS Track Best Paper Award. His research interests include graph representation learning and federated learning. [ < g r a p h i c s > ]Muhan Zhang (Member, IEEE) received his Ph.D. degree in Computer Science from Washington University in St. Louis in 2019. He is now an assistant professor and Boya Young Fellow of Peking University. As a pioneer researcher of Graph Neural Networks, he is known for inventing several classic GNN algorithms such as SEAL for link prediction and DGCNN for graph classification. He regularly serves as an area chair for NeurIPS, ICML, and ICLR. He is a reviewer for top journals such as JMLR, TPAMI, TNNLS, and TKDE. [ < g r a p h i c s > ]Jie Wang Jie Wang received his BS degree from University of Science and Technology of China (USTC), in 2005, and his PhD degree from Florida State University, in 2011, respectively. He was a Research Assistant Professor with University of Michigan from 2015 to 2017. He is currently a Professor with the School of Information Science and Technology, the vice dean of the School of the Gifted Young, and the Deputy Director of the National Key Laboratory of "Brain-inspired Intelligent Perception and Cognition”, at USTC. He serves as an Associate Editor of IEEE Transactions on Pattern Analysis and Machine Intelligence, and as Area Chairs of major data mining and machine learning conferences, including ICML, NeurIPS, and SIGKDD. His research is dedicated to machine learning theory, algorithms, and applications, with special interests in graph machine learning, large language models, AI for science, AI for chips, learn to optimize, and reinforcement learning. [ < g r a p h i c s > ]Xie Hong is currently a research professor at School of Computer Science and Technology, University of Science and Technology of China (USTC). He is a member of Prof. Enhong Chen's research group. He received Ph.D. 
degree in the Department of Computer Science and Engineering at The Chinese University of Hong Kong (CUHK) in 2015, proudly under the supervision of Prof. John C.S. Lui. He received his B.Eng. degree from the School of Computer Science and Technology at USTC in 2010. Hong Xie was a postdoctoral research fellow at the Department of Computer Science and Engineering, CUHK, hosted by Prof. John C.S. Lui, and a postdoctoral research fellow at the School of Computing, National University of Singapore, hosted by Prof. Richard T.B. Ma. He was also a faculty member at Chongqing University. He is a member of CCF, IEEE, and ACM. [ < g r a p h i c s > ]Hao Wang is currently an associate researcher at the School of Computer Science and Technology, USTC. His main research interests include data mining, representation learning, network embedding and recommender systems. He has published several papers in refereed journals and conference proceedings, such as IEEE Transactions on Knowledge and Data Engineering, ACM Transactions on Information Systems, NeurIPS, and AAAI. [ < g r a p h i c s > ]Defu Lian received the PhD degree in computer science from the University of Science and Technology of China (USTC), Hefei, China, in 2014. He is currently a professor with the School of Computer Science and Technology, USTC. He has published prolifically in refereed journals and conference proceedings, such as IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE Transactions on Knowledge and Data Engineering, ACM Transactions on Information Systems, the Conference on Neural Information Processing Systems, the IEEE International Conference on Data Mining (ICDM), the ACM SIGKDD Conference on Knowledge Discovery and Data Mining, the ACM International Conference on Research and Development in Information Retrieval, the International Joint Conferences on Artificial Intelligence, and the ACM International World Wide Web Conferences. His current research interests include spatial data mining, recommender systems, and learning to hash. [ < g r a p h i c s > ]Enhong Chen (Fellow, IEEE) received the PhD degree in computer science from the University of Science and Technology of China (USTC), Hefei, China, in 1996. He is currently a professor and the vice dean of the School of Computer Science, USTC. He has published more than 200 papers in refereed conferences and journals, including the IEEE Transactions on Knowledge and Data Engineering, IEEE Transactions on Mobile Computing, ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), IEEE International Conference on Data Mining (ICDM), Conference on Neural Information Processing Systems, and ACM International Conference on Information and Knowledge Management. His current research interests include data mining and machine learning, social network analysis, and recommender systems. He was a recipient of the Best Application Paper Award at KDD 2008, the Best Research Paper Award at ICDM 2011, and the Best of SIAM International Conference on Data Mining (SDM) 2015. He was on the program committees of numerous conferences, including KDD, ICDM, and SDM.
http://arxiv.org/abs/2407.01827v1
20240701215831
Cubic equations with 2 Roots in the interval $[-1, 1]$
[ "Helmut Ruhland" ]
math.NA
[ "math.NA", "cs.NA", "12D10, 26C10" ]
Cubic equations with 2 Roots in an interval] Cubic equations with 2 Roots in the interval [-1, 1] H. Ruhland]Helmut Ruhland Santa Fé, La Habana, Cuba helmut.ruhland50@web.de [2020]Primary 12D10; Secondary 26C10 § ABSTRACT The conditions for cubic equations, to have 3 real roots and 2 of the roots lie in the closed interval [-1, 1] are given. These conditions are visualized. This question arises in physics in e.g. the theory of tops. [ [ July 8, 2024 ================ § INTRODUCTION Cubic polynomials are ubiquitous in physics. I cite here just some examples from the introduction in <cit.>: "…The applications of cubic and quartic equations in all branches of science are vast. …There are well over 200 real gas equations, many of which are also cubic. The elastic waves propagating on the surface of solids, the so-called Rayleigh waves …The Hodgkin-Huxley model in mathematical neuroscience encounters a quartic …In general relativity, through the d'Inverno and Russel-Clark algorithms, the Petrov classification of the Weyl conformal curvature …" The question, if a cubic polynomial has only real roots, can be decided using the discriminant condition. This question can be extended to the question: When has a cubic polynomial 0 … 3 roots in a given open or closed interval? This question arises e.g. in physics in the theory of tops, i.e. rigid bodies that move under the influence of gravity around a fixed point with 3 degrees of freedom (the 3 Euler angles). Here in the case of nutation the upper and lower limits for cos (θ) are given by 2 real roots of a cubic in the closed interval [-1, 1]. The 3^rd real root lies outside this interval, see appendix <ref>. Description of the problem treated in this article: Determine the conditions under which exactly 2 roots of a monic cubic polynomial x^3 + a x^2 + b x + c lie in the closed interval [-1, 1]. § THE CONDITIONS FOR A CUBIC POLYNOMIAL WITH 2 ROOTS IN [-1, 1] Let P = x^3 + a x^2 + b x + c be a monic cubic polynomial. Assume the discriminant condition D_3 ≥ 0 for 3 real roots is fulfilled. Calculate these 5 quantities: A = a + b + c + 1 B = a - b + c - 1 A_T = 4 (c + 1) B_T = 4 (c - 1) E = (a - c) c - b + 1 Distinguish 3 cases depending on c: 1. 𝐜 < 0 Replace a → - a, c → - c, this is the map M : x → - x. Now c > 0 and we get one of the following cases. 2. 0 ≤ 𝐜 ≤ 1 ( A < 0 and B ≤ 0 ) or ( A ≥ 0 and B > 0 ) or ( A > A_T and B = 0 ) or ( A = 0 and B < B_T ) 3. 𝐜 > 1 ( A ≤ 0 and B ≤ 0 ) or ( A ≥ 0 and B ≥ 0 and E ≥ 0 ) § THE DISCRIMINANT SURFACE D_3 = 0 §.§ The 2 components of the discriminant surface In the a-b plane the discriminant surface consists of a parabola for c = 0, it consists of 2 components for c 0: For c > 0 a smooth component at the left of P_b and below of P_a, the 2 parabolas in the following figure <ref>. The parabolas intersect at (0, 0) and have perpendicular axes. The second component, smooth with the exception of a cusp is located inside the 2 parabolas. All cusps lie on the parabola P_C: a_C = 3 c^1/3 b_C = 3 c^2/3 P_C : a^2 - 3 b The 2 parabolas are defined by: P_a = b^2 - 4 a c P_b = a^2 - 4 b The 2 components approach in the limit a, b → + ∞ to the parabolas. To see this e.g. for P_b replace b in the equation for D_3 by a^2 / 4. The two terms with a^6 cancel. The remaining terms are of size O (a^4). So for b → + ∞ the components 1 and 2 of D_3 approach to the parabola P_b. Component 1 from outside, component 2 from inside. For P_a replace a in the equation for D_3 by b^2 / (4 c) ... 
< g r a p h i c s > figurec = 1, the 2 components of D_3 = 0, the 2 parabola P_a, P_b The cusp is located in the dark grey shaded lens. The cusps for all c lie on the red parabola P_C. The parabola P_C : a^2 - 3 b also shows up in the discriminant of the differentiated cubic D_2 = 4 (a^2 - 3 b) = - 12 b^* with b^* the second coefficient in the depressed cubic x^3 + b^* x + c^*. For 3 real roots besides D_3 ≥ 0 this D_2 has to be > 0. Figure <ref> visualizes that D_3 ≥ 0 already implies D_2 > 0. §.§ The intersection of the planes A and B with the discriminant surface Define the following 2 planes: A = a + b + c + 1 B = a - b + c - 1 The planes A = 0 and B = 0 represent polynomials with a root +1 or -1 A_T = 4 (c + 1) B_T = 4 (c - 1) A_I 1/2 = 2 (c + 1 ± 2 √(c)) For c ≥ 0: The discriminant D_3 intersects with the plane B = 0 in 2 lines A = A_I 1/2 (the subscript I means intersect) and D_3 is tangent to B at the line A = A_T (the subscript T means tangent or touch). D_3 is tangent to the plane A = 0 at the line B = B_T and doesn't intersect A (the two B_I 1/2 are not real). The intersection of the 2 planes A = 0 and B = 0 in a line represent the polynomials (x - 1) (x + 1) (x - c). A = 0 and B = B_T represent the polynomials (x - 1)^2 (x + c), double roots because it's an intersection (tangent) with D_3. B = 0 and A = A_T represent the polynomials (x + 1)^2 (x + c). B = 0 and A = A_C represent the polynomials (x + 1) (x - √(c))^2, double roots because it's an intersection (though not tangent) with D_3. E is a ruled surface, for fixed c a line. The lines (A = 0, B = B_T) and (B = 0, A = A_T) lie in this surface E. Used in the condition <ref>, figure <ref> and <ref> for the case c > 1 to distinguish a different number of roots in the same quadrant: E = (a - c) c - b + 1 The line A_T, B_T is defined by A / A_T + B / B_T - 1 = 0. It follows E = (A B_T + B A_T - A_T B_T) / 8. § THE CUBIC POLYNOMIALS WITH 2 ROOTS IN THE INTERVAL Colours in the following figures: the discriminant in blue (for c = 0 a parabola and a double line b = 0) numbers in red show the number of roots in the interval light green shaded open regions inside the 2 quadrants built by the lines A and B dark green open lines, the corresponding polynomial has 2 roots in the interval dark green bullet, end point of a closed line with 2 roots in the interval §.§ The case c < 0 Replace a → - a, c → - c, this is the map M : x → - x. Now c > 0 and we get one of the following cases. §.§ The case c = 0 Looking at the following figure <ref>, we get this condition: ( A < 0 and B ≤ 0 ) or ( A ≥ 0 and B > 0 ) or ( A > A_T and B = 0 ) or ( A = 0 and B < B_T ) < g r a p h i c s > figurec = 0, i.e. a root 0 and roots of the quadratic polynomial x^2 + a x + b, in red the number of roots of the quadratic in the interval [-1, 1], the green lines represent polynomials with 1 root in the interval the blue curve is the parabola for the discriminant D_2 = 0. §.§ The case 0 < c < 1 The condition is the same <ref> as for the previous case. < g r a p h i c s > figurec = 1 / 4, roots of the cubic polynomial x^3 + a x^2 + b x + c, in red the number of roots in the interval [-1, 1]. The point A = A_I 2 is the intersection of the line B with the discriminant D_3. < g r a p h i c s > figurec = 1 / 4, the cusp, roots of the cubic polynomial x^3 + a x^2 + b x + c, in red the number of roots in the interval [-1, 1]. The point A = A_I 1 is the intersection of the line B with the discriminant D_3. 
§.§ The case c = 1 The condition is the same <ref> as for the two previous cases. < g r a p h i c s > figurec = 1, roots of the cubic polynomial x^3 + a x^2 + b x + c, in red the number of roots in the interval [-1, 1]. < g r a p h i c s > figurec = 1, the cusp, roots of the cubic polynomial x^3 + a x^2 + b x + c, in red the number of roots in the interval [-1, 1]. §.§ The case c > 1 Now we get a new condition: ( A ≤ 0 and B ≤ 0 ) or ( A ≥ 0 and B ≥ 0 and E ≥ 0 ) < g r a p h i c s > figurec = 4, roots of the cubic polynomial x^3 + a x^2 + b x + c, in red the number of roots in the interval [-1, 1]. The black line E allows us to distinguish the 2 cases: 0 roots in the interval left of the line and the desired 2 roots in the interval right of the line. The line continues upwards to the following figure with the cusp and passes there through the point A = A_T. < g r a p h i c s > figurec = 4, the cusp, roots of the cubic polynomial x^3 + a x^2 + b x + c, in red the number of roots in the interval [-1, 1]. The black line E allows us to distinguish the 2 cases: 0 roots in the interval left of the line and the desired 2 roots in the interval right of the line. § THE CUBIC POLYNOMIALS WITH 0, 1 AND 3 ROOTS IN THE INTERVAL To treat these cases, the 5 quantities A, B, A_T, B_T, E defined in section <ref> are sufficient. The reader can find the conditions just by looking at the figures and using the other quadrants in the conditions. § NUMERICAL AND PLAUSIBILITY CHECKS The results in section <ref> were checked numerically with thousands of cubic polynomials. Rationals instead of floats were used for the coefficients. This made it possible for the test to also cover polynomials on the dark green lines, belonging to the equalities ≥, ≤, = in the conditions. The test was also designed to cover the cases with double roots D_3 = 0 and with c = 1. A plausibility check for the conditions: the following 2 maps generate a Kleinian 4-Group for c ≠ 0. The map M : x → - x leaves 2 roots in [-1, 1] in this interval. The map N : x → 1 / x maps a root in [-1, 1] out of this interval and the other roots from outside into the interval. So N maps the problem "2 real roots in a closed interval" into the problem "1 root in an open interval". The corresponding 2 maps in the coefficient space are: M : a → - a, b → b, c → - c N : a → b / c, b → a / c, c → 1 / c One can show how these maps N, M act on the conditions <ref> and <ref> and how they change a condition from true to false. Appendices § AN EXAMPLE FROM PHYSICS: THE LAGRANGE TOP See <cit.>, chapter 3.6 "The Heavy Symmetric Top", (3.66), (3.72) and (3.73) with the cubic polynomial. a = I_3 ω_3/I_1 b = p_Φ/I_1 α = 2 E'/I_1 β = 2 M g l/I_1 (1 - u ^ 2) (α - β u) - (b - a u) ^ 2 = 0 In the case of nutation the upper and lower limits for cos (θ) are given by 2 real roots of the cubic above in the closed interval [-1, 1]. The 3^rd real root lies outside this interval. With the conditions from section <ref> we get the following results. There should be no confusion between the a, b in <ref> and the coefficients of the monic cubic, as the meaning is clear from the context. A = - (a - b) ^ 2 / β B = - (a + b) ^ 2 / β A_T = - 4 (b ^ 2 - α - β) / β B_T = - 4 (b ^ 2 - α + β) / β When b ≠ ± a, the A, B in <ref> satisfy A, B ≠ 0 and have the same sign. So the polynomial is located in the interior of the left or right quadrants (in the light green shaded region, not on the dark green lines on the boundary in the figures <ref> and ff.). The coefficient c of the monic cubic is c = (b ^ 2 - α) / β.
Let the discriminant condition D_3 ≥ 0 be fulfilled. Case 1: if b ≠ ± a and -1 ≤ c ≤ +1, then there are 2 roots in [-1, 1]. There are 2 remaining cases: * | c | > 1, polynomials in the interior of the quadrants, the line E is needed * b = ± a, polynomials on the boundary of the quadrants with a root ± 1, the intersection points A_T, B_T are needed They are left for the reader as an exercise. [1] E. M. Prodanov, On the cubic equation with its Siebeck-Marden-Northshield triangle and the quartic equation with its tetrahedron, Journal of Computational Science 73 (2023). DOI: 10.1016/j.jocs.2023.102123 [2] L. N. Hand and J. D. Finch, Analytical Mechanics, Cambridge University Press, 1998. https://www.damtp.cam.ac.uk/user/tong/dynamics/three.pdf Chapter 3. The Motion of Rigid Bodies
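To complement the numerical checks described above, here is a small Python sketch (our own illustration, using floating-point coefficients rather than the rationals used in the paper) that samples random cubics with three real roots, counts the roots in [-1, 1] with NumPy, and compares the count against the condition of Section 2; near-boundary samples are skipped to avoid floating-point ambiguities.

```python
# Brute-force check of the 2-roots-in-[-1,1] criterion on random monic cubics
# x^3 + a x^2 + b x + c with three real roots (discriminant > 0).
import numpy as np

def predicts_two_roots(a, b, c):
    if c < 0:                                   # case 1: apply the map M (x -> -x)
        a, c = -a, -c
    A, B = a + b + c + 1, a - b + c - 1
    A_T, B_T = 4 * (c + 1), 4 * (c - 1)
    E = (a - c) * c - b + 1
    if c <= 1:                                  # case 2: 0 <= c <= 1
        return (A < 0 and B <= 0) or (A >= 0 and B > 0) \
            or (A > A_T and B == 0) or (A == 0 and B < B_T)
    return (A <= 0 and B <= 0) or (A >= 0 and B >= 0 and E >= 0)   # case 3: c > 1

rng = np.random.default_rng(0)
checked = 0
for _ in range(20000):
    a, b, c = rng.uniform(-4, 4, size=3)
    disc = 18*a*b*c - 4*a**3*c + a**2*b**2 - 4*b**3 - 27*c**2
    if disc <= 0:                               # keep only cubics with 3 real roots
        continue
    roots = np.roots([1.0, a, b, c]).real
    if np.min(np.abs(np.abs(roots) - 1.0)) < 1e-6:
        continue                                # skip near-boundary cases (float noise)
    n_inside = int(np.sum(np.abs(roots) <= 1.0))
    assert predicts_two_roots(a, b, c) == (n_inside == 2)
    checked += 1
print("criterion verified on", checked, "random cubics")
```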
http://arxiv.org/abs/2407.02842v1
20240703063918
MindBench: A Comprehensive Benchmark for Mind Map Structure Recognition and Analysis
[ "Lei Chen", "Feng Yan", "Yujie Zhong", "Shaoxiang Chen", "Zequn Jie", "Lin Ma" ]
cs.CV
[ "cs.CV", "cs.AI", "cs.CL" ]
§ ABSTRACT Multimodal Large Language Models (MLLM) have made significant progress in the field of document analysis. Despite this, existing benchmarks typically focus only on extracting text and simple layout information, neglecting the complex interactions between elements in structured documents such as mind maps and flowcharts. To address this issue, we introduce a new benchmark named MindBench, which not only includes meticulously constructed bilingual authentic or synthetic images, detailed annotations, evaluation metrics and baseline models, but also specifically designs five types of structured understanding and parsing tasks. These tasks include full parsing, partial parsing, position-related parsing, structured Visual Question Answering (VQA), and position-related VQA, covering key areas such as text recognition, spatial awareness, relationship discernment, and structured parsing. Extensive experimental results demonstrate the substantial potential and significant room for improvement in current models' ability to handle structured document information. We anticipate that the launch of MindBench will significantly advance research and application development in structured document analysis technology. MindBench is available at: <https://miasanlei.github.io/MindBench.github.io/>. ^† Corresponding authors. § INTRODUCTION The rise of Multimodal Large Language Models (MLLM) <cit.> has marked a pivotal turning point in the development of artificial intelligence technology. These models, by integrating multiple modalities such as text, vision, and speech, have demonstrated exceptional capabilities in understanding and generating complex content <cit.>, particularly in the field of document analysis <cit.>, where they significantly enhance the accuracy of information extraction and content comprehension. However, the benchmarks currently used to evaluate these models often focus primarily on extracting text <cit.> and simple layout information <cit.>, such as positional relationships in tables <cit.> and invoices <cit.>, yet frequently overlook the complex interactions between elements in structured documents. This limitation in evaluation hinders our ability to fully understand and assess models in complex real-world scenarios. In structured documents, interactions between elements are not only manifested through semantics and positioning but also heavily depend on graphical elements such as arrows and brackets. Mind maps, as a common format, effectively organize and display complex information through their unique structures, making the integration and understanding of information more intuitive and efficient. With the advancements in software like XMind and MindManager, the demand for automated processing of these documents has continually increased. Concurrently, this has introduced new challenges to technology, where the tasks involve not only accurately identifying and parsing textual information but, more crucially, recognizing the complex relationships between elements. Therefore, developing a comprehensive and practical benchmark for structured document analysis has become particularly urgent.
Such a benchmark would not only thoroughly evaluate the performance of models but also inspire the research community to delve deeper into the complex issues of structured document analysis and seek corresponding solutions. To address the shortcomings of existing benchmarks, this paper introduces a new benchmark called MindBench, specifically designed for the structural analysis and parsing of mind maps. We construct a bilingual dataset of mind maps with high-resolution images, rich document content, and diverse structural variations by parsing the source files of real mind maps and automatically synthesizing simulated mind maps. Based on this dataset, we meticulously design five structured understanding and parsing tasks, as illustrated in Fig. <ref>, including full parsing, partial parsing, position-related parsing, structured visual question answering (VQA), and position-related VQA. These tasks comprehensively assess the models' abilities to parse text and image information, recognize relationships between elements, and understand the overall structure. Additionally, we establish specific evaluation metrics, including field-level F1 scores  <cit.> and Tree Edit Distance (TED)-based accuracy  <cit.> for parsing tasks, and F1 scores for VQA tasks. Extensive experimental results indicate that there is significant room for improvement in current models, particularly in processing high-resolution complex graphical images and handling lengthy structured document information. This benchmark is expected to significantly advance research and application development in this field. Our main contributions are as follows: 1. We propose a new benchmark, MindBench, which to our knowledge, is the first benchmark specifically for the analysis of structured documents. 2. This benchmark includes a vast collection of structured document images and corresponding annotation data, along with accompanying evaluation metrics, providing a standardized tool for research in this area. 3. Utilizing this dataset, we train and test several leading models related to this field. The results show that although there has been progress in handling high-resolution complex graphical images and lengthy structured document information, there is still significant potential for improvement. § RELATED WORK Visual Document Understanding (VDU) aims to comprehend text-rich images covering a wide range of types including documents <cit.>, tables <cit.>, charts <cit.>, natural images <cit.>, and screenshots <cit.>. The tasks of VDU are diverse, encompassing visual question answering <cit.>, image captioning <cit.>, information extraction <cit.> and natural language inference <cit.>. However, the tasks of extraction and understanding for complex structured documents, such as mind maps, have not been taken into consideration. Models designed for VDU can be broadly categorized into two types: OCR-model-driven methods and OCR-free methods. OCR-model-driven methods <cit.> use the models to integrate visual data with detected text and layout information from off-the-shelf OCR models. OCR-free methods <cit.> learn text-layout recognition with a high-resolution image encoder in an end-to-end manner. Both of these VDU methods require fine-tuning for specific tasks. Multimodal Large Language Models (MLLM) have recently been developed for general visual language understanding <cit.>, leveraging the powerful language comprehension and general capabilities of Large Language Models (LLM) <cit.>. 
These approaches utilize a common architectural paradigm that connects a visual encoder, e.g., ViT <cit.> to a Large Language Model through a visual-to-text module, e.g., linear layers <cit.> or a Q-Former <cit.>/Resampler <cit.>/Abstractor <cit.> with learnable queries. To facilitate the comprehension of text-rich images by MLLMs, several research efforts <cit.> have explicitly conducted tuning instructions on visual text understanding datasets. To handle high-resolution document images, some methods <cit.> employ shape-adaptive cropping modules to segment images into resolutions suitable for ViT models. Additionally, to enhance the understanding of document text and structured information, various tasks such as text reading <cit.>, text grounding <cit.>, and table parsing <cit.> have been designed. However, these tasks primarily focus on learning text recognition and simple layout information, overlooking the complex interactions among elements in structured documents. In this paper, we introduce a comprehensive benchmark called MindBench for structured document parsing and understanding. This benchmark allows for the evaluation of various capabilities of existing models, encompassing document text recognition, layout information perception, and complex interaction understanding. § THE MINDBENCH DATASET §.§ Data Generation Data preparation. Given the limited availability of labeled mind map data online, we synthesize additional mind map data using a multi-step process. Firstly, we randomly sample textual content of the nodes. Then, we generate mind maps in various shapes by randomly sampling the number of nodes, node children, and depths. These structured mind maps are then rendered into images using the Graphviz tool. To ensure diversity, we incorporate various layout engines and a wide range of properties for nodes and edges. Furthermore, we randomly place 0 to multiple background images and apply Gaussian noise to bring background diversity. The synthetic examples are shown in Fig. <ref>. While synthesizing data, we also recognize the importance of validating the models on real-world data. Hence, we make efforts to download a limited number of mind map source files from open-source mind map websites, including XMind[<https://xmind.app/share/>,<https://xmind.cn/mindmaps-gallery/>], Biggerplate[<https://www.biggerplate.com/mindmap-library>], and Zhixi[<https://www.zhixi.com/space#src=btn>]. Data parsing. In order to obtain unified structured annotations for training and evaluation, we parse the raw files of two types of data, preserving the textual and structural information while removing redundant information. Fig. <ref> illustrates the parsing process for the crawled data. First, we use the XMind software to automate the export of PNG images and HTML tag files of the source files. The HTML file contains structured information about the mind map. Then, we employ BeautifulSoup to parse the HTML, maintaining the tree structure and relationships among nodes, and convert the mind map into a nested JSON format. In the JSON structure, the node's children were represented as a list, allowing for nested nodes. For training, we convert the JSON data into a token sequence, ensuring reversibility by adding hierarchical sequence numbers to nested nodes. To avoid confusion with existing special tokens in MLLMs, we prefix all attribute names with `s_'. 
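To illustrate the kind of reversible token sequence described above, the following sketch serializes a nested JSON mind map by attaching a hierarchical index to every node tag and prefixing attribute names with `s_'; the concrete tag names and numbering scheme here are our own guesses for illustration and may differ from the exact MindBench format.

```python
# Hypothetical nested-JSON -> token-sequence serializer (illustrative format only).
def to_token_sequence(node, index="1"):
    tokens = [f"<s_node_{index}>", f"<s_text>{node['text']}</s_text>"]
    for i, child in enumerate(node.get("children", []), start=1):
        tokens.append(to_token_sequence(child, f"{index}.{i}"))   # recurse with "1.2"-style index
    tokens.append(f"</s_node_{index}>")
    return "".join(tokens)

mind_map = {
    "text": "Central theme",
    "children": [
        {"text": "Topic A", "children": [{"text": "Subtopic A1", "children": []}]},
        {"text": "Topic B", "children": []},
    ],
}
print(to_token_sequence(mind_map))
# <s_node_1><s_text>Central theme</s_text><s_node_1.1>...</s_node_1.1>...</s_node_1>
```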
For the synthetic data, we directly convert the generated tree structure of the mind map into a token sequence, ensuring consistency with the labeling format of the crawled data. §.§ Task Definition Fig. <ref> illustrates five OCR-free tasks we designed, focusing on mind map structure parsing and understanding, which are elaborated in the following: Full parsing. As indicated by the red rectangle in Fig. <ref>, the task requires the model to return the full parsing results of the input mind map image, specifically the final token sequence discussed in the previous subsection. Mind map images, as depicted in Fig. <ref>, often have significantly higher resolutions than typical document images, with some exceeding 10,000 pixels. This demands models capable of processing high-resolution images. However, most existing MLLMs handle only up to 1000 pixels, and even advanced models <cit.> supporting up to 4k pixels struggle to clearly display text in many nodes. Furthermore, higher resolution mind maps contain more information, resulting in longer structured data, which presents a significant challenge for existing models. We utilize all crawled data and the majority of the synthetic data to perform this task. Part parsing. This task involves returning a subgraph centered around a specific node, resulting in shorter token output. This can alleviate pressure on models that struggle with insufficient processing length. However, it also poses new challenges, requiring the model to accurately identify the central theme node from the question and return its corresponding subgraph based on a thorough understanding of the mind map structure. Additionally, this task addresses the tendency of models to parse from the beginning, similar to the rationale behind continue reading task. However, this task does not provide preceding texts but prompts only with the theme name, posing a greater challenge. Position-related parsing. Similar to part parsing, this task also returns a subgraph of the mind map. The difference is that this task emphasizes spatial positioning, requiring the model to integrate capabilities in text recognition, spatial awareness, and relational parsing. Since the crawled data's exported HTML lacks coordinate information, this task is conducted on synthetic data, where we can extract the bounding boxes of each node from Graphviz source files. As in previous works <cit.>, we describe the bounding box as “<bbox>x1,y1,x2,y2</bbox>”, normalizing the coordinates to integers between 0 and 999. Structured VQA. Besides the parsing tasks, we design multiple VQA sets to enable explicit learning of the components of mind maps and their interrelationships. For instance, we craft prompts such as, “Describe the central theme of the mind map.” Typically, the central theme of a conventional mind map is easily identifiable, often located at the center or along the middle of an edge. However, in some layouts, such as the image in Fig. <ref>, identifying the central theme is challenging. An initial misprediction of the central node can lead to subsequent structural confusion and parsing failures. Thus, explicitly retrieving the central theme is crucial. We also design VQA tasks related to node kinship and hierarchical relationships, with specific prompts provided in Appendix  B. Position-related VQA. We design two types of position-related VQA tasks: recognition and grounding. In recognition tasks, the model receives node coordinates and returns answers about structural information. 
For example, the instruction “How many nodes are contained within the bounding box <bbox>[content]</bbox>?” requires the model for both localization and counting capabilities. In grounding tasks, the model receives node descriptions and returns the bounding box coordinates of the corresponding structure. For example, “Return the bounding box of the subgraph with the theme '[content]'.” The model needs to identify the central theme mentioned in the instruction, understand the positional relationships with its descendant nodes, and return the coordinates of the entire subgraph. The coordinates of the subgraph are represented by the minimum and maximum coordinates of all nodes within it. More position-related VQA prompts can be found in Appendix  B. Overall, the proposed five tasks are designed to enhance model comprehensive capabilities in text detection, relationship recognition, spatial awareness, and structure parsing. §.§ Statistic Dataset Splits. Table <ref> displays the data downloaded from multiple websites, segmented into training and testing sets. To accurately assess the model's ability to handle mind maps of varying complexities, we select a subset of simpler data (test^*) from the test set based on the crucial metric of node number, which serves as our default validation set. Our research indicates that using large mind maps with a higher number of nodes during the training phase greatly benefits structural parsing learning; therefore, our training set encompasses data of various complexities. Table <ref> lists the volume of synthetic data used for each task, with the key full parsing task utilizing a larger number of samples, and all synthetic data evenly distributed between English and Chinese. It should be noted that due to the non-uniqueness of node content and the absence of coordinate information in the crawled data, we primarily use this data for full parsing tasks to ensure high data quality. In the future, part parsing and VQA tasks could also consider utilizing this data for further research. Resolution. The sizes of images are crucial for model processing capabilities, hence we conduct a detailed analysis of the resolution distribution of the crawled data. As depicted in Fig. <ref>, we present the length of the longest side of images from various sources alongside their corresponding numbers. Among these, BXMind and BMManager feature relatively low resolutions, typically ranging from 1000 to 3000 pixels, while the resolution distribution of XMind exhibits a normal distribution pattern. Notably, Zhixi has higher resolutions, usually between 7000 to 8000 pixels, posing significant challenges to existing MLLMs: when these high-resolution images are scaled down to the input resolutions of the models, the texts often become illegible. As for the synthetic data, its resolution is influenced by the layout engine and the number of nodes. During synthesis, we uniformly sample these two parameters to ensure a consistent resolution distribution across all tasks. Token length. Token length is another crucial metric determining the processing capabilities of models. As illustrated in Fig. <ref> and Fig. <ref>, we conduct a detailed analysis of the token length distribution in both crawled and synthetic data. In the crawled data, the token lengths exhibit a long-tail distribution, particularly in samples from Zhixi, where many samples exceed 5000 tokens. 
This poses a challenge to existing MLLMs, as these models typically have a maximum processing length limited to 4096 tokens, including visual tokens. In the synthetic data, the token count for VQA responses usually falls below 100 tokens. Compared to full parsing, the token lengths for part parsing and position-related parsing are shorter. Additionally, the token length distribution in synthetic data is more uniform, with fewer extremes compared to the crawled data. Structure. To fully understand the structural distribution, Table <ref> provides detailed information on the number of nodes and depth across different datasets, with XMind and Zhixi exhibiting higher structural complexity, aligning with their resolution distributions. § EXPERIMENTS §.§ Experimental Setup Model. We evaluate several visual document understanding models <cit.> on the proposed benchmark. The criteria for selecting a baseline model are as follows: the model should be pre-trained on an extensive corpus of OCR and document data, support a sufficiently high input resolution, and be capable of handling documents of substantial length. For implementation details of each model, please consult the respective original publications. In this paper, all models use unified structure learning and perform different tasks depending on the prompt. Due to the limited quantity of the crawled data, it is up-sampled 10 times during training to balance the quantity between the two data types. Table <ref> provides a comparison of the model settings. We employ GPT4V <cit.> for two-shot inference to examine whether existing commercial models have the capability of structural graphical parsing. We then utilize one domain-specific model, Donut <cit.>, and three large document models <cit.> for SFT on our dataset. The training details, largely in line with the original papers, can be found in Appendix A. Metric. For the parsing tasks, following Donut, we evaluate the models using two metrics: field-level F1 score and Tree Edit Distance (TED) based accuracy. We first convert the predicted token sequence to JSON format to recover the tree structure of the graph. The F1 metric flattens the nested JSON into a non-nested format and then calculates the F1 score at each field. F1 can efficiently evaluate the extracted field information, but it cannot exactly measure the structure of the tree. The TED-based metric is appropriate for evaluating tree-structured documents. Specifically, it uses the Zhang-Shasha (ZSS) algorithm <cit.> to calculate the normalized tree edit distance (nTED) between the prediction tree and the answer tree, where the normalization factor n is the size of the answer tree. The accuracy based on nTED is then computed as max(1 - nTED, 0). For the VQA tasks, we simply evaluate the models with the F1 score. §.§ Comparison with SOTA MLLMs We compare the performance of existing visual document understanding models on the MindBench benchmark, as detailed in Table <ref>. GPT4V exhibits mediocre performance, indicating challenges for commercial models in parsing complex structured documents such as mind maps. Donut ranks second in parsing performance, significantly outperforming UReader and TextMonkey, and closely approaching the performance of IXC2-4KHD. This underscores the advantages of domain-specific models for parsing tasks. Although MLLMs are versatile, their capability in structured document understanding is not yet exceptional.
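As a concrete illustration of the field-level F1 metric described above, the self-contained C++ sketch below flattens a prediction tree and an answer tree into (path, value) fields and scores their overlap. The Node layout and flattening rules are simplified assumptions and may differ in detail from the benchmark's implementation.

#include <algorithm>
#include <cstdio>
#include <map>
#include <string>
#include <vector>

struct Node { std::string text; std::vector<Node> children; };
using Fields = std::map<std::string, int>;   // flattened field -> multiplicity

// Flatten a tree into fields keyed by node path, mirroring the idea of
// flattening nested JSON into a non-nested format.
void flatten(const Node& n, const std::string& path, Fields& out) {
    ++out[path + "/" + n.text];
    for (size_t i = 0; i < n.children.size(); ++i)
        flatten(n.children[i], path + "/" + std::to_string(i), out);
}

double fieldF1(const Node& pred, const Node& gt) {
    Fields fp, fg;
    flatten(pred, "", fp);
    flatten(gt, "", fg);
    int tp = 0, np = 0, ng = 0;
    for (const auto& kv : fp) {
        np += kv.second;
        auto it = fg.find(kv.first);
        if (it != fg.end()) tp += std::min(kv.second, it->second);
    }
    for (const auto& kv : fg) ng += kv.second;
    const double p = np ? double(tp) / np : 0.0, r = ng ? double(tp) / ng : 0.0;
    return (p + r) > 0.0 ? 2.0 * p * r / (p + r) : 0.0;
}

int main() {
    Node gt{"root", {}}, pred{"root", {}};
    gt.children.push_back({"A", {}});   gt.children.push_back({"B", {}});
    pred.children.push_back({"A", {}}); pred.children.push_back({"C", {}});
    std::printf("field-level F1 = %.3f\n", fieldF1(pred, gt));  // 2 of 3 fields match -> 0.667
}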
IXC2-4KHD delivers the best performance, likely due to extensive OCR data pre-training, higher resolution input, and the capability to handle longer token lengths. Additionally, we conduct evaluations on challenging test samples. There is a notable accuracy discrepancy between complex samples with over 60 nodes and simpler ones. This highlights that the capabilities of current MLLMs are still limited when it comes to analyzing complex mind maps, particularly in processing high-resolution complex graphical images and ultra-long structured document information. There is an urgent need for further improvement of MLLM technology. In Table <ref>, we compare the performance of UReader and IXC2-4KHD across five subtasks involving synthetic data. IXC2-4KHD consistently outperforms UReader in all tasks. Full parsing has notably lower accuracy than part or position-related parsing, indicating its greater complexity. Additionally, position-related tasks show consistently lower accuracy than other tasks within the same category, highlighting the challenges of integrating structured understanding with spatial perception. §.§ Ablation Study Unified structure learning. We conduct ablation experiments to analyze the impact of unified structure learning, as presented in Table <ref>. To expedite the experiments, we use half of the data for this ablation study. Initially, we fine-tune the UReader model on 50% of the crawled data and evaluate its performance on the XMind test set as well as the synthetic test set. Due to the disparity in graph style, the model struggles on the synthetic test set. Subsequently, we introduce the full parsing task with synthetic data during training, resulting in improvements on both the XMind and synthetic test sets. This indicates that incorporating synthetic datasets can significantly aid in parsing real mind maps, even in the presence of substantial style differences. Lastly, we integrate all tasks for unified structure learning. We train the model using 50% of the full parsing task data and 50% of other task data, maintaining the same total quantity of synthetic data as in the previous experiment. It can be observed that the model continues to show improvements on the XMind test set, highlighting the effectiveness of explicitly learning inter-node relationships and spatial information for comprehensive structure parsing. However, the model's performance slightly decreases on the synthetic test set, which may be attributed to the reduced quantity of synthetic data in the full parsing task. §.§ Qualitative Results We first investigate the structured parsing capability of existing MLLMs through zero-shot inference, as depicted in Fig. <ref>. It is evident that GPT4V exhibits superior parsing ability. However, when confronted with closely positioned nodes, it tends to assign child nodes to incorrect parent nodes. This behavior can be attributed to the model's inclination to rely on layout information rather than inter-node interactions for determining node relationships. On the other hand, IXC2-4KHD demonstrates weaker zero-shot parsing ability. While the model comprehends the markdown format in the prompt, it can only generate flat prediction results with incomplete texts. Next, we present the prediction results of UReader and IXC2-4KHD tuned on the MindBench, as depicted in Fig. <ref>. It is evident that IXC2-4KHD outperforms UReader across all four tasks, showcasing its strengths in comprehending node interactions, spatial perception, and structure parsing. In Fig. 
<ref>, IXC2-4KHD can successfully correlate spatial information with subgraph structure; however, it still faces challenges in parsing details, such as recognizing small text and accurately determining parent-child relationships. § CONCLUSION In this paper, we introduce MindBench, the first comprehensive benchmark designed for structured document parsing and understanding. MindBench stands out due to two primary features: 1) abundant structured document images with detailed annotations and evaluation metrics, providing a standardized research tool; 2) unified structure learning of five mind map understanding and parsing tasks that comprehensively assess a model's abilities in text recognition, spatial awareness, relationship discernment, and structured parsing. We empirically investigate multiple visual document understanding baseline methods on the MindBench dataset. Experimental results demonstrate that there is significant room for improvement in current models' performance, particularly in handling high-resolution complex images and processing lengthy structured documents. Future work. This paper primarily focuses on establishing a benchmark for structured document parsing of mind maps. Although the data sources include various styles such as tables, relationship diagrams, and posters, mind map data predominates. In the future, we aim to expand structured document parsing to encompass a wider range of graphical types, enabling the understanding of information in any graphical document. We extend our heartfelt appreciation to the XMind, Biggerplate, and Zhixi websites for providing open-source mind map data, which played a crucial role in organizing this dataset.
http://arxiv.org/abs/2407.01781v1
20240701202033
fVDB: A Deep-Learning Framework for Sparse, Large-Scale, and High-Performance Spatial Intelligence
[ "Francis Williams", "Jiahui Huang", "Jonathan Swartz", "Gergely Klár", "Vijay Thakkar", "Matthew Cong", "Xuanchi Ren", "Ruilong Li", "Clement Fuji-Tsang", "Sanja Fidler", "Eftychios Sifakis", "Ken Museth" ]
cs.CV
[ "cs.CV", "cs.GR", "cs.LG" ]
Author affiliations: NVIDIA Research (USA, Canada, and New Zealand) and the University of Wisconsin-Madison (USA).
[Teaser figure] fVDB is an integrated Deep Learning framework for sparse, large-scale, and high-performance spatial intelligence. It can process 3D data from a broad range of sources, including voxels, point clouds, and surface meshes. fVDB also offers a rich set of state-of-the-art differentiable operators, which can be used to build Deep Learning architectures for tasks in 3D Deep Learning, thus facilitating DL applications on large-scale and high-resolution 3D data.
§ ABSTRACT We present fVDB, a novel GPU-optimized framework for deep learning on large-scale 3D data. fVDB provides a complete set of differentiable primitives to build deep learning architectures for common tasks in 3D learning such as convolution, pooling, attention, ray-tracing, meshing, etc. fVDB simultaneously provides a much larger feature set (primitives and operators) than established frameworks with no loss in efficiency: our operators match or exceed the performance of other frameworks with narrower scope. Furthermore, fVDB can process datasets with a much larger footprint and spatial resolution than prior works, while providing a competitive memory footprint on small inputs. To achieve this combination of versatility and performance, fVDB relies on a single novel VDB index grid acceleration structure paired with several key innovations including GPU accelerated sparse grid construction, convolution using tensorcores, fast ray tracing kernels using a Hierarchical Digital Differential Analyzer algorithm (HDDA), and jagged tensors. Our framework is fully integrated with PyTorch, enabling interoperability with existing pipelines, and we demonstrate its effectiveness on a number of representative tasks such as large-scale point-cloud segmentation, high-resolution 3D generative modeling, unbounded-scale Neural Radiance Fields, and large-scale point cloud reconstruction.
CCS Concepts: Computing methodologies — Neural networks; Spatial and physical reasoning.
§ INTRODUCTION Deep Learning methods have been foundational to solving a wide variety of previously intractable problems in computer science.
These include building agents capable of passing the Turing test, generating high quality images from text prompts, speech and audio synthesis, and perception for robotics to name a few. Underlying these innovations lies a rich software ecosystem of deep learning primitives (such as convolution, pooling, and attention) which can be composed to build neural networks such as transformers or convolutional networks. These primitives are exposed to the programmer through deep learning frameworks such as PyTorch <cit.>, JAX <cit.>, or TensorFlow <cit.>. In common frameworks, these primitives operate on dense tensors of data, which often encode 1D or 2D signals (text or images). In the case of tasks in 3D, dense tensors are fundamentally limited in size due to cubic scaling and memory constraints. Fortunately, 3D data is often sparse in nature, only requiring information to be encoded in a subset of the volume such as in the interior or near the surface of a shape. Thus, there has been an emergence of frameworks <cit.> which operate on sparse 3D tensors of data. Correspondingly, many recent works propose network architectures which can operate on sparse 3D data <cit.>. Past sparse 3D learning frameworks leverage hash tables as the primary data structure for mapping 3D integer coordinates to tensor data. Such a data structure works well for operators such as convolution and pooling, but the lack of spatial coherence of accesses makes it inefficient for operators such as sampling, splatting, and ray tracing without the use of auxiliary acceleration structures. Thus, past frameworks typically include a small number of operators such as convolution and pooling. However, we note that modern 3D learning tasks often involve a number of complex operators that must be combined together. For example, <cit.> performs image-to-3d generation by unprojecting image features to a dense volume, leveraging a dense and sparse convolutional network to produce a sparse volume of learned features, then differentiably meshing and rendering this volume to produce a textured shape. Such a pipeline requires a number of complex differentiable operators (ray tracing, splatting, convolution, pooling, attention, meshing, and rendering) which can operate on sparse grids of learnable features. Currently, such pipelines are built using bespoke operators which glue together different acceleration structures (e.g. hash tables, occupancy bit fields and meshes) from different libraries. In this paper, we present , a novel deep-learning framework for operating on sparse 3D tensors. Our framework provides a wide host of differentiable GPU accelerated 3D operators which can be easily composed to build complex 3D learning pipelines. Each of these operators delivers performance that is on par with or exceeding the performance of state-of-the-art operators from other frameworks which are much narrower in scope. Furthermore, is memory efficient and is capable of processing much larger inputs than existing alternatives. Table <ref> summarizes the features of in contrast to existing 3D learning frameworks. The key innovation that enables us to develop a flexible and composable framework while still achieving state-of-the-art performance is a new data structure derived from NanoVDB <cit.>, which we call IndexGrid. 
This is paired with a novel ecosystem of tools for grid construction and traversal (see Section <ref>), accelerated ray marching (see Section <ref>), and a novel data processing paradigm that unlocks aggressive optimizations in the application of stencil-based operators (convolution in Section <ref>). While incorporating algorithms originally used in hash grid methods, which can be trivially adapted to our VDB structure, we also introduce new design paradigms that fit naturally within our representation. Specifically, we design optional convolutional alternatives that leverage efficient construction of locally densified, windowed views into the sparse data on which data regularity and aggressive utilization of tensorcores enable exceptional compute efficiency. Our core contributions include: * The design and deployment of a comprehensive API for spatial intelligence, with necessary primitives to accommodate a wide spectrum of high-value 3D Machine Learning tasks. * A new sparse data structure, IndexGrid, derived from NanoVDB <cit.> but with a drastically re-imagined programming and execution model aimed to aggressively accelerate stencil-centric operations. * A collection of GPU-optimized fast operators (convolution, attention, raytracing, etc) built around the IndexGrid structure, engineered to specifically target high efficiency on spatially sparse data. * A new benchmark for sparse convolution that highlights different workloads in terms of sparsity pattern and feature depth. * Memory efficient algorithms which enable scaling to much larger inputs than prior works. * A demonstration of the applicability of our framework to a variety of end-to-end training and inference applications from a broad spectrum of 3D Deep Learning tasks. § RELATED WORK Sparse Voxel Data Structures for Deep Learning Sparse 3D voxel grids are a common representation for deep learning on 3D data. Many past works such as <cit.> use a hash table to encode a mapping between 3D integer coordinates and offsets into a tensor of features. Such a mapping enables on average O(1) lookup of arbitrary features, however accesses are not spatially coeherent. Furthermore, hash tables are not effective acceleration structures for operations such as ray marching since they are not a BVH. Another line of works <cit.> use octrees instead of a hash table. These preserve spatial coherence and can be ray-marched efficiently, but at the cost of O(log N) access, and can grow quite deep for high resolutions. In contrast, uses a fixed depth, shallow VDB <cit.> tree, which enables O(1) amortized reads and writes, and serves as an effective acceleration structure for a wide range of operations (See Table <ref>). VDB is a widely used data structure in computer graphics and simulation with several implementations including OpenVDB <cit.> and NanoVDB <cit.> which implements a subset of OpenVDB on the GPU. More recently, NeuralVDB <cit.> added neural compression on top of NanoVDB. Lastly, there are works that allow for the definition of custom, sparse volumetric data structures, such as the Taichi domain-specific language <cit.>, which provides a means to emit optimized, differentiable code, with emphasis on simulation tasks. In contrast, is a general purpose framework targeting spatial sparsity, providing a collection of primitives that are useful to build end-to-end deep learning applications. 
Deep Learning Frameworks Deep learning architectures are constructed by composing together a series of differentiable operators with trainable parameters and optimizing those parameters via minimizing a loss functional over a dataset. In order to enable research and development of deep learning architectures, a number of software framework with composable primitives have arisen in the past decade. The most commonly used frameworks include PyTorch <cit.>, TensorFlow <cit.>, JAX <cit.>, and Keras <cit.>. These libraries expose primitives for operating on dense tensors of data (such as images and audio signals). 3D Deep Learning Software 3D deep learning tasks often involve more complex primitives which operate on sparse tensors. Common libraries such as the Minkowski Engine <cit.>, TorchSparse <cit.>, and SpConv <cit.> add support for constructing sparse tensors with basic operations such as convolution and pooling. Other libraries such as NerfAcc <cit.>, PyTorch3D <cit.> and Kaolin <cit.> provide other graphics operators such as ray tracing using dense bitfields and octrees as well as operators for meshes and graphs. Our framework, unifies many of these operations under a single library, providing a broader set of features than past works using only a single, highly versatile novel VDB acceleration structure. Applications of Sparse 3D Learning Frameworks Frameworks for deep learning on sparse tensors have been used in a number of important applications in deep learning including Point Cloud Processing <cit.>, 3D reconstruction of geometry from point clouds and/or images  <cit.>, perception <cit.>, and, more recently, 3D generative modelling <cit.>. exposes the operators to perform all these tasks under a single library using only our IndexGrid VDB as an acceleration structure. Section <ref> shows some demonstrative applications of our framework to different tasks in 3D Deep Learning. § METHOD As the name suggests, is built on the VDB data structure <cit.>, which offers both compact storage and fast access to sparse 3D data. However, unlike previous adoptions of VDB, e.g. in OpenVDB<cit.> and NanoVDB <cit.>, we have developed novel techniques specifically for machine learning on the GPU. This includes indexed storage, fast grid construction on the GPU, hierarchical Digital Differential Analyzers (DDAs) <cit.> for accelerated GPU raymarching, and blocked computation, each of which will be discussed below. Many of these improvements build on NanoVDB, yet they are essential to the framework and play a critical role in enhancing the performance of our ML system. §.§ Background: VDB As a preamble, let's briefly summarize some of the main characteristics of the VDB data structure (see <cit.> for more details). At the core, VDB is a shallow 3D tree structure, with a hash table at the root level and a fixed hierarchy of dense child nodes with progressively decreasing block sizes. The default configuration in OpenVDB, and only configuration in NanoVDB, is three levels deep with the fan-out-factors 32, 16, and 8, i.e. node sizes from root to leaf cover 4096^3, 128^3, and 8^3 voxels respectively. This configuration is denoted [Map,5,4,3] in <cit.>, where the integers are log2 of the nodes fan-out-factors. The fact that VDB is shallow means that it supports fast random (coordinate-based) access to values. Furthermore, VDB allows for inverse tree-traversal, by means of node-caching, which in practice makes random-access O(1). 
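As a rough illustration of the node-caching idea just mentioned, the toy C++ sketch below caches the origin of the last visited 8^3 leaf and only falls back to a full top-down lookup when a query leaves that leaf. The ToyTree here stands in for the real root/internal levels of a VDB tree and is not the OpenVDB/NanoVDB API; it only demonstrates why spatially coherent accesses become amortized O(1).

#include <array>
#include <climits>
#include <cstddef>
#include <cstdio>
#include <unordered_map>

struct Coord {
    int i, j, k;
    bool operator==(const Coord& o) const { return i == o.i && j == o.j && k == o.k; }
};
struct CoordHash {
    std::size_t operator()(const Coord& c) const {
        return (std::size_t(unsigned(c.i)) * 73856093u) ^ (std::size_t(unsigned(c.j)) * 19349663u) ^ (std::size_t(unsigned(c.k)) * 83492791u);
    }
};

struct Leaf { std::array<float, 512> values{}; };   // dense 8x8x8 block of values

class ToyTree {                                     // stands in for the root/upper/lower levels
    std::unordered_map<Coord, Leaf, CoordHash> mLeafs;
public:
    void setValue(const Coord& c, float v) {
        Leaf& leaf = mLeafs[Coord{c.i & ~7, c.j & ~7, c.k & ~7}];
        leaf.values[(c.i & 7) << 6 | (c.j & 7) << 3 | (c.k & 7)] = v;
    }
    const Leaf* probeLeaf(const Coord& c) const {   // slow-path lookup (full descent in a real tree)
        auto it = mLeafs.find(Coord{c.i & ~7, c.j & ~7, c.k & ~7});
        return it == mLeafs.end() ? nullptr : &it->second;
    }
};

class CachedAccessor {
    const ToyTree& mTree;
    Coord mOrigin{INT_MIN, INT_MIN, INT_MIN};       // origin of the cached 8^3 leaf
    const Leaf* mLeaf = nullptr;
public:
    explicit CachedAccessor(const ToyTree& t) : mTree(t) {}
    float getValue(const Coord& c) {
        const Coord origin{c.i & ~7, c.j & ~7, c.k & ~7};
        if (!mLeaf || !(origin == mOrigin)) {       // cache miss: descend the tree once
            mLeaf = mTree.probeLeaf(c);
            mOrigin = origin;
        }
        if (!mLeaf) return 0.0f;                    // background value
        return mLeaf->values[(c.i & 7) << 6 | (c.j & 7) << 3 | (c.k & 7)];
    }
};

int main() {
    ToyTree tree;
    tree.setValue(Coord{1, 2, 3}, 42.0f);
    CachedAccessor acc(tree);
    // The second, spatially coherent query hits the cached leaf and skips the tree lookup.
    std::printf("%g %g\n", acc.getValue(Coord{1, 2, 3}), acc.getValue(Coord{1, 2, 4}));  // 42 0
}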
However, despite these attractive properties of VDB we found that it had several shortcomings when naively attempting to use it for ML applications on the GPU. Specifically, ML applications require more flexibility in terms of supporting complex high-dimensional data types, and the ML computations, e.g. sparse convolution, on the GPU are typically bandwidth-limited, which means random-access operations should be limited and data should be reused (cached) as much as possible. §.§ VDB IndexGrids for ML Features By design, standard VDB encodes data values, e.g. floats or vectors, directly into the tree structure, i.e. values and topology (sparsity pattern) are mixed. That is, the data types (typically templated) and their numerical values are intertwined (both in terms of code and actual memory layouts) with their spatial occupancy (topology) information, compactly represented with bit-masks. This is problematic when dealing with data of arbitrary type and dimension (ML features). It severely complicates code if each feature needs its own template specialization, and it is memory inefficient in cases when the sparsity (topology) is shared between multiple feature/data types. Ironically, VDB was originally designed to handle situations where both topology and values are dynamic, but in ML we often found that topology is fixed, whereas data (payload) change in terms of type, value and dimension. To overcome these inefficiencies we developed a completely new grid type in NanoVDB, dubbed IndexGrid, which effectively separates topology and values encoded in VDB trees. Whereas the core idea behind IndexGrid is arguably simple, its efficient implementation is not. The idea is for the tree to return keys in the form of indices into external linear arrays of values, as opposed to the data values themselves, as is the case in standard VDB. In other words, the IndexGrid exclusively encodes topology information that is used to access any number of types of data values that reside in “sidecars”, i.e. separate memory blocks. This seemingly trivial technique greatly simplifies code and allows for a single IndexGrid to be reused with multiple data (features), which amortizes the cost of encoding shared topology. There is another less obvious benefit to this IndexGrid, which is related to the fact that all nodes in VDB are fundamentally dense blocks, e.g. a leaf node traditionally encodes 8^3=512 values, regardless of the occupancy of the sparse data. A naive implementation of an IndexGrid indexes all 512 leaf values, but there is a much more memory-efficient version of the IndexGrid that only indexes the sparse (denoted active) leaf values. This significantly reduces the memory footprint of the sparse data (features stored externally as sidecars) since it eliminates the need to explicitly store values in leaf nodes that represent background values (as opposed to inserted active values). We achieve this sparse (vs dense) indexing of active values with the following highly efficient code.

C++ code that computes sparse indices from coordinates:

class LeafNode {
    uint64_t mOffset, mPrefixSum, mBitMask[8];
    ...
    int off(int i, int j, int k) {return (i & 7) << 6 | (j & 7) << 3 | (k & 7);}
    uint64_t getValue(int i, int j, int k) {
        int m = this->off(i, j, k), n = m >> 6;
        uint64_t w = mBitMask[n], mask = uint64_t(1) << (m & 63);
        if ((w & mask) == 0) return 0;// index to background
        uint64_t sum = n-- ? mPrefixSum >> (9*n) & 511 : 0;
        return sum + mOffset + countOn(w & (mask - 1));
    }
};

C++ code that computes offsets in nodes from coordinates:
int lower::off(int i, int j, int k) {
    auto a = [](int n){return (n & 127) >> 3;};
    return a(i) << 8 | a(j) << 4 | a(k);// 0,1,..,16^3-1
}
int upper::off(int i, int j, int k) {
    auto a = [](int n){return (n & 4095) >> 7;};
    return a(i) << 10 | a(j) << 5 | a(k);// 0,1,..,32^3-1
}

In words, this compact code computes the linear offset from the signed coordinates (i,j,k) to values stored in an external array, starting at the leaf's base offset mOffset. Specifically, m ∈ {0,…,511} is the linear index inside the leaf node, and n ∈ {0,…,7} is the offset into the 64-bit array mBitMask that indicates which of the dense 512 values are active, i.e. on. w is the 64-bit word in mBitMask that contains bit m, and mask - 1 masks out bit m and all higher bits in w, so as to only consider the active states of values preceding (i,j,k). The if-statement returns a zero offset if (i,j,k) maps to an inactive value, which corresponds to a unique background index. If w is not the first word in mBitMask, the next line extracts the preceding active-value count encoded in the 7×9 bits of mPrefixSum, which stores prefix sums over the first seven 64-bit words of mBitMask (the last word is excluded). Finally, the return statement adds the number of on bits in w that precede bit m. Despite the apparent complexity of this code, it is very fast since it contains only two conditionals and otherwise relies on cheap bit operations and intrinsic function calls (e.g. the population count underlying countOn). Also, note that each leaf node in an IndexGrid only requires 80 bytes to encode all indices, as opposed to over 4KB when all 512 indices are stored explicitly, i.e. a memory reduction of over 50× relative to a naive indexing approach. As mentioned above, IndexGrid also introduces memory savings by reusing topology for multiple data and avoiding explicitly storing inactive, i.e. background, values, which is especially important for sparse data. §.§ GPU Accelerated IndexGrid Construction While shared topology information is efficiently handled with our new IndexGrid, there is still a need to dynamically change the sparsity pattern, e.g. during morphological dilation, which is essential when building Level-of-Detail (LOD) hierarchies for sparse CNNs. In OpenVDB, dynamic topology is handled with allocation on insertion on the CPU, whereas in standard NanoVDB the topology is assumed to be fixed on both the GPU and CPU. Thus, there is a need to develop new techniques for building IndexGrids on the GPU, in order to rapidly build grids with different topology. A high-level description of our novel algorithm that builds IndexGrids from coordinates is as follows:
* Input: N signed voxel coordinates.
* Define N 64-bit keys a as in Fig. <ref>.
* Full radix sort of the N keys a.
* Run-length-encode the N keys a.
* For the (i,j,k) in each run M=0,1,…, define keys b as in Fig. <ref>.
* Partial radix sort of the keys b associated with run M.
* The upper node count is the number of runs M in a.
* The lower node count is the number of unique lower-node keys.
* The leaf node count is the number of unique leaf-node keys.
* Use the node counts to allocate device memory as in Fig. <ref>.
* Build the NanoGrid using the following top-down steps:
* Use a to register upper nodes into the root table.
* Use the sorted keys to register lower nodes into their parent nodes.
* Use the sorted keys to register leaf nodes into their parent nodes.
* Use the sorted keys to register active voxels into the leaf nodes.
* Optionally add ML features as blind data as in Fig. <ref>.
Note that despite the complexity of the build algorithm outlined above, it is fast since virtually all steps can be performed in parallel on the GPU, and high-performance implementations of both radix sort and run-length encoding are available in CUDA's CUB library <cit.>. In fact, this build algorithm allows us to construct an IndexGrid from millions of voxel coordinates in a few milliseconds.
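The counting stage of this build can be illustrated with the following CPU-side C++ sketch, which derives the upper/lower/leaf node counts by sorting and de-duplicating per-level keys. The 64-bit key layout below is an assumption (the paper defines its keys in a figure we do not reproduce) and, for brevity, coordinates are assumed non-negative; the GPU implementation replaces std::sort and std::unique with CUB radix sort and run-length encoding.

#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

struct Coord { int i, j, k; };

// Pack the (i,j,k) prefix identifying the node that contains a voxel at a
// given tree level; shift is 12 for upper (4096^3), 7 for lower (128^3),
// and 3 for leaf (8^3) nodes. Assumes 0 <= coordinate < 2^21 so that each
// shifted component fits in 21 bits of the 64-bit key.
static uint64_t nodeKey(const Coord& c, int shift) {
    auto u = [shift](int v) { return uint64_t(v) >> shift; };
    return u(c.i) << 42 | u(c.j) << 21 | u(c.k);
}

static size_t countUnique(std::vector<uint64_t> keys) {
    std::sort(keys.begin(), keys.end());                         // radix sort on the GPU
    return std::unique(keys.begin(), keys.end()) - keys.begin(); // run-length encoding on the GPU
}

int main() {
    std::vector<Coord> voxels = {{0,0,0}, {1,2,3}, {9,0,0}, {200,0,0}, {5000,0,0}};
    std::vector<uint64_t> upper, lower, leaf;
    for (const Coord& c : voxels) {
        upper.push_back(nodeKey(c, 12));
        lower.push_back(nodeKey(c, 7));
        leaf.push_back(nodeKey(c, 3));
    }
    // The unique-key counts are exactly the node counts needed to allocate the
    // root table, upper/lower nodes, and leaf nodes in one pass.
    std::printf("upper=%zu lower=%zu leaf=%zu\n",
                countUnique(upper), countUnique(lower), countUnique(leaf));  // upper=2 lower=3 leaf=4
}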
§.§ Hierarchical DDA for fast Ray-Marching of VDB Efficient ray marching of our underlying data structure is essential for multiple tasks typical in 3D deep-learning workflows, including differentiable rendering, unprojecting image features into a 3D volume, depth computation, debug visualization, and final rendering. To this end we are using an acceleration technique, dubbed HDDA, that employs a hierarchy of Digital Differential Analyzers (DDAs), which accelerate ray marching on each of the tree levels of a VDB. While this technique was previously announced in a technical talk <cit.>, we reiterate the process with more detail and technical elaboration in this paper. The core idea of the HDDA is to associate four different DDAs with a given VDB tree structure – one for each of the node levels corresponding to the coordinate domains {4096^3, 128^3, 8^3, 1^3}. In other words, the first DDA rasterizes a ray at the granularity of the root's child nodes of size 4096^3 voxels, and the last (fourth) DDA rasterizes a ray at the fine voxel level. So, instead of slowly advancing the ray-marching at the voxel level, which would require numerous redundant random accesses into the VDB, we can use the coarser DDA in the hierarchy to effectively leapfrog through empty space. Given the fact that the VDB tree configuration is known at compile-time, we can use Template Meta-Programming to inline the logic of the four DDAs, resulting in a single high-performance HDDA. This significantly accelerates ray-marching and allows for real-time ray-tracing of VDB volumes on the GPU (typically marching millions of rays per second). We have illustrated this idea using two spatial dimensions in Fig. <ref>. Our benchmark demonstrates a runtime that is 1.5x to 3x faster than DDA in the dense bitfield and over 100x less memory footprint, as reported in  <ref>. §.§ Accelerated Sparse Convolutional Operators has been designed to be compatible with highly efficient algorithms for convolutional operations on sparse data, such as the Sorted Implicit Gemm (SpConv v2) paradigm used in TorchSparse++. We emphasize that leveraging such highly-tuned libraries in the context of our hierarchical, tree-based indexing structure is a straightforward exercise: is effectively a locality-optimizing mapping between a sparse collection of lattice indices and a one-dimensional, linear index space. Contrary to random hash-based maps, inherently provides the property that active indices that are geometrically proximate in the containing 3D lattice, will have high probability of also being proximate in linear index space. Conversely, active voxels corresponding to a contiguous sub-sequence of linear indices are highly likely to be geometrically clustered together in the containing 3D lattice. Other than this (favorable) inherent property of the indexing scheme, our data structure is drop-in compatible with implementations that originate from hash-based structures (e.g. SpConv v2) by simply treating the linear index of each active voxel as a “hash key” (but with built-in locality properties). We have incorporated SpConv v2 into our operator toolkit and, as our micro-benchmarks reveal, we at minimum match the efficiency of TorchSparse++ at the operator level within our framework. Even though SpConv v2 is trivially compatible with , we have identified a number of scenarios where a new design perspective on convolutional kernel design can provide even higher performance. 
Although we present the circumstances leading to this acceleration opportunity, and detail our proposed algorithmic design choices, we highlight that retains the ability to select the best applicable algorithm to match each case, including either the all-around performer SpConv v2, or our new kernels for those scenarios that warrant their use. Although we defer discussion of esoteric details of SpConv v2 to the related publications <cit.>, we highlight that its design is motivated by the following objectives: * Minimization of wasted computation, in the form of MACs (multiply-accumulate operations); relative to dense convolution, wasted computation could be either due to sparse occupancy of the background lattice, or sparse presence of the (max 27) stencil “spokes” across different lattice locations where a convolution stencil is applied. * Maximization of regularity of operations; this typically manifests as an aspiration to perform the largest structured GEMM operation afforded by data layout and sparsity pattern. * Minimization (or elimination) of scatter operations, and spatial localization of gather operations. These design objectives become much more difficult to reconcile in the presence of significant sparsity and geometric irregularity. Scenario 1: Low-depth convolutions (Leaf) The first scenario where approaches striving for economy of computation might face diminishing returns is when the kernel is severely memory-bound. This possibility can easily materialize in the case of a convolution where both the input and output feature dimension is relatively low (e.g. not exceeding 8-16). As a tangible example: consider a convolution at TF32/FP32 precision with activation dimension of 8, and output dimension of 16. A dense convolution at those depths, applied to an N^3 grid requires streaming at minimum 96N^3 bytes (assuming perfect caching), and the performance of 6912N^3 operations. On an RTX 6000 Ada Generation GPU (peak memory bandwidth of 960GB/s) this would require about 70TFLOP/s, which is an achievable compute density, to have this kernel be memory- rather than compute-bound. The calculus is not so straightforward when we contemplate sparsity, but we have practically witnessed this operation being pronouncedly memory-bound even at (local) sparsity of as little as 15-20%. This is due to the inefficiency of necessary gather operations, the cost of indirection for accessing low-depth feature vectors, and the overhead of indexing data structures themselves. Additionally, even compute efficiency may be challenging due to the complexity of harvesting large-enough GEMM operations when the contraction dimension (8, in this example) is so shallow. In light of this, we consider an alternative where we prioritize regularity over sparsity of computation, essentially tolerating a higher compute burden for the sake of more local structure. Specifically, we have implemented a kernel that performs local densification in GPU shared memory, at the level of an 8× 8× 8 leaf node, and performs a fully regular and (locally) dense convolution within this window. In detail, we allocate space in shared memory for a locally densified copy of the input activations in a window of size 10× 10× 10 stradding the leaf node, plus a one-voxel halo in its immediate neighborhood (a footprint of 31.25KB for 4-byte FP32/TF32 data, at feature width of 8). Likewise, the output of this operation is an 8× 8× 8 buffer of 16-wide output feature vectors (footprint of 32KB) also stored in shared memory. 
We subdivide the 8^3 local domain into 32 8× 2 × 1 subtiles, assign each of them to a warp (1024 total threads) and use 16× 16 × 8 WMMA tensorcore GEMMs (at TF32 precision with FP32 accumulate) within each warp to apply each of the 27 spokes of the stencil. Even though this paradigm clearly performs more computation than strictly necessary (foregoing sparity due to either voxel or stencil occupancy), the regularity of the computation in combination with the memory-bound nature of this scenario allows for superior performance (relative to our SpConv v2 default backend) in leaf nodes that have an occupancy of 20% or higher (all the way to an approximately 2.5x-3x advantage for a dense domain). It should also be noted that no auxiliary indexing structures are necessary for this kernel approach, all gather offsets are computed directly and efficiency from the (very lightweight) metadata of the core tree structure, taking advantage of amortization. Finally, due to the compact and local storage of all (output) feature vectors within a leaf node, the writeback of the convolution result into global memory occurs on a fully sequential memory range (all active indices within a leaf node are sequentially indexed). Scenario 2: High local occupancy convolutions (Brick) The second scenario we target for a tuned approach is when the sparsity pattern exhibits high density in the vicinity of active indices (e.g. when on average every active index has more than 70-80% of its stencil neighbors as active), even though the domain is macroscopically sparse. Typical cases where this scenario materializes is when the active indices are predominantly clustered in a narrow band of small but nontrivial thickness (e.g. 2-3 voxels wide), and also on dense or semi-dense domains that are still targeted with our representation. In addition, we look for instances where such topology is coincident with moderate-to-high depth of input/output features (width of 32 or higher), when the kernel no longer is memory-bound as in scenario 1 above. For this case, we have implemented a solution that replicates the local densification paradigm, as above, but instead of this being performed at the granularity of an 8× 8× 8 window, we focus on a kernel that monolithically produces the convolution output on a narrower 4× 2× 2 window. Input activations are fetched on-demand from the spatial extent encompassing the 6× 4× 4 window (including a 1-voxel halo) around the 4× 2× 2 block. We have developed a custom tensorcore implementation of the convolution operation using the CuTe library <cit.> that achieves exceptionally high compute density (exceeding 70% peak compute bandwidth for moderate feature depths of about 32-64, and reaching above 90% for feature depths of 128 or higher) for the task of computing the locally-dense convolution on the 4× 2× 2 output window. Any residual suboptimality in this case is due to inactive voxels at the scale of the 4× 2× 2 window, or stencil spokes that are not present for any of the active voxels. In practice, we have observed that for occupancy patterns that exceed 60-70% on average across such windows, this implementation outperforms SpConv v2, with the most notable margin observed in dense or semi-dense domains that have even higher average occupancy. 
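For intuition, the serial C++ reference below sketches the local-densification idea behind Scenario 1 (which Scenario 2 refines to smaller 4×2×2 bricks): gather a leaf's 8^3 window plus a one-voxel halo into a dense 10^3 buffer, run a fully regular dense 3^3 convolution, and write results back only for active voxels. The real kernels perform this per leaf in GPU shared memory with tensorcore GEMMs; the map-based grid and single feature channel here are simplifications for illustration only.

#include <array>
#include <cstdio>
#include <map>
#include <tuple>

using Key  = std::tuple<int, int, int>;   // voxel coordinate
using Grid = std::map<Key, float>;        // active voxel -> feature (1 channel)

// w[dz+1][dy+1][dx+1] is the 3x3x3 stencil weight.
Grid leafConv(const Grid& in, const std::array<int, 3>& leafOrigin, const float (&w)[3][3][3]) {
    float dense[10][10][10] = {};         // leaf window + halo, zero-initialized background
    for (int z = -1; z < 9; ++z)
        for (int y = -1; y < 9; ++y)
            for (int x = -1; x < 9; ++x) {
                auto it = in.find({leafOrigin[0] + x, leafOrigin[1] + y, leafOrigin[2] + z});
                if (it != in.end()) dense[z + 1][y + 1][x + 1] = it->second;
            }
    Grid out;
    for (int z = 0; z < 8; ++z)
        for (int y = 0; y < 8; ++y)
            for (int x = 0; x < 8; ++x) {
                Key key{leafOrigin[0] + x, leafOrigin[1] + y, leafOrigin[2] + z};
                if (!in.count(key)) continue;           // only active voxels produce output
                float acc = 0.f;
                for (int dz = -1; dz <= 1; ++dz)        // dense, fully regular inner loops
                    for (int dy = -1; dy <= 1; ++dy)
                        for (int dx = -1; dx <= 1; ++dx)
                            acc += w[dz + 1][dy + 1][dx + 1] * dense[z + 1 + dz][y + 1 + dy][x + 1 + dx];
                out[key] = acc;
            }
    return out;
}

int main() {
    Grid g{{{0, 0, 0}, 1.f}, {{1, 0, 0}, 2.f}, {{0, 1, 0}, 3.f}};
    float box[3][3][3];
    for (auto& plane : box) for (auto& row : plane) for (auto& v : row) v = 1.f;  // box filter
    for (const auto& [k, v] : leafConv(g, {0, 0, 0}, box))
        std::printf("(%d,%d,%d) -> %g\n", std::get<0>(k), std::get<1>(k), std::get<2>(k), v);
}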
Scenario 3: Highly sparse topology, high feature depth (LGGS) The last scenario where we have provided a custom implementation addresses the instance where the occupancy pattern is so sparse that on average every active index is expected to have no more than 4-5 active neighbors (out of 26 max). In addition, this has to be combined with relatively high feature depth, typically of 128 or above. This scenario is characteristic of LiDAR data, as those presented in SemanticKITTI <cit.>. Although our default SpConv v2 implementation performs an adequate job at minimizing wasted MAC operations, the number of those may still exceed the essential MACs mandated by the stencil occupancy of active indices. In principle, if our sole objective was to minimize wasted MACs, the traditional gather-GEMM-scatter paradigm provides a pathway to achieving this goal. However, the reasons why the straightforward implementation of this paradigm will typically underperform SpConv v2 is due to the need for several independent streaming passes over the input activations (one for each of the 27 stencil offsets), and due to the suboptimality of scattering results to global memory. We circumvent these concerns by taking the following steps: (a) We block the gather-GEMM-scatter operation so that it is performed on a contiguous subsequence of output indices from the data structure, typically 64 indices at a time. Due to the locality of the mapping, those indices are expected to correspond to highly clustered geometric coordinates from one or more IndexGrid leaf nodes. (b) Instead of scattering results to global memory, we use a temporary buffer in GPU shared memory as the destination of scatter operations on these 64 indices, which collect the contribution of each of the 27 stencil offsets within this block. At the end of the local computation, this result is sequentially copied back to global memory without the need of a scatter operation. (c) For each of the 27 stencil offsets, we collect all input/output index pairs that are linked by this offset (such that the output index is within the range of the block being processed), and pack them contiguously again in shared memory buffers. For each stencil offset, the input of this packed buffer is gathered from global memory (benefiting from locality across offsets). A GEMM operation is performed to produce the output, still in packed format, to be scattered (purely in shared memory) to the accumulation buffer that stores all 64 output vectors. We pad these packed collections of input/output index pairs to the next multiple of 16, for purposes of easy mapping to tensorcore-accelerated GEMM. This is the only source of wasted MACs, which is now limited to at most 15 MACs per block of 64 output indices (practically, the expected length of this padding is closer to 8 entries per 64 output indices). Our benchmarks demonstrate a runtime that is approximately 25% faster than SpConv v2 (at feature length 128 or higher) for the single-scan point clouds of SemanticKITTI. §.§ Framework Overview At its core, exposes a set of differentiable deep learning primitives which operate a minibatch of sparse voxel grids. a set of multiple sparse voxel grids where each voxel contains some multi-dimensional tensor of data. To encode such a minibatch of grids, employs two classes: a which represents a set of NanoVDB index grids (one per item in the batch) and a which encodes a tensor of per-voxel features at each voxel in the minibatch. 
Internally, a is simply a contiguous block of NanoVDB IndexGrids stored one after the other with some metadata to quickly access any grid in the batch. Below, we give a description of the and classes as well as a summary of the primary operators exposed to the programmer by . §.§.§ In general, we cannot expect each grid within a minibatch to have the same number of voxels. Thus, must expose operations on jagged arrays of data. exposes the class for this purpose. Conceptually a can be thought of a list of tensors [t_1, t_2, … t_B] where each tensor t_i has shape [N_i, *] each tensor has different first dimension but matches in subsequent dimensions. For example, if a represents per-voxel attributes in a batch of grids, then N_i will be the number of voxels in the i^th grid in the batch. Under the hood, efficiently encodes these tensors contiguously in memory to enable fast operators on them. Specifically, a consits of three parts: * which is a [N_1 + … + N_B, *]-shaped tensor equivalent to concatenating t_1, … t_B along their first axis * which is a [B, 2]-shaped tensor such that [i, :] is the start and end tensor t_i in * which is a [N_1 + … N_B]-shaped tensor such that [i] is the index (from 0 to B-1) of the i^th element in Figure <ref> shows this layout pictorially. Note that and are also available for since these represent a jagged collection of voxels. In the subsequent paragraphs, a tensor shape of -1 refers to a jagged dimension. For example, a containing the voxel coordinates of a would have shape [B, -1, 3]. §.§.§ List of Operators supports a range of differentiable operators on minibatches of sparse voxel grids of tensor data. These operators are written in CUDA and C++ and interoperate with PyTorch. Here we give a high-level description of the major operators in . A concise summary of these are given in Table <ref>. Grid Construction Operators A in can be created from a of point clouds; voxel (ijk) coordinates; triangle meshes (the set of voxels which intersect a mesh); other via padding, coarsening, or subdivision; and from dense grids with masks. Sampling Operators A common operator is to sample tensor values on a voxel grid at a set of query points Q ∈ 2^ℝ^3. provides differentiable sampling operators which accept a G, a of per-voxel features Z with shape [B, -1, *], and a of query points Q with shape [B, -1, 3]. These operators return a set of features Z_Q sampled at each point q ∈ Q using Trilinear or Bézier interpolation. Splatting Operators supports splatting data stored at points onto a grid using Trilinear or Bézier interpolation. These operators accept a G, a P of points, and a Z of per-point features. They produce a of features (one per voxel in G) by splatting the feature at each point onto the neighboring voxels. Convolution, Pooling, Upsampling, and Attention supports sparse convolution via a novel accelerated implementation (Section <ref>). The convolution operator accepts a G_in, a kernel K, and a of features Z_in and produces a G_out, and Z_out by performing sparse convolution. We further support average and max pooling operators on a and pair as well as an upsampling operator which upsamples a and of features via subdivision and nearest neighbor sampling. supports attention by calling out to Flash Attention <cit.> on a . Ray Marching comes with a number of operators for intersecting rays with grids. 
These include enumerating the set of voxels along a ray, parameterized by intervals of t along a ray which intersect a grid; finding the intersection between rays and the level set of an implicit function stored on a grid; and volume rendering. Ray marching operations are implemented using a hierarchical DDA algorithm outlined in Section <ref>. § EXPERIMENTS In this section, we demonstrate the effectiveness of through a series of benchmarks and qualitative examples of use cases. Our experiments demonstrate that our framework successfully covers a broad variety of use cases and operations, while achieving state-of-the-art runtime performance and memory efficiency. First, we perform micro-benchmarks of the most important operators in , comparing them against corresponding state-of-the-art operators in other sparse deep learning frameworks in terms of both memory usage and speed. Next, we run a macro-benchmark showing that remains performant in the real-world use case of training a sparse convolutional neural network (CNN). Finally, we demonstrate the utility of by showing its use in several key applications on high-resolution 3D data. These applications include 3D reconstruction from points, semantic completion, 3D shape generation, and neural radiance field rendering. §.§ Micro-benchmarks We evaluate the runtime performance and memory efficiency of the core primitive operations in , comparing against operators available in other frameworks. First, we compare the speed and memory footprint of our core algorithm for index grid construction, which converts a list of integer or point coordinates to a VDB IndexGrid on the GPU. All grid construction operations (from meshes) make use of this build algorithm, so this is a crucial benchmark. Second, we evaluate the performance of our HDDA ray marching algorithm, which is the backbone of all ray-tracing algorithms in the framework. Finally, we evaluate the performance of our convolution operator on a novel benchmark consisting of a variety of real-world examples spanning different sparsity patterns and channel depths. Each data-point for the experiments on grid construction and convolution, sections <ref> and <ref> respectively, were averaged from the 4 best runs out of 5 runs to mitigate outliers. Between each run we made sure to clear the device's L2 cache to make sure that no framework was benefiting from the uneven advantages of a warm cache. The experiment in sections <ref> was run on a machine with an AMD 7950X 16-Core CPU and GeForce RTX 4090 GPU, with 128GB of host memory and 24GB of device memory. The experiment in section <ref> was run on a machine with an AMD 3975WX 32-Core CPU and RTX 6000 Ada Generation GPU, with 128GB of host memory and 48 GB of device memory. The experiment on ray marching in section <ref> was performed by averaging the results of 1,000 runs where each run consisted of casting 1,024 rays. This experiment was run on a machine with an AMD 3975WX 32-Core CPU and GeForce RTX 3090 Ti GPU, with 128GB of host memory and 24GB of device memory. §.§.§ IndexGrid Construction The IndexGrid construction algorithm, detailed in Section <ref>, converts a list of integer or point coordinates into a VDB IndexGrid on the GPU. It forms the backbone of all grid constructions in , while also acting as a means to initialize sparse grids. 
We evaluate the runtime performance and memory footprint of our grid construction algorithm against those in TorchSparse++ <cit.>, Minkowski Engine <cit.>, and spconv <cit.> by constructing a grid with random points sampled from a normal distribution. Figure <ref> shows the maximum memory usage and runtime when constructing a grid from an increasing number of input points. Our method is comparable to baselines in terms of runtime performance while offering significant advantages in terms of memory efficiency. We remark that the three baseline approaches run out of memory long before ours. Thus, can process much larger input data than current state-of-the-art sparse DL frameworks. §.§.§ Hierarchical DDA We profile our HDDA ray marching on a 3-voxel-wide narrow-band level set of the Stanford bunny extracted at various (effective) resolutions ranging from 32^3 to 1024^3. The ray marching axis-aligned bounding box of the bunny is 1.2x of its tight axis-aligned bounding box and all rays are always marched through the entire volume constructing intervals along the ray. We compare our algorithm with the widely used NerfAcc <cit.> library (e.g. by NeRFStudio <cit.>) for ray marching and volume rendering. NerfAcc provides a highly optimized DDA over a dense binary grid implemented in CUDA. Table <ref> shows that constantly achieves 1.5x to 3x faster runtimes than NerfAcc while maintaining a comparable or lower (up to 100x at high resolutions) memory footprint. The same conclusion applies to the real-world scene as well, where in the large-scale NeRF application (<ref>) we observe 1.3x faster ray marching with comparing to NerfAcc, and 30x less memory footprint at effective 1024^3 resolution on the Laguna Seca Raceway scene. §.§.§ Sparse Convolution We profile our core convolution operators across a range of different feature depths: A low-depth regime with input depth of 8 and output depth of 16, a medium depth case with input and output depths of 32, and a high-depth scenario with input and output depths of 128. Orthogonal to feature depth, we examine three different degrees of sparsity: * a highly sparse regime leading to voxel occupancy (at the IndexGrid leaf node level) below 20%, harvested from typical single-scan LiDAR datasets of rasterized point clouds <cit.> * a case of moderate leaf node-level occupancy of 20-40%, originating from rasterized surfaces, and * a case of higher density stemming from rasterization of volumetric data with nontrivial codimensional thickness, with leaf node-level occupancy in excess of 40% The performance plots in Figure <ref> include four implementations available in our framework: * an adaptation of SpConv v2 (labeled IGEMM) that employs our tree-derived indexing scheme instead of a spatial hash * local densification at the leaf-node level (Scenario 1 in Section <ref>; labeled Leaf in the figure) * local densification at a 4× 2× 2 “brick” (Scenario 2 in Section <ref>; labeled Brick in the figure) * the shared-memory Local Gather-GEMM-Scatter paradigm of scenario 3 in Section <ref> (labeled LGGS in the figure); this last option is only leveraged for high-depth convolution operations As can be surmised from Figure <ref>, these four approaches allow us to select an operator implementation that is the most competitive to alternatives (i.e. those not incorporated as possible backends in ) in each case. 
We note that in our experiments, optimizations beyond the IGEMM baseline were deployed when appropriate as part of the inference pipeline only; for training we defaulted to the IGEMM option for simplicity and as to avoid further specialization of the gradient computation for the filter coefficients. Our benchmark also indicates the TFLOPS achieved by the top performer in each instance. This is an “effective” TFLOPs figure that reflects the method's degree of success in leveraging both spatial sparsity, and stencil sparsity (e.g. avoiding, to the degree possible, unnecessary multiply-and-accumulate (MAC) operations for stencil weights that are absent at specific grid locations). We compute this “effective TFLOPS” figure by counting the bare minimum number of operations essential for the stencil application, excluding from this count operations that would be associated with null weights. These numbers should be contrasted with the architectural ceiling of 73TFLOPS (or 82.6TFLOPS with a boost clock) on the RTX 4090 platform used in these experiments. §.§ Macro-benchmarks §.§.§ Full Network Inference We benchmark the end-to-end performance of -based network inference. To this end, we leverage the generative backbone of XCube from <cit.>. Such a backbone has a typical encoder-decoder structure and is representative for sparse U-Net designs by first applying a set of downsampling operations to reduce spatial resolution and then upsampling to the original scale. Our dataset is based on a voxelized version of the KartonCity <cit.> dataset containing 500 representative samples, where we uniformly pick spatial resolutions from 256, 512, and 1024. This dataset contains dense geometry of a synthetic city that is suitable for generative tasks. Detailed speed comparison on different configurations of the network are shown in Figure <ref>. We consistently perform better than the state-of-the-art baselines under different spatial resolutions and channel sizes. Our results were averaged from the 4 best runs out of 5 runs to mitigate outliers. Between each run we made sure to clear the device's L2 cache to make sure that no framework was benefiting from the uneven advantages of a warm cache. The experiment was run on a machine with an AMD 7950X 16-Core CPU and GeForce RTX 4090 GPU, with 128GB of host memory and 24GB of device memory. §.§.§ Neural Radiance Fields We run the full end-to-end neural radiance fields training and testing session based on a reference implementation of Instant-NGP (iNGP) <cit.>. In order to query the color of a sampled ray, one would first perform ray marching through the scene to obtain samples close to the scene surface. The features at the sample positions are then retrieved and volume rendered to aggregate the final color. In <cit.>, a cascade of binary grids of varying voxel sizes is used to represent the rough sparsity of the scene. By replacing the cascaded grid structure with the grid representation, we can accelerate the process of ray marching using the HDDA algorithm as introduced, while benefiting from the modest memory consumption provided by the VDB data structure. We run the neural radiance fields on a GeForce RTX 4090 GPU on one scene in the Waymo Open Dataset <cit.>. The training speed of ours compared to iNGP is 26.1it/s vs. 26.4it/s, while the inference speed of ours compared to iNGP is 1.90FPS vs 1.62FPS. As is initialized from LiDAR point clouds and offers more precise locations of the samples, we reached a test PSNR of 27.07, in comparison to 25.89 for iNGP. 
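The benefit of grid-accelerated ray marching in the radiance-field setting can be pictured with the simplified sketch below, which keeps only the samples that land in occupied voxels of a binary occupancy grid. It is a fixed-step, dense-NumPy stand-in of our own construction: the actual HDDA traversal described earlier walks the sparse hierarchy voxel by voxel instead of sampling at a fixed interval, and the grid it walks is a VDB structure rather than a dense array.

```python
import numpy as np

def march_ray(origin, direction, occupancy, voxel_size, t_max, step):
    """Toy empty-space-skipping ray marcher over a dense binary occupancy grid.

    Samples the ray at a fixed step and keeps only positions whose containing
    voxel is marked occupied, mimicking how an occupancy/index grid lets a
    radiance field skip empty space before querying its MLP.
    """
    ts = np.arange(0.0, t_max, step)
    pts = origin[None, :] + ts[:, None] * direction[None, :]
    ijk = np.floor(pts / voxel_size).astype(int)
    inside = np.all((ijk >= 0) & (ijk < np.array(occupancy.shape)), axis=1)
    keep = np.zeros(len(ts), dtype=bool)
    keep[inside] = occupancy[tuple(ijk[inside].T)]          # occupied voxels only
    return ts[keep], pts[keep]                              # samples forwarded to the field

if __name__ == "__main__":
    occ = np.zeros((64, 64, 64), dtype=bool)
    occ[24:40, 24:40, 24:40] = True                         # a single occupied block
    t, p = march_ray(np.zeros(3), np.ones(3) / np.sqrt(3.0),
                     occ, voxel_size=1.0 / 64, t_max=1.7, step=0.005)
    print(f"kept {len(t)} of {len(np.arange(0.0, 1.7, 0.005))} samples")
```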
§.§ Example Applications We demonstrate that is a practical tool for building real-world 3D deep learning applications. Here we present several applications of , some of which are reimplementations of published works. These include large-scale surface reconstruction from point clouds using NKSR <cit.>, high-resolution hierarchical object and scene generation using XCube <cit.>, large-scale Neural Radiance Fields, and deep-learning-based simulation super-resolution. §.§.§ Large-scale Surface Reconstruction NKSR <cit.> uses a sparse voxel hierarchy to encode a neural field of features which are used to perform a learned kernel ridge regression to solve a variational surface reconstruction problem from oriented point clouds. NKSR achieves state-of-the-art reconstruction and generalization results. We fully re-implemented NKSR using , replacing the convnet with our implementation, the meshing with our marching cubes implementation, and implementing a batched Kernel Ridge Regression solver as a C++ extension (a simplified sketch of this regression step appears at the end of this section). We remark that this extension is a single file consisting of a few hundred lines of code which only depends on PyTorch and . Figure <ref> shows a mesh reconstructed using our implementation from 350 million input points. This reconstruction took 2 minutes on 8 V100 GPUs. §.§.§ 3D Generative Models We used to re-implement XCube <cit.>, a 3D generative model for high-resolution voxel hierarchies of objects and scenes. XCube benefits directly from using , enabling it to train on datasets with substantially larger footprints and higher spatial resolution while consuming less GPU memory. With the support of , XCube can be scaled up to a spatial extent of 100m × 100m at 10cm resolution. Figure <ref> demonstrates unconditional generation of high-resolution 3D objects trained using the Objaverse <cit.> dataset and large-scale outdoor scenes trained on the Waymo <cit.> dataset. §.§.§ Large-scale Neural Radiance Fields can be used to support large-scale Neural Radiance Fields by providing a memory-efficient acceleration structure for ray marching with spatial skipping. Figure <ref> provides two showcases of this application, including a capture of a 1 km² area of the Laguna Seca Raceway and the standard Garden scene from the Mip-NeRF 360 dataset <cit.>, well known in the NeRF literature. §.§.§ Simulation Super-Resolution can enable novel applications of super-resolution techniques to inherently sparse 3D data, such as that produced by physical simulations operating in unbounded domains. Previous approaches can be memory-constrained and computationally prohibitive for large domains if approached with dense data structures and operators. Figure <ref> shows preliminary results of ongoing work that trains fully convolutional super-resolution networks such as DCSRN <cit.> and 3D-FSRCNN <cit.> with operators implemented in . Currently in development are super-resolution models for several simulation domains, including muscle and skin dynamics as well as fluid simulations.
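For readers unfamiliar with the kernel ridge regression step mentioned in the surface-reconstruction application above, the snippet below is a minimal batched solver in PyTorch under a plain Gaussian-kernel assumption. The real extension solves a learned-kernel system defined over a sparse voxel hierarchy in C++; this toy version only conveys the linear-algebra core, and the kernel choice, lengthscale, and regularization strength are placeholders of our own.

```python
import torch

def batched_kernel_ridge(x, y, x_query, lengthscale=0.1, lam=1e-3):
    """Solve a batch of kernel ridge regressions and evaluate at query points.

    x: (B, N, 3) support points, y: (B, N) target values, x_query: (B, Q, 3).
    Uses a Gaussian kernel: alpha = (K + lam*I)^{-1} y, then f(q) = K_q @ alpha.
    """
    def gauss_kernel(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * lengthscale ** 2))

    K = gauss_kernel(x, x)                                            # (B, N, N)
    eye = torch.eye(K.shape[-1], device=K.device).expand_as(K)
    alpha = torch.linalg.solve(K + lam * eye, y.unsqueeze(-1))        # batched linear solve
    return (gauss_kernel(x_query, x) @ alpha).squeeze(-1)             # (B, Q) predictions

if __name__ == "__main__":
    B, N, Q = 4, 256, 32
    x, xq = torch.rand(B, N, 3), torch.rand(B, Q, 3)
    y = torch.sin(4 * x[..., 0])                                      # synthetic targets
    print(batched_kernel_ridge(x, y, xq).shape)                       # torch.Size([4, 32])
```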
Furthermore, has a significantly more comprehensive suite of features than existing frameworks, runtime performance that is on par with or superior to the state of the art, and memory efficiency that exceeds the state of the art by a large margin. uses a single, novel VDB IndexGrid data structure to accelerate all operations, making it composable and easily extensible. We demonstrated the effectiveness of via extensive quantitative benchmarks and qualitative demonstrations on real-world 3D learning use cases, showing that enables high-performance deep learning on large-scale 3D data. In the future, we plan to extend with more differentiable operators such as hierarchical dual marching cubes and particle/blob-to-grid conversion functions (for differentiable physics and particle rendering such as Gaussian Splatting <cit.>). We further plan to develop a high-level utility library of neural network architectures for common tasks that can be used off-the-shelf for downstream applications. Beyond new features, an exciting avenue of future work which can lead to even greater sparse convolution performance is to dispatch the optimal kernel on a per-leaf basis depending on the local sparsity pattern. Finally, we plan to release the code for as open-source software expeditiously following publication.
http://arxiv.org/abs/2407.03126v1
20240703140743
Game-Theoretic Protection Adoption Against Networked SIS Epidemics
[ "Abhisek Satapathi", "Ashish R. Hota" ]
eess.SY
[ "eess.SY", "cs.GT", "cs.SI", "cs.SY" ]
§ ABSTRACT In this paper, we investigate game-theoretic strategies for containing spreading processes on large-scale networks. Specifically, we consider the class of networked susceptible-infected-susceptible (SIS) epidemics where a large population of agents strategically choose whether to adopt partially effective protection. We define the utilities of the agents, which depend on the degree of the agent, its individual infection status and action, as well as the overall prevalence of the epidemic and the strategy profile of the entire population. We further present the coupled dynamics of epidemic evolution and strategy update, where the latter is assumed to follow the replicator dynamics. By relying on timescale separation arguments, we first derive the optimal strategy of protection adoption by the agents for a given epidemic state, and then present the reduced epidemic dynamics. The existence and uniqueness of the endemic equilibrium are rigorously characterized and form the main result of this paper. Finally, we present extensive numerical results to highlight the impacts of heterogeneous node degrees, infection rates, cost of protection adoption, and effectiveness of protection on the epidemic prevalence at the equilibrium. Spreading processes, susceptible-infected-susceptible epidemic, game theory, equilibrium, large-scale networks. Game-Theoretic Protection Adoption Against Networked SIS Epidemics Abhisek Satapathi and Ashish R. Hota, Senior Member, IEEE A. Satapathi and A. R. Hota are with the Department of Electrical Engineering, Indian Institute of Technology (IIT) Kharagpur, West Bengal, India, 721302. E-mail: abhisek.ee@iitkgp.ac.in, ahota@ee.iitkgp.ac.in ========================================================================================================================================================================================================================================================================================== § INTRODUCTION Effective containment of spreading processes, such as infectious diseases spreading in society <cit.>, opinions spreading via social networks <cit.>, and viruses spreading on computer networks <cit.>, has proven to be challenging for two main reasons. First, deploying centralized control strategies is often impractical due to the large-scale nature of the networked system, and hence, decentralized strategies need to be developed <cit.>. Second, the entities present in the network are often heterogeneous in terms of their connectivity patterns <cit.>. In order to address the above challenges, we propose a game-theoretic model where a large number of heterogeneous agents strategically choose to adopt (partially effective) protection measures against the class of Susceptible-Infected-Susceptible (SIS) epidemics on networks. While mathematical modeling and analysis of epidemics has a long history <cit.>, the recent COVID-19 pandemic has led to renewed interest in this topic. In particular, a substantial body of recent work has examined the impacts of decentralized containment strategies against epidemics in the framework of game theory; see <cit.> for a recent comprehensive review. Two broad classes of containment strategies have been examined using game theory. In the first line of work, the decision-makers or agents adopt vaccination against the disease <cit.> by evaluating the trade-off between the cost of the vaccine and the probability of becoming infected in the steady state of the SIS epidemic dynamics.
While earlier works (such as <cit.>) considered a homogeneous population of agents, later works (such as <cit.>) considered agents with heterogeneous degrees. In the second line of work, agents adopt protection measures, such as wearing masks, social distancing, etc., against the epidemic. While vaccination is typically a one-time irreversible decision, protection adoption is often reversible, and the agents can revise their action or strategy dynamically as the epidemic builds up or wanes. As a result, the evolution of the epidemic and the actions of the agents often evolve in a comparable time-scale. Accordingly, several recent papers, including <cit.>, have analyzed the dynamics of coupled evolution of the SIS epidemic and protection adoption behavior, its equilibria and stability for a large population of homogeneous agents, while <cit.> have studied the above phenomenon for agents on a network with heterogeneous node degrees. A few related works <cit.> have also examined the coupled evolution of opinion and epidemic as well as opinion and action <cit.>. Finally, similar investigations <cit.> have also been carried out for the class of Susceptible-Infected-Recovered (SIR) epidemics and its variants where recovery grants permanent immunity from future infections. In this paper, we generalize the settings examined in prior works <cit.> to include networked interaction among agents, and further generalize the setting in <cit.> to account for partially effective protection. Specifically, each agent is either susceptible or infected at a given point of time, and chooses whether to adopt protection or remain unprotected. For a susceptible individual, adopting protection gives partial immunity from the disease, while an infected protected individual causes new infection with a smaller probability compared to an infected individual who does not adopt protection. We assume that the agents are divided into different sub-populations depending on their degree, and that the epidemic evolution is governed by the degree-based mean-field (DBMF) approximation of the SIS epidemic <cit.>. For the proposed setting, we define the utility of each sub-population which depends on their individual infection status, chosen action and the disease state and strategies adopted by the entire population. We formulate the coupled disease-behavior dynamics, and leverage time-scale separation arguments to derive the Nash equilibrium strategies for a given epidemic state, i.e., we assume that the evolution of protection adoption is faster than the evolution of the disease. Thus, our work is complementary to the closely related setting <cit.> which assumed the epidemic dynamics to be the faster dynamics and the behavior adoption to be the slower dynamics. We then derive the (slower) dynamics of epidemic evolution when all agents adopt their equilibrium strategies; this dynamics takes the form of a switched or hybrid system. We rigorously prove the existence and uniqueness of its equilibrium by leveraging several structural properties of the endemic equilibrium. Numerical simulations show that the coupled dynamics converges to the equilibrium. In addition, we numerically illustrate the impacts of heterogeneous node degrees, degree-dependent infection rates, and cost of protection adoption on the expected fraction of infected nodes at the equilibrium. 
§ NETWORKED SIS EPIDEMIC UNDER PARTIALLY EFFECTIVE PROTECTION §.§ Degree-Based Mean-Field Approximation We consider a large population of agents, where each agent has a specific degree (number of neighbors) from the set ∈{1,2,…,d^max}. Let y^d(t) ∈ [0,1] be the proportion of agents with degree d ∈ that is infected at time t, with 1-y^d(t) be the proportion that is susceptible. Let z^d_(t) and z^d_(t) denote the proportion of susceptible and infected agents with degree d that remain unprotected at time t, respectively. Let 𝐳 := {z^d_,z^d_}_d∈ the strategy profile of the entire population, and let 𝐱 := {y^d,z^d_,z^d_}_d∈ denote the (time-varying) social state. An infected individual with degree d transmits the infection with probability ^d ∈ (0,1) when it adopts protection, and with probability ^d ∈ (0,1) when it is unprotected. A susceptible individual that adopts protection is α∈ (0,1) times (less) likely to become infected compared to a susceptible unprotected individual. Finally, an infected individual recovers with probability γ∈ (0,1). Following the DBMF approximation of the SIS epidemic model, the infected proportion y^d(t) evolves in continuous-time as ẏ^d(t) = -γ y^d(t) + (1-y^d(t)) (z^d_(t) + α (1-z^d_(t))) d Θ(𝐱(t)), where Θ(𝐱) is the probability with which a randomly chosen neighbor of a node with degree d transmits infection to it. The quantity (z^d_(t) + α (1-z^d_(t))) captures the fact that among the susceptible proportion, z^d_(t) fraction does not adopt protection and encounters an infection probability given by d Θ(𝐱(t)), while 1-z^d_(t) fraction adopts protection and encounters a smaller infection probability α d Θ(𝐱(t)). We define Θ(𝐱) := ∑_d ∈[dm_d/d^𝚊𝚟𝚐 (^dz^d_+^d(1-z^d_)) y^d ], where m_d is the proportion of nodes with degree d in the entire population and d^𝚊𝚟𝚐 = ∑_d ∈ dm_d. Without loss of generality, we assume m_d > 0 for all d ∈. The first term specifies the probability of a randomly chosen neighbor having degree d in accordance with the configuration model <cit.>. The second term denotes the probability of becoming infected if it comes in contact with an infected neighbor of degree d which depends on the strategy adopted by infected agents having degree d. The third term denotes the probability of the neighbor with degree d being infected in the first place. We now analyze the steady-state of the epidemic dynamics (<ref>) for a given strategy profile 𝐳 of the population. By setting ẏ^d(t)=0, we obtain γ y^d = (1-y^d) (z^d_ + α (1-z^d_)) d Θ(𝐳)) (γ + (z^d_ + α (1-z^d_)) d Θ(𝐳)) y^d = (z^d_ + α (1-z^d_)) d Θ(𝐳) y^d = (z^d_ + α (1-z^d_)) d Θ(𝐳)/γ + (z^d_ + α (1-z^d_)) d Θ(𝐳) Θ(𝐳) = ∑_d ∈[dm_d/d^𝚊𝚟𝚐 (^dz^d_+^d(1-z^d_)) × (z^d_ + α (1-z^d_)) d Θ(𝐳)/γ + (z^d_ + α (1-z^d_)) d Θ(𝐳)] Θ(𝐳) [1- ∑_d ∈[dm_d/d^𝚊𝚟𝚐 (^dz^d_+^d(1-z^d_)) × (z^d_ + α (1-z^d_)) d /γ + (z^d_ + α (1-z^d_)) d Θ(𝐳)]] = 0. Note that (<ref>) is obtained by substituting the expression of y^d obtained in (<ref>) in the definition of Θ(𝐳) given in (<ref>). It now follows that Θ(𝐳)=0 is always a solution of (<ref>) which corresponds to the disease-free equilibrium. In addition, there may be nonzero solution(s) Θ(𝐳) of (<ref>) depending on the strategy profile 𝐳 and other parameters as formalized in the following lemma. Equation (<ref>) admits a nonzero solution Θ^⋆(𝐳) if and only if 1 < 1/γ∑_d ∈[d^2m_d/d^𝚊𝚟𝚐 (^dz^d_+^d(1-z^d_)) (z^d_ + α (1-z^d_))]. Furthermore, Θ^⋆(𝐳)=1 is not a solution of (<ref>). 
Note that for Θ^⋆(𝐳) > 0 to be a solution of (<ref>), we must have 1 = ∑_d ∈[dm_d/d^𝚊𝚟𝚐 (^dz^d_+^d(1-z^d_)) × (z^d_ + α (1-z^d_)) d /γ + (z^d_ + α (1-z^d_)) d Θ^⋆(𝐳)]. Note that the R.H.S. is monotonically decreasing in Θ^⋆(𝐳). At Θ^⋆(𝐳) = 1, we have ∑_d ∈[dm_d/d^𝚊𝚟𝚐 (^dz^d_+^d(1-z^d_)) (z^d_ + α (1-z^d_)) d /γ + (z^d_ + α (1-z^d_)) d] < ∑_d ∈[dm_d/d^𝚊𝚟𝚐 (^dz^d_+^d(1-z^d_)) ] < ∑_d ∈dm_d/d^𝚊𝚟𝚐 = 1. Therefore, Θ^⋆(𝐳) = 1 is not a solution of (<ref>). Therefore, a nonzero solution exists if and only if the R.H.S. exceeds 1 at Θ^⋆(𝐳) = 0. In other words, 1 < 1/γ∑_d ∈[d^2m_d/d^𝚊𝚟𝚐 (^dz^d_+^d(1-z^d_)) (z^d_ + α (1-z^d_))]. This concludes the proof. In the following subsection, we analyze the equilibrium behavior when agents adopt protection in a game-theoretic manner. §.§ Game-Theoretic Protection Adoption We now define the payoff vector of the agents. A susceptible agent aims to balance the trade-off between the cost of adopting protection, denoted by the parameter > 0, and the expected loss upon becoming infected. The expected loss is computed as the product of loss upon infection, captured by a parameter L>0, and the instantaneous probability of becoming infected. The latter quantity depends on the current social state 𝐱, degree of the agent, and the action chosen by the agent. Formally, for a susceptible agent with degree d, we define its payoffs to be F^d_(𝐱) = - L d Θ(𝐱), F^d_(𝐱) = - - L α d Θ(𝐱), if the agent remains unprotected and adopts protection, respectively. In particular, when the agent adopts protection, it encounters an additional cost though its probability of becoming infected reduces due to the multiplying factor α∈ (0,1). In contrast, an infected agent is already infected, and there is no immediate risk of becoming infected. Consequently, we define F^d_(𝐱) = - c_, F^d_(𝐱) = - c_, to be the payoff of an infected agent if it remains unprotected and adopts protection, respectively. The parameter c_ > 0 captures the penalty imposed on an infected agent if it does not adopt protection (or adhere to quarantine norms), while c_ > 0 captures the inconvenience caused due to adopting protection while being sick. Note that the payoffs do not depend on the degree d or social state 𝐱. We further assume c_ > c_, which indicates that infected agents prefer to adopt protection. We assume that agents revise their protection adoption strategies following the replicator dynamics <cit.>. We further assume that agents only replicate the strategies of other agents who have the same degree and infection status, i.e., susceptible individuals with degree d only replicate the strategies of other susceptible individuals of the same degree d (likewise for infected individuals). Consequently, the proportion of unprotected susceptible nodes of degree d evolves as ż^d_𝚂(t) = z^d_𝚂(t)(1-z^d_𝚂(t)) [ F^d_(𝐱(t)) - F^d_(𝐱(t)) ] = z^d_𝚂(t)(1-z^d_𝚂(t)) [ - L(1-α) d Θ(𝐱(t)) ]. Similarly, for infected individuals, we have ż^d_𝙸(t) = z^d_𝙸(t)(1-z^d_𝙸(t)) (c_-c_). Thus, equations (<ref>), (<ref>) and (<ref>) characterize the coupled evolution of the epidemic and population states at the same time-scale. The above set of dynamics has a large number of equilibrium points due to the structure of the replicator dynamics which induces stationary points at both 0 and 1. In order to obtain further insights into the behavior of the above system, and characterize the equilibrium points, we analyze the coupled dynamics under timescale separation. 
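A minimal sketch of how the coupled dynamics above can be integrated numerically is given below, using forward Euler. The symbols are written out explicitly: y is the per-degree infected fraction, zS and zI the unprotected fractions among susceptible and infected agents, and beta_U, beta_P the transmission probabilities of unprotected and protected infected agents. The drift terms follow directly from the payoff differences defined above (c_P - L(1-α)dΘ for susceptibles and c_IP - c_IU for infected agents); the variable names, step size, and the placeholder parameter values in the usage example are our own and are not taken from the paper's tables.

```python
import numpy as np

def simulate_coupled(d, m, beta_U, beta_P, alpha, gamma, c_P, L, c_IU, c_IP,
                     y0, zS0, zI0, dt=0.01, steps=150_000, eps=1.0):
    """Forward-Euler integration of the coupled DBMF epidemic / replicator dynamics.

    d, m:           arrays of degrees and their proportions (sum(m) = 1)
    beta_U, beta_P: per-degree transmission probabilities of unprotected /
                    protected infected agents
    eps:            timescale-separation factor for the strategy updates
                    (eps = 1 recovers the same-timescale dynamics above)
    """
    d_avg = np.sum(d * m)
    y, zS, zI = y0.copy(), zS0.copy(), zI0.copy()
    for _ in range(steps):
        theta = np.sum(d * m * (beta_U * zI + beta_P * (1 - zI)) * y) / d_avg
        infect = (zS + alpha * (1 - zS)) * d * theta          # effective infection pressure
        dy = -gamma * y + (1 - y) * infect
        dzS = zS * (1 - zS) * (c_P - L * (1 - alpha) * d * theta) / eps
        dzI = zI * (1 - zI) * (c_IP - c_IU) / eps
        y, zS, zI = y + dt * dy, zS + dt * dzS, zI + dt * dzI
    return y, zS, zI

if __name__ == "__main__":
    d, m = np.arange(1.0, 5.0), np.full(4, 0.25)
    y, zS, zI = simulate_coupled(
        d, m, beta_U=np.full(4, 0.8), beta_P=np.full(4, 0.4), alpha=0.3,
        gamma=0.4, c_P=10.0, L=100.0, c_IU=5.0, c_IP=1.0,
        y0=np.full(4, 0.1), zS0=np.full(4, 0.5), zI0=np.full(4, 0.5))
    print(np.round(y, 3), np.round(zS, 3), np.round(zI, 3))
```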
Motivated by the past work <cit.>, we focus on the case in which the replicator dynamics evolves faster than the epidemic dynamics. This is a reasonable assumption as agents are likely to adjust their behavior faster than the spread of the epidemic due to increased awareness derived from conventional as well as social media. The coupled dynamics can now be written as a slow-fast system given by ẏ^d(t) = -γ y^d(t) + (1-y^d(t)) (z^d_(t) + α (1-z^d_(t))) d Θ(𝐱(t)), ϵż^d_𝚂(t) = z^d_𝚂(t)(1-z^d_𝚂(t)) [ - L(1-α) d Θ(𝐱(t)) ], ϵż^d_𝙸(t) = z^d_𝙸(t)(1-z^d_𝙸(t)) (c_-c_), for all d ∈, and where ϵ∈ (0,1) is a timescale separation variable <cit.>. We first characterize the behavior of the agents (captured by (<ref>) and (<ref>)) for a given infection state 𝐲 = {y^d}_d ∈. It follows from (<ref>) and our assumption c_ > c_ that infected agents strictly prefer to adopt protection irrespective of the social state, and as a result, z^d_ = 0 is the unique stable equilibrium point of (<ref>). From the R.H.S. of (<ref>), it follows that when z^d_ = 0, Θ(𝐱) does not depend on z^d_ when 𝐲 is specified. Therefore, with a slight abuse of notation, we define Θ(𝐲) := ∑_d ∈[dm_d/d^𝚊𝚟𝚐^d y^d ]. Now, for a susceptible agent, adopting protection is strictly preferred if and only if F^d_(𝐱) < F^d_(𝐱) - L d Θ(𝐲) < - - L α d Θ(𝐲) < L(1-α)d Θ(𝐲) Θ(𝐲) > /L(1-α)d =: Θ^d_th. In other words, the optimal strategy of a susceptible agent with degree d depends on whether Θ(𝐲) exceeds the degree-specific threshold Θ^d_th defined in (<ref>). If Θ(𝐲) > Θ^d_th, then z^d_ = 0 is the stable equilibrium of (<ref>), while if Θ(𝐲) < Θ^d_th, then the z^d_ = 1 is the stable equilibrium of (<ref>). If Θ(𝐲) = Θ^d_th, then any z^d_∈ [0,1] could emerge as the equilibrium of (<ref>). Thus, at a given 𝐲, the replicator dynamics (which is the fast system since ϵ<1) associated with both infected and susceptible agents has a unique stable equilibrium point except at a point with measure zero. We now state the reduced dynamics for the epidemic (which is the slow system) as Θ(𝐲(t)) < Θ^d_th: ẏ^d(t) = -γ y^d(t) + (1-y^d(t)) d Θ(𝐲(t)), Θ(𝐲(t)) = Θ^d_th: ẏ^d(t) ∈{ -γ y^d(t) + (1-y^d(t)) × (z^d_ + α (1-z^d_)) d Θ(𝐲(t)) | z^d_∈ [0,1] }, Θ(𝐲(t)) > Θ^d_th: ẏ^d(t) = -γ y^d(t) + (1-y^d(t)) α d Θ(𝐲(t)). The above dynamics approximates the coupled dynamics (<ref>) as ϵ→ 0, i.e., when individuals adopt protection in a strategic manner to maximize their payoffs as a function of the current epidemic state. Note that (<ref>) admits a Filippov solution <cit.> solution because the R.H.S. of (<ref>) is measurable and bounded. The dynamical system (<ref>) can be viewed as the dynamics of epidemic evolution when central authorities restrict interaction of nodes of a certain degree d whenever Θ(𝐲(t)) exceeds the threshold Θ^d_th. § ANALYSIS OF EQUILIBRIUM In this section, we characterize the existence and uniqueness of equilibrium of the dynamics (<ref>). First observe that the thresholds Θ^d_th, defined in (<ref>), are monotonically decreasing in the degree d, i.e., agents with a larger degree switch to adopting protection for a smaller value of Θ(𝐲(t)). Let d_min be the smallest degree for which Θ^d_min_th < 1. Then, we divide the region [0,1] into (d^max-d_min+2) number of intervals, denoted {_d^max+1,_d^max,…,_d_min} such that _d^max+1 := [0,Θ^d^max_th), _d_min := (Θ^d_min_th,1], _d := (Θ^d_th,Θ^d-1_th), for d ∈{d^max,d^max-1,…,d_min+1}. We will now closely examine the dynamics (<ref>) when Θ(𝐲(t)) belongs to one of the intervals as stated above. 
If Θ(𝐲(t)) ∈_d^⋆, then Θ(𝐲(t)) > Θ^d_th for all degree d ≥ d^⋆, and Θ(𝐲(t)) < Θ^d_th for all degree d < d^⋆. For this particular regime, the dynamics (<ref>) can be stated as: d < d^⋆: ẏ^d(t) = -γ y^d(t) + (1-y^d(t)) × d ∑_d' ∈[d'm_d'/d^𝚊𝚟𝚐^d' y^d'(t) ], d ≥ d^⋆: ẏ^d(t) = -γ y^d(t) + (1-y^d(t)) × α d ∑_d' ∈[d'm_d'/d^𝚊𝚟𝚐^d' y^d'(t) ]. Before analyzing the equilibria of (<ref>), we prove the following result on the equilibria of (<ref>). We first define the following quantity: (d^⋆) := ∑^d^⋆-1_d=1d^2m_d^d/d^𝚊𝚟𝚐γ + ∑^d^max_d = d^⋆α d^2m_d^d/d^𝚊𝚟𝚐γ. Consider the dynamics (<ref>) for a specified d^⋆∈{d_min,…,d^max+1}. We have the following characterization of its equilibria. * If (d^⋆) ≤ 1, then the disease-free equilibrium is the only equilibrium of (<ref>). * If (d^⋆) > 1, then in addition to the disease-free equilibrium, there exists a unique nonzero endemic equilibrium of (<ref>). The proof is presented in Appendix <ref>, and leverages connection between the dynamics in (<ref>), and the N-Intertwined mean-field approximation (NIMFA) of the networked SIS epidemic dynamics <cit.>. When (d^⋆) > 1, we denote the endemic equilibrium with 𝐲_𝙴𝙴(d^⋆). Following (<ref>), the quantity Θ(𝐲_𝙴𝙴(d^⋆)) at the endemic equilibrium is the unique value satisfying 1 = ∑^d^⋆-1_d =1[dm_d/d^𝚊𝚟𝚐d ^d/γ + d Θ(𝐲_𝙴𝙴(d^⋆))] + ∑^d^max_d =d^⋆[dm_d/d^𝚊𝚟𝚐α d ^d/γ + α d Θ(𝐲_𝙴𝙴(d^⋆))]. If (d^⋆) ≤ 1, we define Θ(𝐲_𝙴𝙴(d^⋆))=0. The following lemma establishes monotonicity of (d^⋆) and Θ(𝐲_𝙴𝙴(d^⋆)). The quantities (d^⋆) and Θ(𝐲_𝙴𝙴(d^⋆)) are monotonically increasing in d^⋆. Note that as d^⋆ increases, some entries move from the second summation to the first summation in the definition of (d^⋆), and the terms in the first summation are larger because α∈ (0,1). Similarly, if d^⋆ increases and Θ(𝐲_𝙴𝙴) remains unchanged, then the R.H.S. of (<ref>) increases. In order to achieve R.H.S. equal to 1, Θ(𝐲_𝙴𝙴(d^⋆)) needs to increase. We are now ready to establish the existence and uniqueness of the equilibrium of the dynamics (<ref>) by leveraging the results established above. For the dynamics (<ref>), we have the following characterization of its equilibria. * If (d^max+1) ≤ 1, then the disease-free equilibrium is the only equilibrium of (<ref>). * Now suppose (d^max+1) > 1. Let Θ^d^max+1_th:=0 for convenience. Let d^𝚎𝚚∈{d_min,d_min+1,…,d^max,d^max+1} be the smallest degree for which Θ(𝐲_𝙴𝙴(d^𝚎𝚚)) > Θ^d^𝚎𝚚_th. Then, we have the following two cases. * If Θ(𝐲_𝙴𝙴(d^𝚎𝚚)) ∈_d^𝚎𝚚, then 𝐲_𝙴𝙴(d^𝚎𝚚) is the unique endemic equilibrium of (<ref>). * If Θ(𝐲_𝙴𝙴(d^𝚎𝚚)) ≥Θ^d^𝚎𝚚-1_th, then there exists a unique endemic equilibrium with {y^d}_d ∈ satisfying (<ref>) with Θ(𝐲) = Θ^d^𝚎𝚚-1_th. The proof is presented in Appendix <ref>, and it relies on the monotonicity properties established in Lemma <ref> as well as Proposition <ref>. Note that parameters such as and L affect the outcome by influencing the thresholds Θ^d_th, while α affects both the thresholds as well as Θ(𝐲_𝙴𝙴(d)). In Case 2(a) of the theorem, all susceptible agents with degree d ≥ d^𝚎𝚚 adopt protection and all susceptible agents with degree d < d^𝚎𝚚 remain unprotected. In contrast, in Case 2(b), the proportion of susceptible agents with degree d^𝚎𝚚 that adopt protection is strictly between 0 and 1. It is easy to see that the endemic equilibrium identified in Theorem <ref> together with the above protection adoption scheme constitutes an equilibrium of the coupled dynamics (<ref>), (<ref>) and (<ref>). 
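As a complement to the proof, the equilibrium selection in the theorem can be carried out numerically: for each candidate switching degree d* solve the scalar fixed-point equation for Θ(𝐲_𝙴𝙴(d*)), then pick the smallest degree whose solution exceeds its threshold and cap the result at the neighboring threshold (Case 2(a) versus Case 2(b)). The sketch below does this with a bracketing root finder from SciPy; the function and parameter names are ours, beta_P denotes the per-degree transmission probability of protected infected agents, and the usage values are placeholders rather than the paper's.

```python
import numpy as np
from scipy.optimize import brentq

def equilibrium_theta(d, m, beta_P, alpha, gamma, c_P, L):
    """Locate the equilibrium value of Theta following the theorem above."""
    d_avg = float(np.sum(d * m))
    thresh = {int(k): c_P / (L * (1 - alpha) * k) for k in d}   # Theta^d_th
    thresh[int(d.max()) + 1] = 0.0                              # Theta^{d_max+1}_th := 0

    def residual(theta, d_star):            # RHS of the fixed-point equation minus 1
        prot = d >= d_star                  # susceptibles with degree >= d_star protect
        r = np.sum(d[~prot] * m[~prot] * d[~prot] * beta_P[~prot]
                   / (gamma + d[~prot] * theta))
        r += np.sum(d[prot] * m[prot] * alpha * d[prot] * beta_P[prot]
                    / (gamma + alpha * d[prot] * theta))
        return r / d_avg - 1.0

    def theta_EE(d_star):
        if residual(0.0, d_star) <= 0.0:    # R(d_star) <= 1: disease-free only
            return 0.0
        return brentq(residual, 1e-12, 1.0 - 1e-12, args=(d_star,))

    d_min = min((k for k in d if thresh[int(k)] < 1.0), default=int(d.max()) + 1)
    for d_star in (k for k in sorted(thresh) if k >= d_min):
        if theta_EE(d_star) > thresh[d_star]:                   # this is d_eq
            upper = thresh.get(d_star - 1, 1.0)                 # Theta^{d_eq-1}_th
            return min(theta_EE(d_star), upper)                 # Case 2(a) vs. 2(b)
    return 0.0                                                  # disease-free equilibrium

if __name__ == "__main__":
    d, m = np.arange(1.0, 21.0), np.full(20, 0.05)              # uniform degree distribution
    print(equilibrium_theta(d, m, beta_P=np.full(20, 0.3),
                            alpha=0.3, gamma=0.4, c_P=10.0, L=100.0))
```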
§ NUMERICAL RESULTS In this section, we numerically illustrate the convergence of the coupled dynamics and characteristics of the endemic equilibrium as a function of different parameters of the game, including the cost of adopting protection, infection rate, effectiveness of protection, and heterogeneous degree distributions. §.§ Convergence to Endemic Equilibrium First we show that the coupled dynamics converges to the endemic equilibrium as established in Theorem <ref>. The values of different parameters used in this subsection are reported in Table <ref>. In particular, the values of y^d(0) and z^d_𝚂(0) denote the initial proportions of infected agents and unprotected susceptible agents. These initial conditions are used to compute the trajectories for all degrees using an Euler discretization of (<ref>) with discretization parameter 0.01 and ϵ=1. The infection rates are identical for all degrees. We consider a network with the set of degrees = {1,2,3,4} with the proportion of nodes with each of the above degrees being 0.25, i.e., m_d = 0.25 for all d ∈. The values of the thresholds Θ^d_th for two different values of c_𝙿 as well as the values Θ(𝐲_𝙴𝙴(d)) for d ∈{2,3,4,5} are reported in Table <ref>. It follows from Table <ref> that when c_𝙿=10, d=3 is the smallest degree for which Θ(𝐲_𝙴𝙴(3)) = 0.4231 > 0.33 = Θ^3_th. Furthermore, 0.4231 ∈ℐ_3 = (0.33,0.5). Consequently, the value of Θ(𝐲) at the endemic equilibrium should be 0.4231 following Theorem <ref>. This is precisely what we observe in the top row of Figure <ref>. The plots in the left panel of the top row of Figure <ref> show the evolution of the infected proportion y^d(t) for different degrees of the network as well as the expected fraction of infected nodes y^𝚊𝚟𝚐(t) = ∑_d ∈ m_d y^d(t), shown as the thick black line. The infected proportions converge to the unique endemic equilibrium within 1500 time steps. The plots in the middle panel show the values of the thresholds Θ^d_th and Θ(𝐲(t)), and indicate that Θ(𝐲(t)) converges to the value 0.4231, which lies in the interval ℐ_3 = (0.33,0.5). The plots in the right panel show the evolution of the proportion of unprotected susceptible agents z^d_𝚂(t) for different degrees of the network. At the onset of the pandemic, when Θ(𝐲(t)) was smaller than the thresholds for all the degrees, the quantity z^d_𝚂(t) increased to values close to 1 for all d. Eventually, as Θ(𝐲(t)) exceeded the thresholds Θ^d_th for d = 4 and d=3, z^4_𝚂(t) and z^3_𝚂(t) started to decline and eventually converged to 0 in accordance with the discussion in Section <ref>. We now examine the case where c_𝙿=8. According to Table <ref>, d=3 is the smallest degree for which Θ(𝐲_𝙴𝙴(3)) = 0.4231 > 0.2667 = Θ^3_th. However, in this case, 0.4231 ∉ℐ_3 = (0.2667,0.4). Consequently, the value of Θ(𝐲) at the endemic equilibrium should be 0.4 following Theorem <ref>. This is precisely what we observe in the bottom row of Figure <ref>. In particular, the plot in the middle panel shows that Θ(𝐲(t)) converges to Θ^2_th. The plot in the right panel shows that z^d_𝚂(t) converges to 0 for d = 3 and d=4, while it converges to 1 for d=1. However, z^2_𝚂(t) converges to an intermediate value. Thus, the social state is shown to converge to the unique endemic equilibrium as postulated in Theorem <ref> in both cases. The convergence is slower in this case compared to the case with c_𝙿=10. §.§ Heterogeneous Degree Distribution and Infection Rates In the previous subsection, the infection rate β^d_𝙿 and the proportion m_d were identical for all degrees.
We now illustrate the impact of heterogeneity in these parameters. Let = {1,2,3,4} as before. We consider the following two cases. * Case 1: β^d_𝙿 = 0.1 for d ∈{1,2}, and β^d_𝙿 = 0.6 for d ∈{3,4}. * Case 2: β^d_𝙿 = 0.6 for d ∈{1,2}, and β^d_𝙿 = 0.1 for d ∈{3,4}. The values of other parameters are given in Table <ref>. We induce heterogeneity in the degree distribution by assuming that m_2 = m_3 = 0.05, and by varying m_4 from 0.05 to 0.85. The value of m_1 is given by 1-m_2-m_3-m_4. Figure <ref> shows the variation of the expected fraction of infected nodes (y^𝚊𝚟𝚐) at the endemic equilibrium for different values of m_4 and for both the cases of infection rates stated above. For the infection rates stated in Case 1, an increase in m_4 leads to an increase in y^𝚊𝚟𝚐 at the endemic equilibrium. This is expected since nodes with higher degree have a larger infection rate, and as the proportion of nodes with degree d=4 increases, the overall infection prevalence shows an increase. In contrast, for Case 2, an increase in m_4 leads to an increase in the proportion of nodes with smaller β^d_𝙿 and a decrease in the proportion of nodes with larger β^d_𝙿. As a result, the overall infection prevalence shows a decline, though the decrease is not monotonic. Thus, while intuition may suggest that a larger proportion of high-degree nodes would lead to a larger infection prevalence, this outcome is not always true when the infection rates are heterogeneous. Rather, an increase in the proportion of nodes with larger infection rates leads to a larger prevalence of the epidemic. §.§ Comparison among Degree Distributions In this subsection, we illustrate the impacts of the effectiveness of protection α, the infection probability ^d, and the cost of protection on the infection level at the endemic equilibrium. We let the set of degrees := {1,2,…,19,20}, and consider three different degree distributions given by * Uniform distribution with m_d = 0.05 for all d ∈, * Binomial distribution with m_d given by the Binomial probability mass function with n=20 and p=0.525, and * Bimodal distribution with m_d = 0.25 for d ∈{1,2,19,20}. For each of the above degree distributions, the average degree d^𝚊𝚟𝚐 = 10.5. The remaining parameters are set according to Table <ref>. First we examine the impact of the effectiveness of protection, captured by the parameter α. The plots in the left panel of Figure <ref> show that as α increases, i.e., the protection becomes less effective, the expected fraction of infected nodes (y^𝚊𝚟𝚐) as well as Θ(𝐲) at the endemic equilibrium increase, before saturating for sufficiently large α. The plot on the top row also shows that for the entire range of α, y^𝚊𝚟𝚐 under the Binomial distribution is larger than under the Uniform distribution, followed by the Bimodal distribution. Thus, when the degree distribution of the network is heterogeneous, the expected fraction of infected nodes is smaller compared to a relatively homogeneous degree distribution. However, a similar observation does not hold for Θ(𝐲). The figure on the bottom row shows that for smaller α, Θ(𝐲) is larger under the Binomial distribution, while for larger α, Θ(𝐲) is larger under the Bimodal distribution. Next, we examine the impact of the parameter ^d, which captures the probability with which an infected protected individual transmits infection to others. The plots in the middle panel of Figure <ref> show that as ^d increases, both y^𝚊𝚟𝚐 as well as Θ(𝐲) at the endemic equilibrium increase.
Here also we observe that y^𝚊𝚟𝚐 is larger under the Binomial distribution, followed by Uniform distribution and Bimodal distribution. However, the value of Θ(𝐲) is approximately equal for all three degree distributions. Finally, the plots in the right panel of Figure <ref> show the impact of the cost of protection adoption . As increases, both y^𝚊𝚟𝚐 as well as Θ(𝐲) tend to increase. Furthermore, both y^𝚊𝚟𝚐 as well as Θ(𝐲) are larger under the Binomial distribution, followed by Uniform and Bimodal distributions, respectively. To summarize, the above numerical results yield the following insights. * When the proportion of nodes with a larger infection rate increases, y^𝚊𝚟𝚐 at the endemic equilibrium tends to increase. * Similarly, y^𝚊𝚟𝚐 increases when protection becomes expensive (larger ) and less effective in both preventing (larger α) as well as transmitting (larger ^d) infection. * When the degree distribution is nearly homogeneous (e.g., Binomial distribution), y^𝚊𝚟𝚐 tends to be larger compared to when the degree distribution is largely heterogeneous (e.g., Bimodal and Uniform distributions). § CONCLUSION We analyzed the problem of strategic adoption of partially effective protection in large-scale networks in the population game framework. We derived the coupled epidemic-behavioral dynamics and relied on time-scale separation to derive the epidemic dynamics under optimal protection adoption strategies of the agents which depends on their degree. We then rigorously established the existence and uniqueness of stationary equilibrium of the above dynamics. We numerically illustrated the convergence of the dynamics to the equilibrium as well as the impacts of heterogeneous node degrees, infection rates and cost of protection adoption on the epidemic prevalence at the equilibrium. We aim to leverage the insights derived from this work to design intervention schemes which incentivizes protection adoption among users and reduce the prevalence of epidemics in follow up works. In addition, analyzing the protection adoption behavior of non-myopic or forward-looking agents for networked SIS as well as other classes of epidemic models remain as promising directions for future research. § OMITTED PROOFS We first present an important result on the convergence and stability of the networked SIS epidemic under the N-intertwined mean-field approximation (NIMFA) followed by presenting the proofs omitted from the main text. §.§ NIMFA of the SIS Epidemic Model Consider a directed graph or network, denoted = (,) with being the set of nodes and being the set of directed edges. Let ||=n, and A ∈^n × n_+ be the adjacency matrix of the graph. In particular, a_ij = 0 if and only if (j,i) ∉. Let p_i(t) ∈ [0,1] denote the probability of node i being infected at time t. According to the NIMFA of the SIS epidemic <cit.>, the infection probability evolves as d p_i(t)/dt = -γ p_i(t) + (1-p_i(t)) ∑^n_j =1 a_ij p_j(t), where a_ij≥ 0 denotes the probability of node i becoming infected by node j, and γ > 0 denotes the rate with which an infected node recovers from the disease. The above dynamics can be written in vector form as ṗ(t) = (A-D) p(t) - P(t) A p(t), where D = 𝚍𝚒𝚊𝚐(γ,γ,…,γ) is the diagonal matrix of all recovery rates, and P(t) = 𝚍𝚒𝚊𝚐(p(t)). We now reproduce the following theorem from past works regarding the existence and stability of the equilibrium points of (<ref>). Suppose the graph is strongly connected. 
Then, * the disease-free equilibrium (DFE) with p^⋆_𝙳𝙵𝙴 = 0_n is globally asymptotically stable (GAS) if and only if the spectral radius ρ(D^-1A) ≤ 1, and * a unique endemic equilibrium (EE) with p^⋆_𝙴𝙴≫ 0_n exists if and only if ρ(D^-1A) > 1. If p(0) ≠ 0 and ρ(D^-1A) > 1, then the endemic equilibrium is GAS. §.§ Proof of Proposition <ref> For the proof, we exploit Theorem <ref> after establishing the equivalence between the dynamics (<ref>) and the N-Intertwined Mean-Field Approximation (NIMFA) of the SIS epidemic model on a directed network (<ref>). To this end, construct a directed graph with d^max nodes, i.e., each degree d ∈ is treated as a node of . We define the adjacency matrix  where the weight of the edge between two nodes d and d' is given by [Â]_d,d' := d/d^𝚊𝚟𝚐 (d'm_d'^d'), for d < d^⋆, α d/d^𝚊𝚟𝚐 (d'm_d'^d'), for d ≥ d^⋆. It is now easy to see that the dynamics (<ref>) is equivalent to the NIMFA approximation of the SIS epidemic (<ref>) on the network with adjacency matrix Â. In addition, all entries of  are nonzero (due to our assumption that m_d > 0 for all d ∈), and hence, is strongly connected. Furthermore, the matrix D^-1 has rank one since it is the outer product of two vectors given by D^-1 = 𝐯_1·𝐯_2^⊤, where 𝐯_1 = 1/d^𝚊𝚟𝚐γ[ 1; 2; ⋮; d^⋆ - 1; α d^⋆; ⋮; α d^max ], 𝐯_2 = [ m_1^1; ⋮; d'm_d'^d'; ⋮; d^maxm_d^max^d^max ]. As a result, the spectral radius ρ(D^-1Â) = 𝐯_1^⊤𝐯_2 = ∑^d^⋆-1_d=1d^2m_d^d/d^𝚊𝚟𝚐γ + ∑^d^max_d = d^⋆α d^2m_d^d/d^𝚊𝚟𝚐γ =: (d^⋆). The result now follows from Theorem <ref>. §.§ Proof of Theorem <ref> Part 1: (d^max+1) ≤ 1. It follows from Lemma <ref> that (d) < 1 for all d ∈{1,2,…,d^max}. Assume on the contrary that there exists a nonzero endemic equilibrium denoted 𝐲^⋆_𝙴𝙴 with Θ(𝐲^⋆_𝙴𝙴) ∈_d' for some d' ∈{d_min,2,…,d^max+1}. However, since (d') < 1, the disease-free equilibrium is the only equilibrium of the dynamics (<ref>) with d^⋆=d'. Since the dynamics (<ref>) coincides with (<ref>) over this interval, there does not exist an equilibrium with Θ(𝐲^⋆_𝙴𝙴) ∈_d' for either (<ref>) or (<ref>). Now, suppose Θ(𝐲^⋆_𝙴𝙴) = Θ^d'_th for some d'∈. Then the strategy profile of susceptible agents needs to satisfy z^d_ = 0, if d > d' or Θ^d_th < Θ^d'_th, z^d'_, if d=d' 1, if d < d' or Θ^d_th > Θ^d'_th, for some z^d'_∈ [0,1]. It follows from (<ref>) that Θ^d'_th satisfies 1 = ∑_d ∈[dm_d/d^𝚊𝚟𝚐^d (z^d_ + α (1-z^d_)) d /γ + (z^d_ + α (1-z^d_)) d Θ^d'_th] = ∑^d'-1_d=1[dm_d/d^𝚊𝚟𝚐d ^d/γ + d Θ^d'_th] + [d'm_d'/d^𝚊𝚟𝚐^d'(z^d'_ + α (1-z^d'_)) d' /γ + (z^d'_ + α (1-z^d'_)) d' Θ^d'_th] + ∑^d^max_d=d'+1[dm_d/d^𝚊𝚟𝚐α d ^d/γ + α d Θ^d'_th] < ∑^d^max_d=1d^2m_d^d/d^𝚊𝚟𝚐 (γ + α d Θ^d'_th) < ∑^d^max_d=1d^2m_d^d/d^𝚊𝚟𝚐γ = (d^max+1). The inequality follows because the second term in the R.H.S. of the (<ref>) is monotonically increasing in z^d'_, and the third term is monotonically increasing in α. However, this is a contradiction since (d^max+1) ≤ 1 in this regime. Thus, there does not exist an endemic equilibrium of (<ref>) with Θ(𝐲^⋆_𝙴𝙴) > 0. Part 2: (d^max+1) > 1. Following the definition of Θ^d_th and Lemma <ref>, we have 0 = Θ^d^max+1_th < Θ^d^max_th < Θ^d^max-1_th < … < Θ^d_min_th < 1 ≤…Θ^1_th, Θ(𝐲_𝙴𝙴(1)) ≤Θ(𝐲_𝙴𝙴(2)) ≤…≤Θ(𝐲_𝙴𝙴(d^max)) ≤Θ(𝐲_𝙴𝙴(d^max+1)). From the definition of d^𝚎𝚚, we have Θ(𝐲_𝙴𝙴(d)) > Θ^d_th, for d ≥ d^𝚎𝚚 and Θ(𝐲_𝙴𝙴(d)) ≤Θ^d_th for d < d^𝚎𝚚. We tackle the two cases separately. Case (a): Θ(𝐲_𝙴𝙴(d^𝚎𝚚)) ∈ (Θ^d^𝚎𝚚_th,min(1,Θ^d^𝚎𝚚-1_th)). Note that the dynamics (<ref>) coincides with (<ref>) over the interval _d^𝚎𝚚. 
Since Θ(𝐲_𝙴𝙴(d^𝚎𝚚))>0, it is necessarily the case that (d^𝚎𝚚) > 1. Consequently, the dynamics (<ref>) with d^⋆ = d^𝚎𝚚 has a unique nonzero endemic equilibrium at which Θ(𝐲_𝙴𝙴(d^𝚎𝚚)) ∈_d^𝚎𝚚. Therefore, 𝐲_𝙴𝙴(d^𝚎𝚚) is a nonzero endemic equilibrium of (<ref>). It remains to show that there does not exist any other nonzero endemic equilibrium of (<ref>). Suppose there exists another nonzero endemic equilibrium 𝐲_𝙴𝙴,2 with Θ(𝐲_𝙴𝙴,2) ∈_d' for some d' ≠ d^𝚎𝚚. We examine the following two possibilities. * Suppose d' > d^𝚎𝚚. The dynamics (<ref>) with d^⋆ = d' > d^𝚎𝚚 has a unique nonzero endemic equilibrium, and following Lemma <ref>, we have Θ(𝐲_𝙴𝙴(d')) > Θ(𝐲_𝙴𝙴(d^𝚎𝚚)) > Θ^d^𝚎𝚚_th. However, Θ(𝐲_𝙴𝙴(d')) ∉_d' because _d' = (Θ^d'_th,Θ^d'-1_th), and Θ^d'-1_th≤Θ^d^𝚎𝚚_th. As a result, 𝐲_𝙴𝙴(d') is not an endemic equilibrium for (<ref>). * Now, let d' < d^𝚎𝚚. The dynamics (<ref>) with d^⋆ = d' < d^𝚎𝚚 has a unique nonzero endemic equilibrium, and following the definition of d^𝚎𝚚, we have Θ(𝐲_𝙴𝙴(d')) < Θ^d'_th. As a result, Θ(𝐲_𝙴𝙴(d')) ∉_d' = (Θ^d'_th,Θ^d'-1_th). Thus, 𝐲_𝙴𝙴(d') is not an endemic equilibrium for (<ref>). Now, suppose there exists another nonzero endemic equilibrium 𝐲_𝙴𝙴,2 such that Θ(𝐲_𝙴𝙴,2) = Θ^d'_th for some d'. Then, Θ^d'_th satisfies (<ref>). Let d' ≥ d^𝚎𝚚. Since Θ(𝐲_𝙴𝙴(d^𝚎𝚚)) ∈ (Θ^d^𝚎𝚚_th,min(1,Θ^d^𝚎𝚚-1_th)), we have Θ^d'_th≤Θ^d^𝚎𝚚_th < Θ(𝐲_𝙴𝙴(d^𝚎𝚚)). Setting d^⋆ = d^𝚎𝚚 in (<ref>), we obtain 1 = ∑^d^𝚎𝚚-1_d =1[dm_d/d^𝚊𝚟𝚐d ^d/γ + d Θ(𝐲_𝙴𝙴(d^𝚎𝚚))] + ∑^d^max_d =d^𝚎𝚚[dm_d/d^𝚊𝚟𝚐α d ^d/γ + α d Θ(𝐲_𝙴𝙴(d^𝚎𝚚))] ≤∑^d^𝚎𝚚-1_d =1[dm_d/d^𝚊𝚟𝚐d ^d/γ + d Θ^d'_th] + ∑^d^max_d =d^𝚎𝚚[dm_d/d^𝚊𝚟𝚐α d ^d/γ + α d Θ^d'_th] = ∑^d^𝚎𝚚-1_d =1[dm_d/d^𝚊𝚟𝚐d ^d/γ + d Θ^d'_th] + ∑^d'-1_d =d^𝚎𝚚[dm_d/d^𝚊𝚟𝚐α d ^d/γ + α d Θ^d'_th] + d'm_d'/d^𝚊𝚟𝚐α d' ^d'/γ + α d' Θ^d'_th + ∑^d^max_d =d'+1[dm_d/d^𝚊𝚟𝚐α d ^d/γ + α d Θ^d'_th] < ∑^d^'-1_d =1[dm_d/d^𝚊𝚟𝚐d ^d/γ + d Θ^d'_th] + [d'm_d'/d^𝚊𝚟𝚐^d'(z^d'_ + α (1-z^d'_)) d' /γ + (z^d'_ + α (1-z^d'_)) d' Θ^d'_th] + ∑^d^max_d =d'+1[dm_d/d^𝚊𝚟𝚐α d ^d/γ + α d Θ^d'_th] for any z^d'_∈ [0,1] due to the monotonicity of the second term in z^d'_ and α. However, this is in contradiction to (<ref>) which requires the R.H.S. to be equal to 1. Now suppose d' < d^𝚎𝚚. Since Θ(𝐲_𝙴𝙴(d^𝚎𝚚)) ∈ (Θ^d^𝚎𝚚_th,min(1,Θ^d^𝚎𝚚-1_th)), we have Θ(𝐲_𝙴𝙴(d^𝚎𝚚)) < Θ^d'_th. Proceeding as before, we obtain 1 = ∑^d^𝚎𝚚-1_d =1[dm_d/d^𝚊𝚟𝚐d ^d/γ + d Θ(𝐲_𝙴𝙴(d^𝚎𝚚))] + ∑^d^max_d =d^𝚎𝚚[dm_d/d^𝚊𝚟𝚐α d ^d/γ + α d Θ(𝐲_𝙴𝙴(d^𝚎𝚚))] > ∑^d^𝚎𝚚-1_d =1[dm_d/d^𝚊𝚟𝚐d ^d/γ + d Θ^d'_th] + ∑^d^max_d =d^𝚎𝚚[dm_d/d^𝚊𝚟𝚐α d ^d/γ + α d Θ^d'_th] = ∑^d^'-1_d =1[dm_d/d^𝚊𝚟𝚐d ^d/γ + d Θ^d'_th] + d'm_d'/d^𝚊𝚟𝚐d' ^d'/γ + d' Θ^d'_th + ∑^d^𝚎𝚚-1_d =d'+1[dm_d/d^𝚊𝚟𝚐d ^d/γ + d Θ^d'_th] + ∑^d^max_d =d^𝚎𝚚[dm_d/d^𝚊𝚟𝚐α d ^d/γ + α d Θ^d'_th] ≥∑^d^'-1_d =1[dm_d/d^𝚊𝚟𝚐d ^d/γ + d Θ^d'_th] + d'm_d'/d^𝚊𝚟𝚐d' ^d'/γ + d' Θ^d'_th + ∑^d^max_d =d^'+1[dm_d/d^𝚊𝚟𝚐α d ^d/γ + α d Θ^d'_th] ≥∑^d^'-1_d =1[dm_d/d^𝚊𝚟𝚐d ^d/γ + d Θ^d'_th] + [d'm_d'/d^𝚊𝚟𝚐^d'(z^d'_ + α (1-z^d'_)) d' /γ + (z^d'_ + α (1-z^d'_)) d' Θ^d'_th] + ∑^d^max_d =d'+1[dm_d/d^𝚊𝚟𝚐α d ^d/γ + α d Θ^d'_th] for any z^d'_∈ [0,1]. As before, this is in contradiction to (<ref>). Therefore, 𝐲_𝙴𝙴(d^𝚎𝚚) is the unique endemic equilibrium of (<ref>) in this regime. Case (b): Θ(𝐲_𝙴𝙴(d^𝚎𝚚)) ≥Θ^d^𝚎𝚚-1_th. It follows from the definition of d^𝚎𝚚 that Θ(𝐲_𝙴𝙴(d^𝚎𝚚-1)) ≤Θ^d^𝚎𝚚-1_th. Note further than both Θ(𝐲_𝙴𝙴(d^𝚎𝚚)) and Θ(𝐲_𝙴𝙴(d^𝚎𝚚-1)) can not simultaneously be equal to Θ^d^𝚎𝚚-1_th due to the strict monotonicity established in Lemma <ref>. 
By setting d^⋆ = d^𝚎𝚚 and d^⋆ = d^𝚎𝚚-1 in (<ref>), we respectively obtain 1 = ∑^d^𝚎𝚚-2_d =1[dm_d/d^𝚊𝚟𝚐d ^d/γ + d Θ(𝐲_𝙴𝙴(d^𝚎𝚚))] + ∑^d^max_d =d^𝚎𝚚[dm_d/d^𝚊𝚟𝚐α d ^d/γ + α d Θ(𝐲_𝙴𝙴(d^𝚎𝚚))] + (d^𝚎𝚚-1)m_d^𝚎𝚚-1/d^𝚊𝚟𝚐(d^𝚎𝚚-1) ^d^𝚎𝚚-1/γ + (d^𝚎𝚚-1) Θ(𝐲_𝙴𝙴(d^𝚎𝚚)), ≤∑^d^𝚎𝚚-2_d =1[dm_d/d^𝚊𝚟𝚐d ^d/γ + d Θ^d^𝚎𝚚-1_th] + ∑^d^max_d =d^𝚎𝚚[dm_d/d^𝚊𝚟𝚐α d ^d/γ + α d Θ^d^𝚎𝚚-1_th] + (d^𝚎𝚚-1)m_d^𝚎𝚚-1/d^𝚊𝚟𝚐(d^𝚎𝚚-1) ^d^𝚎𝚚-1/γ + (d^𝚎𝚚-1) Θ^d^𝚎𝚚-1_th, 1 = ∑^d^𝚎𝚚-2_d =1[dm_d/d^𝚊𝚟𝚐d ^d/γ + d Θ(𝐲_𝙴𝙴(d^𝚎𝚚-1))] + ∑^d^max_d =d^𝚎𝚚[dm_d/d^𝚊𝚟𝚐α d ^d/γ + α d Θ(𝐲_𝙴𝙴(d^𝚎𝚚-1))] + (d^𝚎𝚚-1)m_d^𝚎𝚚-1/d^𝚊𝚟𝚐α (d^𝚎𝚚-1) ^d^𝚎𝚚-1/γ + α (d^𝚎𝚚-1) Θ(𝐲_𝙴𝙴(d^𝚎𝚚-1)) ≥∑^d^𝚎𝚚-2_d =1[dm_d/d^𝚊𝚟𝚐d ^d/γ + d Θ^d^𝚎𝚚-1_th] + ∑^d^max_d =d^𝚎𝚚[dm_d/d^𝚊𝚟𝚐α d ^d/γ + α d Θ^d^𝚎𝚚-1_th] + (d^𝚎𝚚-1)m_d^𝚎𝚚-1/d^𝚊𝚟𝚐α (d^𝚎𝚚-1) ^d^𝚎𝚚-1/γ + α (d^𝚎𝚚-1) Θ^d^𝚎𝚚-1_th. We now define := 1 - ∑^d^𝚎𝚚-2_d =1[dm_d/d^𝚊𝚟𝚐d ^d/γ + d Θ^d^𝚎𝚚-1_th] - ∑^d^max_d =d^𝚎𝚚[dm_d/d^𝚊𝚟𝚐α d ^d/γ + α d Θ^d^𝚎𝚚-1_th] (d^𝚎𝚚-1)m_d^𝚎𝚚-1/d^𝚊𝚟𝚐α (d^𝚎𝚚-1) ^d^𝚎𝚚-1/γ + α (d^𝚎𝚚-1) Θ^d^𝚎𝚚-1_th≤ ≤(d^𝚎𝚚-1)m_d^𝚎𝚚-1/d^𝚊𝚟𝚐(d^𝚎𝚚-1) ^d^𝚎𝚚-1/γ + (d^𝚎𝚚-1) Θ^d^𝚎𝚚-1_th. Consequently, there exists a unique z̅^d^𝚎𝚚-1_∈ [0,1] at which = (d^𝚎𝚚-1)m_d^𝚎𝚚-1/d^𝚊𝚟𝚐× (d^𝚎𝚚-1) (z̅^d^𝚎𝚚-1_+α(1-z̅^d^𝚎𝚚-1_)) ^d^𝚎𝚚-1/γ + (z̅^d^𝚎𝚚-1_+α(1-z̅^d^𝚎𝚚-1_)) (d^𝚎𝚚-1) Θ^d^𝚎𝚚-1_th. Now, consider the strategy profile of susceptible agents given by z^d_ = 0, if d > d^𝚎𝚚-1, z̅^d^𝚎𝚚-1_, if d=d^𝚎𝚚-1 1, if d < d^𝚎𝚚-1. Let z^d_ = 0 for all d ∈. It is easy to verify that Θ^d^𝚎𝚚-1_th is a nonzero solution of (<ref>) under the above strategy profile. Consequently, the set of infected proportions {y^d}_d ∈ satisfying (<ref>) with Θ^d^𝚎𝚚-1_th constitutes an equilibrium of the dynamics (<ref>). The uniqueness of the endemic equilibrium can be established in a manner analogous to the similar uniqueness result established for Case (a) above, and is omitted in the interest of space. ieeetr
http://arxiv.org/abs/2407.02722v1
20240703002559
Pulse Design of Baseband Flux Control for Adiabatic Controlled-Phase Gates in Superconducting Circuits
[ "Qi Ding", "Alan V. Oppenheim", "Petros T. Boufounos", "Simon Gustavsson", "Jeffrey A. Grover", "Thomas A. Baran", "William D. Oliver" ]
quant-ph
[ "quant-ph" ]
APS/123-QED Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139, USA Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA MIT Lincoln Laboratory, Lexington, MA 02421, USA Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA Mitsubishi Electric Research Laboratories, Cambridge, MA 02139, USA Atlantic Quantum, Cambridge, MA 02139 qding@mit.edu william.oliver@mit.edu § ABSTRACT Despite progress towards achieving low error rates with superconducting qubits, error-prone two-qubit gates remain a bottleneck for realizing large-scale quantum computers. Therefore, a systematic framework to design high-fidelity gates becomes imperative. One type of two-qubit gate in superconducting qubits is the controlled-phase (CPHASE) gate, which utilizes a conditional interaction between higher energy levels of the qubits controlled by a baseband flux pulse on one of the qubits or a tunable coupler. In this work, we study an adiabatic implementation of CPHASE gates and formulate the design of the control trajectory for the gate as a pulse-design problem. We show in simulation that the Chebyshev-based trajectory can, in certain cases, enable gates with leakage error lower by an average of roughly 6% when compared to the widely used Slepian-based trajectory. Pulse Design of Baseband Flux Control for Adiabatic Controlled-Phase Gates in Superconducting Circuits William D. Oliver July 8, 2024 ====================================================================================================== § INTRODUCTION High-fidelity entangling gates are one of the fundamental requirements in the pursuit of large-scale fault-tolerant quantum computing <cit.>. Over the past decades, superconducting qubits have emerged as a leading platform for quantum computing, with several advances in terms of gate fidelity, extensibility, and use-case demonstrations <cit.>. These improvements have enabled superconducting quantum computing platforms to begin tackling significant challenges, including the implementation of prototype quantum error correction (QEC) protocols <cit.>. Numerous variants of superconducting qubits and their architecture have been proposed and experimentally demonstrated <cit.>, along with a variety of different schemes for realizing entangling gates <cit.>. Despite these significant advances, two-qubit gate performance continues to limit the development of future fault-tolerant quantum computing systems. Two-qubit entangling gates employed in superconducting qubits can be broadly classified into two categories. The first category encompasses capacitively coupled, fixed-frequency qubits, in some cases mediated by a resonator, where the implementation of two-qubit gates relies on all-microwave control <cit.>. Fixed-frequency qubits typically have longer coherence times and no baseband control lines. However, frequency crowding and potential collisions become increasingly challenging as the system size grows. The second category involves frequency-tunable qubits, where the frequency of the qubits can be adjusted using baseband magnetic flux <cit.>. These qubits can be coupled directly or through frequency-tunable coupling elements. In this architecture, two-qubit gates are typically achieved by applying local baseband magnetic-flux pulses to tune the frequencies of the qubits and/or couplers. 
It is worth noting that this approach results in increased hardware complexity and susceptibility to flux-related noise, thereby exacerbating experimental calibration challenges. Nevertheless, it mitigates the frequency collision issues, and gates relying on baseband flux control generally exhibit faster operation speed compared to all-microwave-activated gates. Additionally, there exist alternative architectural designs and gate schemes that seek to amalgamate features from both of these categories <cit.>. In this work, we focus on baseband flux control gates with tunable qubits. More specifically, we study controlled-phase (CPHASE) gates and in particular the controlled-Z (CZ) gate, which corresponds to a conditional phase accumulation of π. The fidelity of CPHASE gates depends heavily on the specific pulse shape of the baseband flux, as deviations can cause phase errors and leakage to undesired states. In this work, we first formulate the problem of baseband flux control design as a pulse design problem. In this way, we are able to design the gate by leveraging tools from the signal processing community. Second, we propose a Chebyshev-based trajectory as an alternative to the widely used Slepian-based trajectory. We analytically study the Chebyshev-based trajectory using a two-level system abstraction. Finally, we compare the performance of the Chebyshev-based and Slepian-based trajectories by simulating a CZ gate applied to two capacitively coupled transmon qubits. Simulation results show that the Chebyshev-based trajectory can be designed to induce lower leakage error while maintaining smaller pulse duration, in certain cases. In addition, we show that the proposed Chebyshev-based trajectory can be readily implemented in state-of-the-art hardware by considering practical hardware constraints in simulation. The manuscript is organized as follows. In Section <ref>, we formulate the pulse-design problem using a two-level system abstraction and state explicitly the criterion to be investigated. In Section <ref>, we introduce the definition and examples of finite-length, discrete-time pulses that will be exploited. Then, in Section <ref>, we propose the Chebyshev-based trajectory as an alternative solution compared to the Slepian counterpart. In Section <ref>, we present time-domain simulation results and demonstrate the advantage of the Chebyshev-based trajectory when implementing a CZ gate in two directly coupled transmon qubits. We also study the effect of realistic hardware limitations on these trajectories. We conclude and discuss outlook in Section <ref>. § PROBLEM FORMULATION §.§ The CPHASE gate in tunable transmon qubits The general approach to implementing a CPHASE gate in tunable transmon qubits using baseband flux control is summarized in Appendix <ref>. Two important factors in this design are the leakage error and gate duration. Leakage error refers to unwanted qubit population of the total qubit population outside of the computational subspace after the gate operation. In this case, the dominant leakage is from |11⟩ to |20⟩, since they are intentionally brought into resonance. On resonance, these states will hybridize and open an avoided crossing. Therefore, a trajectory towards the avoided crossing must be sufficiently slow in order for the leakage error to be sufficiently small in the adiabatic implementation of the CPHASE gate. On the other hand, for coherence-limited qubits like superconducting qubits, faster trajectories directly translate to higher fidelity. 
In other words, the process should be “fast and adiabatic.” Furthermore, as these two factors are intrinsically contradictory, a design of the trajectory should be made to achieve best performance. In this work, as we will explain in more detail in Section <ref>, we refer to this problem as the pulse design or control trajectory design problem. Considerable efforts have been made to the development and experimental validation of a high-fidelity baseband flux-controlled CZ gate. The Slepian-based trajectory is the current standard to implement an adiabatic CZ gate <cit.>. The control trajectory is based on the Slepian window from the use of optimal window functions. This approach is experimentally demonstrated to reach a CZ gate fidelity up to 99.4% <cit.>. Rol et al. <cit.> appended two Slepian-based trajectories together to form a bipolar flux pulse named the Net Zero (NZ) pulse, which is more robust to long time distortions in the control line compared to unipolar ones. The latter is experimentally demonstrated to reach a CZ gate fidelity up to 99.7% <cit.>. Building upon the NZ pulse, Negîrneac et al. <cit.> develop a variation named the sudden net-zero (SNZ) CZ gates, which simplifies the pulse calibration. The Slepian-based trajectory is also utilized to implement a non-adiabatic CZ gate with fidelity 99.76±0.07% in a more sophisticated system consisting of two transmon qubits coupled with a tunable coupler <cit.>. A flat-top Gaussian pulse has been employed to implement CPHASE gates in a similar qubit-coupler-qubit architecture <cit.>. Chu et al. <cit.> also study the CZ gate in a system with a tunable coupler and propose a modified control trajectory by adding prefactor weights to the Slepian-based trajectory. Another general approach <cit.> is to perform repeated experiments using closed-loop feedback to evaluate the current gate performance according to some metrics, and then numerically optimize the pulse, starting from a heuristically decent Slepian-based trajectory <cit.>. §.§ Two-level system abstraction The primary error channel is leakage from |11⟩ to |20⟩ during the gate. In this section, we therefore focus on a simpler two-level abstraction of the problem that couples the diabatic states |11⟩ and |20⟩ to form eigenstates |ψ_-⟩ (“ground state”) and |ψ_+⟩ (“excited state”). Consider a two-level system whose Hamiltonian is H = ε(t)/2σ_z + Δ/2σ_x = 1/2[ ε(t) Δ; Δ -ε(t) ], where Δ is a constant denoting the coupling strength that hybridizes the diabatic states |11⟩ and |20⟩, and ε(t) is a function of time, which dictates the energy difference between the diabatic states. Of particular interest are the two eigenstates |ψ_-⟩, |ψ_+⟩ and the corresponding eigenenergies E_-, E_+ of this system. Solving the eigen-problem for H yields |ψ_-⟩ = [ -sin(θ(t)/2); cos(θ(t)/2) ] and |ψ_+⟩ = [ cos(θ(t)/2); sin(θ(t)/2) ], E_- = -1/2√(ε(t)^2+Δ^2) and E_+ = 1/2√(ε(t)^2+Δ^2) , where θ(t) is defined as θ(t) = arctanΔ/ε(t) . In Appendix <ref>, we review the geometric interpretation of θ(t) on the Bloch sphere. In Section <ref>, θ(t) is considered an intermediate control variable whose trajectory is to be designed. Fig. <ref> depicts the eigenenergies of the two-level system with Hamiltonian H as a function of ε∈ [-∞,+∞]. In this abstracted two-level system, the CPHASE gate problem is transformed into preparing the system in the initial state |ψ_-⟩ and designing ε(t) to vary the instantaneous energy as depicted in the lower plot of Fig. <ref>. 
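The closed-form eigenstructure above is easy to sanity-check numerically. The short script below compares it against direct diagonalization of the 2×2 Hamiltonian for one illustrative choice of ε and Δ (the numerical values are ours); the quadrant-aware arctangent is used so that the same expressions remain valid when ε is negative.

```python
import numpy as np

def eigensystem(eps, delta):
    """Closed-form eigenpairs of H = (eps/2) * sigma_z + (delta/2) * sigma_x."""
    theta = np.arctan2(delta, eps)                 # theta = arctan(delta / eps), quadrant-aware
    e = 0.5 * np.sqrt(eps ** 2 + delta ** 2)
    psi_minus = np.array([-np.sin(theta / 2), np.cos(theta / 2)])
    psi_plus = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return (-e, psi_minus), (e, psi_plus)

eps, delta = 1.3, 0.4                              # illustrative values only
H = 0.5 * np.array([[eps, delta], [delta, -eps]])
vals, vecs = np.linalg.eigh(H)                     # eigenvalues in ascending order
(e_m, v_m), (e_p, v_p) = eigensystem(eps, delta)
assert np.isclose(vals[0], e_m) and np.isclose(vals[1], e_p)
assert np.isclose(abs(vecs[:, 0] @ v_m), 1.0)      # same state up to a global phase
assert np.isclose(abs(vecs[:, 1] @ v_p), 1.0)
print("closed-form eigensystem matches direct diagonalization")
```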
In particular, ε(t) varies from ϵ_ini to ϵ_mid and returns to ϵ_ini where ϵ_mid≈ 0. Correspondingly, θ(t) varies from θ_ini=arctan(Δ/ϵ_ini) to θ_mid=arctan(Δ/ϵ_mid) and returns to θ_ini. §.§ Formula for leakage error The formula for the leakage error P_e from |11⟩ to |20⟩ after the pulse is implemented is (see Appendix <ref> for details) P_e = |∫dθ/dte^-i∫^t ω(t')dt'dt |^2/4 , where ω(t') is the time-dependent frequency difference between eigenstates |ψ_-⟩ and |ψ_+⟩ due to the trajectory. A nonlinear time-frame transformation is introduced (see Appendix <ref>), i.e., ω_τdτ = ω(t)dt, which effectively accounts for the time-dependence of ω(t'), making it a time-independent frequency ω_τ. We express the leakage error P_e in this new time frame τ P_e = |∫dθ̃/dτe^-iω_ττdτ|^2/4 , where ω_τ is a constant, τ=τ(t) is a nonlinear function of t, and θ̃(τ) is the transformation of θ(t) in the new time frame τ. In this time frame, we have obtained a simpler form of the leakage error P_e, which can be interpreted as a function of the Fourier transform of dθ̃/dτ evaluated at ω_τ. After designing dθ̃/dτ, we transform back to the original time frame t and obtain the corresponding dθ /dt using the technique described in Appendix <ref>. We discuss the validity of Eq. <ref> along with the nonlinear time-frame transformation in Appendix <ref>. ω_τ can take any constant value, however, as explained in Appendix <ref>, we set ω_τ = Δ for convenience. §.§ Problem statement We propose a problem statement with explicit requirement: we desire the shortest pulse given a specified allowable leakage error. Then, taking advantage of the time and frequency scaling property of the Fourier transform, we transform the problem into a requirement on frequency. The novelty in our problem formulation enables a straightforward comparison between different pulses. Finally, we comment on the continuous time to discrete time transformation. §.§.§ Nomenclature In this section, we more formally define the pulse design or control-trajectory design that is the focus of this work. We consider the case where two qubits are capacitively coupled, one of which is flux-tunable (QB1). Our ultimate goal is to design a baseband flux pulse that changes the external magnetic flux Φ_ext(t) that threads the SQUID loop of QB1 to change its qubit frequency ω_1(t) so that a CPHASE gate with desired characteristics is obtained. In the abstracted two-level system discussed in Section <ref>, the goal is the design of ε(t). Then, an intermediate control variable θ(t) is defined such that it has a one-to-one correspondence to ε(t). Therefore, the goal of designing ε(t) is equivalent to designing θ(t). Finally, in Eq. <ref>, the formula for the leakage error is written in terms of the Fourier transform of dθ̃/dτ in the nonlinear time frame τ. Designing a flux pulse further transforms into finding a trajectory dθ̃/dτ, which we refer to as “control trajectory design.” In the following sections, we denote g̃(τ) := dθ̃/dτ for brevity. The discrete form of g̃(τ) is denoted as g̃[n]. The design pipeline is presented below. We first design g̃(τ) and obtain θ̃(τ). Second, we compute θ(t) through the inverse of the nonlinear time-frame transformation. Third, we obtain ε(t) according to Eq. <ref> with proper values of ϵ_ini (θ_ini) and ϵ_mid (θ_mid). Then, we convert ε(t) to ω_1(t) and finally to Φ_ext(t) according to Eq. <ref>: g̃(τ) →θ̃(τ) →θ(t) →ε(t) →ω_1(t) →Φ_ext(t) . §.§.§ Constraint on duration Denoting g̃(τ) = dθ̃/dτ, we can rewrite Eq. 
<ref> as P_e = |∫g̃(τ)e^-iΔτdτ|^2/4 = |G(iΔ) |^2/4 . where G(iΔ) is the Fourier transform of g̃(t) evaluated at ω_τ = Δ. With the goal of implementing a high-fidelity CPHASE gate, there are three key quantities: phase accumulation, leakage error, and gate duration. Phase accumulation is directly related to θ(t) and thus θ̃(τ), as discussed in Appendix <ref>. Given a particular shape of θ̃(τ), we can always obtain a desired phase accumulation by fine-tuning the amplitude. For now we assume a normalized amplitude as will be explained later in this section for the purpose of analysis, and we will generalize in simulation. We thus focus on the two remaining interrelated factors: leakage error and gate duration. We are interested in designing a g̃(τ) with as short a duration as possible, given some acceptable leakage error threshold. Here the parameters to be designed are the pulse shape and duration of g̃(τ). In addition, in cases where g̃(τ) has the same shape but a longer duration, we desire that the leakage error remain below the threshold. This makes intuitive sense, because a longer g̃(τ) corresponds to a slower evolution and therefore should induce no more (and often less) leakage error. Let τ_d be the duration of g̃(τ). We consider time-symmetric trajectories θ̃(τ)=θ̃(τ_d-τ) such that θ̃(τ) starts from some initial value θ̃_ini, evolves to some intermediate value θ̃_mid, and then returns to the initial value θ̃_ini. As the derivative of θ̃(τ), g̃(τ) = dθ̃/dτ is anti-symmetric in time τ, i.e., g̃(τ) = -g̃(τ_d-τ). Since θ̃(τ) is symmetric, we have θ̃(τ_d/2)=θ̃_mid, and ∫_0^τ_d/2g̃(τ)dτ = θ̃_mid-θ̃_ini =-∫_τ_d/2^τ_dg̃(τ)dτ. We further impose a normalization constraint on the control trajectory so that ∫_0^τ_d/2g̃(τ)dτ = -∫_τ_d/2^τ_dg̃(τ)dτ =1. Now, g̃(τ) is time-limited to the interval [0,τ_d], i.e., g̃(τ)=0 when τ<0 or τ>τ_d. We consider the leakage error in Eq. <ref>, paying specific attention to |G_τ_d(iΔ)|, which is the magnitude of the Fourier transform of g̃(τ) of duration τ_d evaluated at Δ. The problem can be stated as follows: Statement 1: Given an error threshold P_e ≤γ^2/4, i.e., |G(iΔ)| ≤γ, find the g̃(τ) of duration τ_d^* with τ_d^* = min(τ_dc), where τ_dc is defined such that for any τ_d ≥τ_dc, |G_τ_d(iΔ)| ≤γ is satisfied. §.§.§ Time-frequency transformation: constraint on frequency We transform the problem statement into one that is more readily addressable, utilizing the time and frequency scaling property of the Fourier transform. We first introduce a set 𝒜 with uncountably many elements corresponding to an infinite number of control trajectory shapes and use g̃^a(τ) with a ∈𝒜 to denote a control trajectory confined in time to the interval [0,1], i.e., g̃^a(τ)=0, when τ<0 or τ>1. We further impose an additional symmetry requirement on the pulse, such that we have ∫_0^1/2g̃^a(τ) dτ = - ∫_1/2^1 g̃^a(τ) dτ = 1. We refer to g̃^a(τ) as a normalized pulse shape labelled by a. For any g̃(τ) of duration τ_d, there must exist some a ∈𝒜 such that g̃(τ) = g̃^a(τ/τ_d)/τ_d. With the time and frequency scaling property of the Fourier transform, we have G_τ_d(iΔ) = G^a(iΔτ_d), where G^a(iΔτ_d) is the Fourier transform of g̃^a(τ) evaluated at Δτ_d. We denote ω = Δτ_d. Note that in the expression G^a(iΔτ_d), Δ and τ_d are nominally on an equivalent footing. Therefore, it would be equivalent to construct the problem with τ_d given and Δ varied and to be minimized, instead of fixing Δ and minimizing over τ_d. For convenience, we set τ_d=1 without loss of generality. 
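As a quick numerical check of this scaling argument, the sketch below stretches a normalized anti-symmetric shape to duration τ_d and confirms that |G_τ_d(iΔ)| = |G^a(iΔτ_d)|. The shape g^a(τ) = π sin(2πτ) is chosen only because it satisfies the normalization constraint; it is not one of the pulses studied later, and the values of Δ and τ_d are assumptions.

import numpy as np

# Normalized anti-symmetric shape on [0, 1]; its first-half integral equals +1.
tau = np.linspace(0.0, 1.0, 4001)
g_a = np.pi * np.sin(2 * np.pi * tau)

def fourier(g, t, w):
    """Continuous-time Fourier transform at frequency w (trapezoidal rule)."""
    f = g * np.exp(-1j * w * t)
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t))

Delta = 2 * np.pi * 0.05     # coupling (rad/ns), illustrative
tau_d = 40.0                 # pulse duration (ns), illustrative

# Stretched pulse of duration tau_d: g(t) = g^a(t / tau_d) / tau_d.
t = tau * tau_d
g = g_a / tau_d

lhs = abs(fourier(g, t, Delta))               # |G_{tau_d}(i Delta)|
rhs = abs(fourier(g_a, tau, Delta * tau_d))   # |G^a(i Delta tau_d)|
print(lhs, rhs)                               # the two agree, illustrating the scaling property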
In this way, we reformulate Statement 1 into: Statement 2: Given an error threshold P_e ≤γ^2/4, i.e., |G^a(iω)| ≤γ, find a trajectory shape g̃^a(τ) such that ω^* is minimized, where ω^* is defined as the minimum frequency such that for any ω≥ω^*, |G^a(iω)| ≤γ is satisfied. §.§.§ Continuous time to discrete time transformation The problem formulation so far has been stated in continuous time. However, the pulses are represented in discrete time when we perform simulation. Also, in experiments, a discrete-time pulse needs to be specified for the digital controller of the pulse-generation hardware (e.g., arbitrary waveform generator), followed by a certain interpolation scheme in order to output a continuous-time pulse. Therefore, we consider the design in discrete time. Let F_s= 1/T_s be the sampling frequency and T_s be the sampling period. Then we have g̃[n] = g̃(nT_s) for n=0,1,…,N-1 where N-1 =⌊τ_d/T_s ⌋ with ⌊ x ⌋ denoting the greatest integer less than or equal to x. We refer to N as the length of g̃[n]. In this work, g̃(τ) is time-limited by definition. Fortunately, as we show in the Section <ref>, the frequency spectrum of g̃(τ) of interest tends to zero relatively quickly as frequency increases. Therefore, with high enough sampling frequency, the problem of aliasing can be maintained at a minimal level. We recast the problem formulation in discrete time in a complete form as follows: Statement 3: Determine an anti-symmetric control trajectory g̃[n] of length N, where g̃[n] is normalized: * g̃[n]=-g̃[N-n], specifically, g̃[(N-1)/2]=0 for odd N, * g̃[n]=0 when n<0 or n>N-1, * ∑_0^N/2-1g̃[n]=1=-∑_N/2^N-1g̃[n] for even N, or ∑_0^(N-1)/2g̃[n]=1=-∑_(N-1)/2^N-1g̃[n] for odd N, such that ω^* is minimized, where ω^* is defined as ω^* = min(ω_c) such that for any ω≥ω_c, |G(e^iω)| ≤γ, where G(e^iω) is the discrete time Fourier transform of g̃[n] and γ is given. § FINITE-LENGTH, DISCRETE-TIME PULSES In order to establish some background information on the pulse design problem, we introduce the definition and notation of finite-length, discrete-time pulses. We first review the Slepian pulses. We further introduce another set of pulses designed using the weighted Chebyshev approximation (WCA), which are referred to as the Chebyshev pulses II. §.§ Definition and notation In this work, we focus on the design of finite-length, discrete-time pulses, which we will refer to as pulses for brevity going forward. The pulses take on certain values over some chosen finite-length, discrete-time interval and are zero-valued outside the interval, defined mathematically as w[n]=ŵ[n], 0≤ n≤ N-1 0, otherwise , where ŵ[n] denotes the values over the interval [0,N-1], and N is a finite positive integer. The discrete-time Fourier transform of the pulse w[n] is W(e^iω)= ∑_n=0^N-1w[n]e^-iω n . §.§ Examples of common finite-length, discrete-time pulses In this section, we briefly summarize two examples of pulses—the Slepian pulses and the Chebyshev pulses that we will compare. Appendix <ref> contains more details about the mathematical structure of these pulses. The Slepian pulses are a set of orthogonal functions that are optimized to have maximum energy concentration in the frequency or time domains. As we discussed in Section <ref>, Slepian-based pulses are commonly used in superconducting circuits to perform high-fidelity baseband flux controlled CZ gates <cit.>. 
We denote the Slepian pulses by {v_n^(k)(N,W), k=0,1,…,N-1}, where n=0,1,…,N-1 is the index of the pulse, k is the order of each pulse, and N and W are parameters referred to as the length and mainlobe width of the pulse, respectively. In our problem formulation, we are especially interested in the second Slepian pulse (k=1), because it is an anti-symmetric pulse by definition and has the largest energy concentration among all anti-symmetric Slepian pulses. We denote the second Slepian pulse (k=1) as w_sl2^NW[n]. We omit NW going forward for brevity when it is given in context. Chebyshev pulses minimize the mainlobe width given a specified sidelobe amplitude. They are characterized by their ability to provide a trade-off between mainlobe width and sidelobe amplitude, making them useful in applications such as filter design and spectrum analysis. Referred to as Chebyshev pulses I and denoted as w_ch1[n], these pulses are symmetric in time. In our exploration, we introduce a complementary, anti-symmetric variation named Chebyshev pulses II, denoted as w_ch2^β[n]. This variation is derived through weighted Chebyshev approximation (WCA), a technique for optimizing a polynomial approximation of a given function. Here, β serves as the parameter input for this approximation process. For brevity, we omit β going forward. Notably, Chebyshev pulses II share the equiripple sidelobe amplitude characteristic and exhibit only one ripple in the passband, mirroring the traits of Chebyshev pulses I. Further insights about the weighted Chebyshev approximation are summarized in Appendix <ref>. In the rest of the paper, we focus on the second Slepian pulses (k=1) and the Chebyshev pulses II, and refer to them as the Slepian pulses and the Chebyshev pulses respectively. § COMPARISON BETWEEN CHEBYSHEV- AND SLEPIAN-BASED TRAJECTORIES We have formulated the CPHASE gate design problem into a pulse design problem and further transformed it into the design of a control trajectory g̃[n]. Special attention is paid to the comparisons between the Chebyshev-based trajectories and the Slepian-based trajectories that will be defined in this section. The control trajectory g̃[n] needs to satisfy a normalization constraint, and also is specified to be anti-symmetric according to our discussion in Statement 3. The finite length constraint is naturally satisfied by finite-length, discrete-time pulses. The anti-symmetry constraint g̃[n]=-g̃[N-n] is also satisfied using the Slepian pulses w_sl2[n] and the Chebyshev pulses w_ch2[n]. Therefore, the control trajectories can be defined through a straightforward normalization of w_sl2[n] and w_ch2[n]. We denote the corresponding control trajectories by g̃_sl2[n] and g̃_ch2[n]. We propose the Chebyshev-based trajectory g̃_ch2[n] as an alternative to the Slepian-based trajectory g̃_sl2[n]. The major characteristic difference of the two control trajectories is that g̃_ch2[n] has equiripple sidelobe amplitude for all sidelobes, while g̃_sl2[n] has decreasing sidelobe amplitude. The argument is that we can allow higher sidelobe amplitudes in larger frequency components as long as they stay below a specified threshold, and therefore we can in turn decrease the concentration in smaller frequency components. In this way, g̃_ch2[n] can be designed to obtain a smaller ω^* while maintaining a sidelobe amplitude below some threshold. Here, ω^* denotes the smallest attainable frequency under certain constraints given some leakage threshold, as we specify in Statement 3. 
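To make the construction of these trajectories concrete, here is a rough sketch of how g̃_sl2[n] and g̃_ch2[n] could be generated numerically. The Slepian pulse comes from SciPy's dpss routine; the Remez (Parks-McClellan) routine is used only as a stand-in for the weighted Chebyshev approximation, and the band edges passed to it are illustrative guesses rather than the authors' design parameters β, so they would need tuning (and possibly more iterations) against the leakage threshold.

import numpy as np
from scipy.signal import remez
from scipy.signal.windows import dpss

N = 1001       # pulse length (mirrors the design example that follows)
NW = 2.9       # Slepian half time-bandwidth product (mirrors the example that follows)

# Second Slepian pulse (k = 1): anti-symmetric by construction.
w_sl2 = dpss(N, NW, Kmax=2)[1]

# Stand-in for the Chebyshev pulse II: an anti-symmetric equiripple design from
# the Remez exchange routine. Band edges and weights below are assumptions.
w_ch2 = remez(N, [0.001, 0.003, 0.006, 0.5], [1.0, 0.0], type="hilbert", fs=1.0)

def normalize(w):
    """Scale so that the first half sums to +1 (the constraint of Statement 3)."""
    half = np.sum(w[: len(w) // 2])
    return w / half

g_sl2 = normalize(w_sl2)
g_ch2 = normalize(w_ch2)

# Discrete-time Fourier transform magnitudes |G(e^{i w})| on a dense grid,
# from which the sidelobe levels and omega^* can be compared.
omega = np.linspace(0, np.pi, 4096)
def dtft_mag(g):
    n = np.arange(len(g))
    return np.abs(np.exp(-1j * np.outer(omega, n)) @ g)

G_sl2, G_ch2 = dtft_mag(g_sl2), dtft_mag(g_ch2)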
This eventually leads to the fact that the Chebyshev-based trajectory can be designed to be shorter than the Slepian-based trajectory. We show an example of designing the Chebyshev-based trajectory and the Slepian-based trajectory according to the given leakage error threshold. First, we determine the length N=1001 for both trajectories to be compared. Then we choose a half bandwidth NW=2.9 to determine the Slepian-based trajectory g̃_sl2[n] as a benchmark pulse. We then design g̃_ch2[n] so that its sidelobe amplitude γ_ch is such that γ_ch≤γ, where γ^2/4=10^-6.0 is the given leakage error threshold. As depicted in Figs. <ref>a-b, we compare the time-domain and frequency-domain representations of g̃_ch2[n] and g̃_sl2[n]. The dashed blue box area in Fig. <ref>b (expanded in Fig. <ref>c) shows that ω^*_ch2<ω^*_sl2. While satisfying the restriction on sidelobe amplitude, g̃_ch2[n] outperforms g̃_sl2[n] and features a smaller ω^*. Note that there exist impulses at both endpoints of the g̃_ch2[n] in Fig. <ref>a, which is a feature that contributes to the equiripple property in the frequency domain. The effect of these impulses will be filtered through numerical integration and interpolation when we transform g̃_ch2[n] to ε_ch2(t). § SIMULATION RESULTS We show time-domain simulation results of a CZ gate, utilizing the Slepian-based trajectories and the Chebyshev-based trajectories as defined in Section <ref>. §.§ Setting and procedure We consider two capacitively coupled transmon qubits, one of which is flux-tunable (QB1) and the other having a fixed frequency (QB2). The system Hamiltonian is H = ∑_i=1,2(ω_i a^†_i a_i+ α_i/2a^†_i a^†_i a_i a_i) +g(a^†_1 + a_1)(a^†_2 + a_2) , where a^†_i, a_i are the raising (creation) and lowering (annihilation) operators in the eigenbasis of the corresponding qubit, ω_i is the qubit frequency of QBi, and α_i is the anharmonicity of QBi. In our simulation, we choose the parameters ω_2 = 4.7 GHz, α_1=α_2=-300 MHz, and g=14.142 MHz. We set ω_1=5.8 GHz initially and tune ω_1 to perform a CZ gate. These parameters form a typical parameter set for transmon qubits and the operation of a CZ gate. In order to perform a CZ gate, we detune ω_1 so that ω_1+ω_2 ≈ 2ω_1+α_1. If we move exactly to the degeneracy point of diabatic states |11⟩ and |20⟩, then we have ω_1 = 5.0 GHz. We vary ω_1 from 5.8 GHz to approximately 5.0 GHz depending on the amplitude of the control pulse, and then back to 5.8 GHz. The procedure of our simulation is as follows: * Determine a control trajectory g̃_i[n]. * For each desired amplitude of the pulse, compute the corresponding control pulse ε_i[n] for g̃_i[n] and use an interpolation of ε_i[n] as a control pulse ε_i(t) to detune ω_1. * Simulate a CPHASE gate using QuTiP <cit.> for a range of desired amplitude and duration of ε_i(t). Calculate the phase accumulation and leakage error as a function of pulse duration and amplitude. * Collect the duration and amplitude pairs that obtain a phase accumulation ϕ=π. This is to ensure the implementation of a CZ gate. Determine the corresponding leakage error P_e as a function of pulse duration. §.§ Simulation examples In the following simulation examples, we define a normalized amplitude A = |ϵ_mid-ϵ_ini|/(5.8-5.0), where A=1 indicates that we go exactly to the degeneracy point of diabatic states |11⟩ and |20⟩, while A=0 indicates that we stay at the starting point. Here, we take 0 ≤ A ≤ 1. 
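A minimal QuTiP sketch of this setting is given below; it only builds the Hamiltonian above for a static ω_1 and prints the low-lying spectrum as QB1 is detuned toward the |11⟩-|20⟩ resonance. The three-level truncation and the explicit 2π factors are assumptions of this sketch; the full time-dependent simulation additionally drives ω_1(t) along the designed control trajectory and tracks the phase accumulation and leakage.

import numpy as np
import qutip as qt

# Parameters from the text (GHz); the 2*pi convention below is an assumption.
w2, alpha, g = 4.7, -0.300, 0.014142
nlev = 3                                   # truncation of each transmon to 3 levels

a1 = qt.tensor(qt.destroy(nlev), qt.qeye(nlev))
a2 = qt.tensor(qt.qeye(nlev), qt.destroy(nlev))

def H(w1):
    """System Hamiltonian for a given (static) QB1 frequency w1 in GHz."""
    H0  = w1 * a1.dag() * a1 + 0.5 * alpha * a1.dag() * a1.dag() * a1 * a1
    H0 += w2 * a2.dag() * a2 + 0.5 * alpha * a2.dag() * a2.dag() * a2 * a2
    H0 += g * (a1.dag() + a1) * (a2.dag() + a2)
    return 2 * np.pi * H0

# Energy spectrum as QB1 is detuned from 5.8 GHz toward the |11>-|20> resonance.
for w1 in np.linspace(5.8, 5.0, 5):
    evals = H(w1).eigenenergies() / (2 * np.pi)
    print(f"w1 = {w1:.2f} GHz, lowest eigenenergies (GHz): {np.round(evals[:6], 4)}")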
§.§.§ A comparison example Figure <ref> shows the leakage error P_e for a CZ gate as a function of pulse duration t_d. Note that the amplitude A of the control pulse is adjusted to ensure an exact phase accumulation of ϕ=π. Figures of phase accumulation and leakage error for a range of control pulse duration and amplitude can be found in Appendix <ref>. As we observe in Fig. <ref>, the leakage error P_e corresponding to g̃_ch2[n] appears generally below that of g̃_sl2[n] for t_d<∼60 ns. Since the latter generally decreases faster than the former as a function of pulse duration t_d, the leakage error P_e corresponding to g̃_sl2[n] begins to appear below that of g̃_ch2[n] for t_d>∼60 ns. This is true for all the simulation examples shown in Appendix <ref>. This feature agrees with what we see in theoretical analysis from Figs. <ref>c-d, except that the general trend of the leakage error P_e corresponding to g̃_ch2[n] also decreases slowly rather than remaining exactly flat. We attribute this difference between simulation and analysis to the numerical integration and interpolation in the process of transforming g̃[n] into ε(t). This reduces the impact of impulses at both ends of g̃_ch2[n] within ε_ch2(t), because the impulses significantly contribute to the equiripple sidelobe characteristic of g̃_ch2[n]. Potential operating points are taken to be the points insensitive to pulse duration, i.e., the “sweet spots” of the leakage error lobes as shown by the green squares and purple dots in Fig. <ref>. In addition, if there is a slight deviation in pulse duration at those points, we would almost surely obtain a lower leakage error regardless of the direction of the deviation. We determine the best operating point to be the one with the smallest pulse duration, indicated by the red circle in Fig. <ref>. Since g̃_ch2[n] generally pushes the leakage error lower in the range of relatively smaller pulse duration, we are able to achieve an operating point with lower leakage error while also maintaining a similar or even smaller pulse duration compared to g̃_sl2[n]. The operating points for g̃_sl2[n] and g̃_ch2[n] are t_d = 47.0 ns with P_e = 10^-4.66 and t_d = 46.1 ns with P_e = 10^-4.72, respectively. §.§.§ An aggregate of comparison examples We conduct more comparisons between various pairs of benchmark Slepian-based trajectories and designed Chebyshev-based trajectories. The main difference is that the leakage error thresholds are specified differently when designing the trajectories. We summarize the main results in this section. More details about the additional comparisons are shown in Appendix <ref>. We observe results similar to those in Section <ref> for a majority of the comparison examples. We find that g̃_ch2[n] can be designed to achieve a lower leakage error than its counterpart g̃_sl2[n] by roughly 6% in certain cases, while also maintaining smaller pulse duration by an average of 0.6 ns. In several abnormal cases, where an unusually small leakage lobe appears before the first main leakage lobe, g̃_sl2[n] in fact induces lower leakage error with smaller pulse duration. This indicates that simulation verification is important once a control trajectory is designed, because there can be discrepancies between analysis and simulation due to some approximation. 
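The selection of operating points from the simulated data can be summarized by a short helper of the following form. This is only a sketch: the (t_d, P_e) arrays are the ones extracted along the ϕ=π contour in the procedure above, the sweet spots are read as the local maxima of the leakage lobes as described in the text, and the tie-breaking conventions are assumptions.

import numpy as np

def sweet_spots(t_d, P_e):
    """Insensitive points of the leakage-error lobes: local maxima of P_e along
    the phi = pi contour, where a slight change in duration can only lower the
    leakage. Inputs are assumed sorted by t_d."""
    logP = np.log10(np.asarray(P_e))
    idx = [i for i in range(1, len(t_d) - 1)
           if logP[i] >= logP[i - 1] and logP[i] > logP[i + 1]]
    return np.asarray(t_d)[idx], np.asarray(P_e)[idx]

def best_operating_point(t_d, P_e):
    """Among the sweet spots, pick the one with the smallest pulse duration."""
    ts, ps = sweet_spots(t_d, P_e)
    k = int(np.argmin(ts))
    return ts[k], ps[k]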
§.§ Hardware constraints In this section, we study how hardware constraints, most notably, the sampling frequency and bandwidth of the arbitrary waveform generator (AWG), can affect the performance of the CZ gate. The hardware limitations are important, because the CZ gates we consider are based on fast-flux control, and it is necessary that current hardware be capable of implementing such fast pulses. Let F_s denote the sampling frequency and bw the bandwidth. Examples of state-of-the-art AWGs include QBLOX QCM with F_s=1 GSa/s and bw=400 MHz, Zurich Instrument SHFQC+ with specifications up to F_s=2 GSa/s and bw=800 MHz, QBLOX QCM with F_s=2 GSa/s and bw=800 MHz, Keysight M5300A with baseband sampling frequency F_s=4.8 GSa/s and bw=2 GHz, etc. Details on how we impose the hardware constraints in simulation are summarized in Appendix <ref>. We utilize the same comparison example of g̃_sl2[n] and g̃_ch2[n] as in Section <ref>. Fig. <ref> presents the simulation results of the CZ gate using g̃_sl2[n] and g̃_ch2[n] with different practical hardware parameters. The hardware parameters are listed in Table <ref>. When comparing Fig. <ref> against Fig. <ref>, it becomes evident that there is an overall increase in leakage error, regardless of the control trajectories. The difference between both trajectories also shrinks. As we enhance the hardware parameters, a better resemblance in the performance of the CZ gate to that without hardware constraints is observed. Table <ref> shows an aggregate of the best operating points using g̃_sl2[n] and g̃_ch2[n] in Fig. <ref> following the same argument as in Section <ref>. Comparing the best operating points, we argue that the advantage of g̃_ch2[n] over g̃_sl2[n] can be mostly recovered with F_s=5 GSa/s and bw=2 GHz. Thus, the proposal in this paper is readily implementable using off-the-shelf state-of-the-art hardware. § CONCLUSION AND OUTLOOK In this work, we formulate the problem of baseband flux control design of an adiabatic CPHASE gate in superconducting circuits as a pulse-design problem, and further as a control-trajectory-design problem. Building upon knowledge from other contexts of pulse-design problems, we propose the Chebyshev-based trajectory as an alternative to the widely used Slepian-based trajectory. We then analytically show the advantage of the Chebyshev-based trajectory by using a two-level system abstraction. Furthermore, we compare the performance of the two types of trajectories by numerically simulating a CZ gate in two capacitively coupled transmon qubits. Our simulation results show that the Chebyshev-based trajectory can be designed to induce lower leakage error by roughly 6% in certain cases, while maintaining similar or even smaller pulse duration. We note that in several cases the Slepian-based trajectory induces lower leakage error due to some abnormal phenomenon not existing in analysis. In addition, we study how practical hardware constraints including sampling frequency and bandwidth can influence the performance of the theoretically derived control pulses and affect the advantage of the Chebyshev-based trajectory over the Slepian-based trajectory in simulation. We find that the advantage can be mostly realized using state-of-the-art hardware. The design of the flux pulse plays a crucial role in achieving high fidelity and fast speed for baseband flux-based gates in superconducting circuits. 
The weighted Chebyshev approximation, a versatile technique devised for tailoring pulses based on specific requirements and constraints, emerges as a valuable tool to design pulse shapes. The perspectives in this research could have broader applicability for pulse-design problems of other types of quantum gates in superconducting circuits and other quantum computing hardware platforms. We gratefully acknowledge insightful conversations with Réouven Assouly, Youngkyu Sung and Junyoung An. This material is based upon work supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Quantum Systems Accelerator. Additional support is acknowledged from the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Co-design Center for Quantum Advantage (C2QA) under contract number DE-SC0012704. P.T.B. is exclusively supported by Mitsubishi Electric Research Laboratories (MERL). The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the U.S. Government. § THE CPHASE GATE We focus on the adiabatic implementation of CPHASE gate in tunable transmon qubits and describe in detail a common implementation using baseband flux pulses. The CPHASE gate is a two-qubit gate whose operation is represented by the unitary matrix U_CPHASE = [ 1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 e^iϕ ]. The CPHASE gate adds a term e^iϕ to the qubits only when both are in the excited state |11⟩. To be more specific, if the original state of the qubits is |00⟩, |01⟩ or |10⟩, the CPHASE gate effectively does nothing. If the original state of the qubits is |11⟩, it will be transformed into e^iϕ|11⟩ after the CPHASE gate operation. One implementation of the CPHASE gate relies on the avoided crossing between states |11⟩ and |20⟩ that occurs when two transmon qubits are coupled to each other. Consider a system of two capacitively coupled qubits as depicted in Fig. <ref>a, where QB1 is a flux-tunable transmon qubit while QB2 is a fixed-frequency transmon qubit. The Hamiltonian - including states with two excitations in addition to the four computational states - can be written in the |00⟩,|01⟩,|10⟩,|11⟩,|02⟩,|20⟩-basis as H = [ E_00 0 0 0 0 0; 0 E_01 g 0 0 0; 0 g E_10 0 0 0; 0 0 0 E_11 √(2)g √(2)g; 0 0 0 √(2)g E_02 0; 0 0 0 √(2)g 0 E_20 ], where E_ij is the energy of state |ij⟩ and g is the coupling strength with a factor of √(n) corresponding to the number of qubit excitations (n=1,2). Note that the frequency of QB1, and therefore the energies E_ij, depend on the external magnetic flux threading the SQUID loop. Fig. <ref>c shows an example of the energy spectrum of the system described by the Hamiltonian in Eq. <ref> as a function of the frequency detuning of QB1. We show energies of states |01⟩,|10⟩,|02⟩,|11⟩,|20⟩. The CPHASE gate is implemented by detuning the frequency of QB1 such that the instantaneoues energy of state |11⟩ follows the trajectory l(t) in Fig. <ref>d. To be more specific, we shift the frequency of QB1, thus in particular the energy of state |11⟩, bringing it into resonance with state |20⟩, which opens an avoided crossing due to the coupling. We then rewind the trajectory and return to the starting point. The trajectory l(t) corresponds to the change of the instantaneous frequency of QB1 as time evolves as shown in Fig. <ref>b. 
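A small numerical sketch of this spectrum is given below. The parameterization of the bare energies E_ij in terms of the qubit frequencies and a common anharmonicity is an assumption made for illustration, and the numbers reuse the simulation parameters from the main text.

import numpy as np

def spectrum(w1, w2=4.7, alpha=-0.300, g=0.014142):
    """Eigenenergies (GHz) of the 6-level Hamiltonian in the
    {|00>,|01>,|10>,|11>,|02>,|20>} basis; E_ij built from bare frequencies."""
    E00, E01, E10 = 0.0, w2, w1
    E11, E02, E20 = w1 + w2, 2 * w2 + alpha, 2 * w1 + alpha
    s2g = np.sqrt(2) * g
    H = np.array([
        [E00, 0.0, 0.0, 0.0, 0.0, 0.0],
        [0.0, E01, g,   0.0, 0.0, 0.0],
        [0.0, g,   E10, 0.0, 0.0, 0.0],
        [0.0, 0.0, 0.0, E11, s2g, s2g],
        [0.0, 0.0, 0.0, s2g, E02, 0.0],
        [0.0, 0.0, 0.0, s2g, 0.0, E20],
    ])
    return np.linalg.eigvalsh(H)

# Track the |11>-|20> avoided crossing as QB1 is detuned toward w1 ~ 5.0 GHz.
for w1 in (5.8, 5.4, 5.1, 5.0):
    print(w1, np.round(spectrum(w1), 4))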
In the adiabatic implementation of the process, we deliberately design l(t) to detune the frequency of QB1 slowly enough so that there is only small leakage from |11⟩ to |20⟩ throughout the whole process. We note that due to the presence of the avoided crossing, the energy of state |11⟩ is pushed lower than would be expected in an uncoupled system. This is the origin of an additional phase accumulation that only occurs for state |11⟩, leading to the conditional phase accumulation. This process can be represented by a unitary matrix in the computational basis U_raw = [ 1 0 0 0; 0 e^iϕ_01 0 0; 0 0 e^iϕ_10 0; 0 0 0 e^iϕ_11 ], where ϕ_ij is the accumulated phase ϕ_ij = ∫_0^t_dω_ij(t)dt , with t_d denoting the duration of the process. The way we shift the frequency of QB1 is by changing the external magnetic flux Φ_ext threading the SQUID loop of QB1. The qubit frequency of QB1 ω_1 as a function of Φ_ext is given by <cit.> ω_1(Φ_ext) = 1/ħ(√(8E_JE_C)√(d^2+(1-d^2)cos^2(πΦ_ext/Φ_0)) -E_C) , where E_J=E_J1+E_J2 is the sum of the Josephson energies of the two junctions, namely E_J1, E_J2. E_C is the charging energy. Φ_0 is the superconducting flux quantum. d is the junction symmetry parameter defined as d= |E_J2-E_J1|/E_J2+E_J1 . In order to obtain the CPHASE gate as in Eq. <ref>, two single-qubit gates R_z(-ϕ_01) and R_z(-ϕ_10) need to be implemented to each qubit to cancel the phase accumulated by states |01⟩ and |10⟩. Therefore, the whole operation can be represented by U_CPHASE' = [ 1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 e^iϕ' ], where ϕ'=ϕ_11-ϕ_01-ϕ_10. If there were no coupling between the two qubits, ϕ' = 0. Because of the effect of the coupling between the two qubits, a nonzero phase will be acquired. The way the instantaneoues energy of state |11⟩ is varied determines the value of ϕ'. By choosing a suitable l(t) as depicted in Fig. <ref>d, in principle we can always have ϕ'=ϕ for any arbitrary desired phase ϕ. When ϕ=π, the operation is named a CZ gate. There is an alternative way of implementing a CPHASE as opposed to the adiabatic method, which we refer to as a non-adiabatic implementation. Instead of gradually detuning the frequency of QB1, this approach involves making a sudden transition to the CPHASE operating point near the avoided crossing of |11⟩ and |20⟩. After a waiting period of time t = π/√(2)g, the state undergoes a single Larmor-type rotation from |11⟩ to |20⟩ and then back to |11⟩. During this process, an overall conditional phase accumulation is obtained. § DERIVATION OF FORMULA FOR LEAKAGE ERROR Two approaches to calculating the leakage error are presented based on the discussions in Ref. <cit.>. We first take advantage of the Bloch sphere representation and give an approximate but more intuitive solution from a geometric perspective. Then we go through a mathematical derivation and provide an analytical formula. We will comment on the efficacy of the formula by discussing the relationship of this formula to the more general Landau-Zener formulation <cit.>. §.§ Geometric approach Recall that in Eq. <ref>, θ(t) is defined as θ(t) = arctan (Δ/ε(t)). In Fig. <ref>a we introduce a control vector θ⃗ representing the control variable θ(t) in terms of Δ and ε(t). Correspondingly, in Fig. <ref>b we show an instantaneous basis vector |11'⟩, which represents the ground state of the instantaneous Hamiltonian H as θ(t) varies, in parallel to the control vector θ⃗. 
As time progresses, we change our frame reference to coincide with the frame whose basis vectors are the eigenstates of the instantaneous Hamiltonian H. We first show how the state evolves in an infinitesimal time δ t. Suppose the angle between the initial state |ψ⟩ at time t_0, represented by the gray Bloch vector in Fig. <ref>b, and the z-axis is θ_0. Ideally, |ψ⟩ is aligned with the instantaneous ground vector |11'⟩ at time t_0. After δ t, a δθ change in the angle between instantaneous ground vector |11'⟩ and the z-axis takes place. If we switch into the new reference frame, the state vector |ψ⟩ deviates from the basis vector by -δθ and therefore starts to precess around the basis vector at frequency ω, where ω refers to the eigenenergy difference of the instantaneous Hamiltonian. Therefore, during the infinitesimal time δ t, the state vector |ψ⟩ will pick up a deviation from the ground vector |11'⟩ by δχ = -δθ e^-iωδ t. Next we consider a series of infinitesimal time δ t's. A simple approach is to move into the reference frame along with the control vector θ⃗ and correspondingly the instantaneous ground vector |11'⟩. Thus, the whole process can be viewed as the state vector |ψ⟩ deviating from the basis vector by a series of -δθ_j's with an angle rotation ϕ_j=∑_i ω_i δ t, which is the accumulated phase up to the j-th δ t. Since the angle rotation is orthogonal to the -δθ_j deviation, and both are sufficiently small in the adiabatic limit, we can accumulate them independently, i.e., χ = ∑_j -δθ_j e^-iϕ_j Fig. <ref>c is a bird’s-eye view plot supposing we stare at and stay in a moving frame with the control vector θ⃗. The vertical axis Imag(χ) coincides with the longitude axis in the x-z plane, while the horizontal axis Real(χ) denotes the latitude axis orthogonal to the x-z plane. The origin represents the control vector θ⃗. We plot the accumulated deviation χ as a function of time according to Eq. <ref>, using a Slepian-based control trajectory. Change the ∑ symbol into the ∫ symbol and the δ symbol into the d symbol, as in elementary calculus, and we have χ = -∫dθ e^-i∫^t ω(t') dt' = -∫dθ/dt e^-i∫^t ω(t') dt'dt . Therefore, the leakage error rate P_e can be calculated as 1 minus the probability of measuring the state |ψ⟩ in the instantaneous ground state |11'⟩ P_e =1-(cosarcsin|χ|/2)^2 =(sinarcsin|χ|/2)^2 ≈ |χ|^2/4 . where the approximation holds valid when χ is sufficiently small. We plug the calculated χ as depicted in Fig. <ref>c into Eq. <ref> and find that the analytical leakage error matches well with the simulation result throughout the process. Note that in this geometric derivation we assume that the changes at different infinitesimal time δ t's can be summed linearly. This approximation holds valid so long as the net overall change χ is small. In fact, in this research we concentrate on adiabatic control, and therefore, we are always interested in small leakage error rate P_e incurred throughout the process. This small P_e corresponds to the fact that χ should be small. §.§ Analytical approach We continue to present an analytical approach to deriving the leakage error. Recall that in Section <ref>, the abstracted two-level system can be described by the Hamiltonian in Eq. <ref>. We have also defined a control variable θ(t) = arctan (Δ/ε(t)). A pictorial representation of the control variable θ(t) in terms of Δ and ε(t) is shown in Fig. <ref>a. Consider a state |ψ⟩ given by |ψ⟩=α_0 |11⟩+β_0 |20⟩, where |11⟩ and |20⟩ denote the basis vectors of the σ_z-basis (z-axis). 
Suppose there exists another basis which rotates around the y-axis by an angle θ relative to the σ_z-basis (z-axis). In this new basis, the state |ψ⟩ can be rewritten as |ψ⟩ =α|11'⟩+β|20'⟩ , where α =α_0 cosθ/2+β_0 sinθ/2 , β =β_0 cosθ/2-α_0 sinθ/2 , and |11'⟩ and |20'⟩ are the basis vectors of the new basis, which we now refer to as the θ-rotated basis. Correspondingly, we refer to the Bloch sphere with the θ-rotated basis as the θ-rotated Bloch sphere. In the θ-rotated basis, we denote the eigenvalues of the basis states as ±ω/2. In a static (non-rotating) frame, the Bloch vector will precess around the θ-rotated basis axis, which results in a phase induced time derivative α̇ = -iω/2α , β̇ = iω/2β . However, if we let θ(t) varies as a function of time, we will have an additional term in the time derivative of α and β respectively α̇ = -iω/2α + (-α_0 sinθ/2+β_0cosθ/2)(1/2θ̇) = -iωα+βθ̇/2 , β̇ = iω/2β + (-α_0 cosθ/2-β_0 sinθ/2) (1/2θ̇) = iωβ-αθ̇/2 , where we use the “dot” notation ẋ as a shorthand to denote the time derivative of x. Now let us denote α = cosΘ/2 and β = e^iϕsinΘ/2 where Θ and ϕ are the spherical coordinates on the θ-rotated Bloch sphere. Here we have omitted an overall phase. Note that α^∗β = cosΘ/2sinΘ/2 e^i ϕ =sinΘ e^iϕ/2 . It is interesting to see how α^∗β evolves over time. We proceed by taking the time derivative of α^∗β d/dt(α^∗β) = α̇^̇∗̇β+α^∗β̇ = [-iωα+βθ̇/2]^∗β + α^∗iωβ-αθ̇/2 = iωα^∗+β^∗θ̇/2β + α^∗iωβ-αθ̇/2 = iωα^∗β + |β|^2-|α|^2/2θ̇ . Since (|β|^2-|α|^2)^2 = |β|^4 +|α|^4 - 2|β|^2|α|^2 = |β|^4 +|α|^4 + 2|β|^2|α|^2 - 4|α^∗β|^2 = 1-4|α^∗β|^2 , we then have |β|^2-|α|^2 = ±√(1-4|α^∗β|^2) . Substituting Eq. <ref> into Eq. <ref>, we have d/dt(α^∗β) = iωα^∗β±θ̇/2√(1-4|α^∗β|^2) . If we substitute α^∗β = e^iϕsinΘ /2 as in Eq. <ref>, we will have d/dt(sinΘ e^iϕ) = iωsinΘ e^iϕ±θ̇cosΘ . Substituting ϕ = ϕ' + ∫^t ω(t')dt', where ϕ' is some initial reference phase, in Eq. <ref>, we have LHS = d/dt(sinΘ e^iϕ') e^i∫^t ω(t')dt' + iωsinΘ e^iϕ , RHS = iωsinΘ e^iϕ±θ̇cosΘ . Therefore, we can derive d/dt(sinΘ e^iϕ') e^i∫^t ω(t')dt' =±θ̇cosΘ . which can be rewritten as d(sinΘ e^iϕ') = ±θ̇cosΘ e^-i∫^t ω(t')dt'dt . If we integrate both sides of Eq. <ref>, we will have sinΘ e^iϕ' = ±∫cosΘdθ/dte^-i∫^t ω(t')dt'dt . Now we can write the leakage error P_e throughout the whole dynamic process as P_e = |β|^2 = |sinΘ/2|^2 ≈|sinΘ e^iϕ'/2|^2 = |∫cosΘdθ/dte^-i∫^t ω(t')dt'dt |^2/4 ≈|∫dθ/dte^-i∫^t ω(t')dt'dt |^2/4 . Note that Eq. <ref> and Eq. <ref> differ by a factor of cosΘ. Here, the term cosΘ is due to the geometry of the θ-rotated Bloch sphere relative to the original Bloch sphere whose basis vectors are |11⟩ and |20⟩. Since the whole process is performed adiabatically, the error is sufficiently small and hence Θ is sufficiently small, and therefore, the approximations in Eq. <ref> are valid. §.§ Relationship to the Landau-Zener formula Consider a two-level system described by the Hamiltonian in Eq. <ref> and the energy diagram shown in Fig. <ref>. Consider that the system is initially prepared in state |ψ_-⟩ with ε(t) → -∞. Then ε(t) increases in time and sweeps through the avoided crossing and eventually ε(t) → +∞. According to the Landau-Zener probability of transition <cit.>, we can derive the probability that the system will undergo a transition to |20⟩ for the simple case where ε(t) = α t with α being a positive constant P_LZ = e^-πΔ^2/ 2 α . In Eq. <ref>, we do not assume that ε(t) = α t increases linearly with time. 
However, if we were to make this assumption, we could compute the transition probability P_eLZ from Eq. <ref> and compare it to P_LZ in Eq. <ref>. We further let that ω(t)=Δ to simplify the comparison, i.e., P_e = |∫dθ/dte^-iΔ tdt |^2/4 . First we compute the time derivative of θ(t) dθ/dt = 1/1+(Δ/α t)^2×-Δ/α t^2 = -Δ/α t^2+Δ^2/α = -Δ/α/t^2+(Δ/α)^2 . Then we compute the integral within |·| in Eq. <ref> and substitute Δ=Δ/ħ (let ħ=1) ∫dθ/dte^-iΔ tdt =∫-Δ/α/t^2+(Δ/α)^2 e^-iΔ tdt =-π e^-ΔΔ/α =-π e^-Δ^2/α . Therefore, we have P_eLZ = π^2/4e^-2Δ^2/α . The relationship between P_LZ in Eq. <ref> and P_eLZ in Eq. <ref> can be written as log P_LZ = π/4(log P_eLZ - C) . where C=log (π^2/4) is some constant. § NONLINEAR TIME FRAME TRANSFORMATION We review a technique as proposed in Ref. <cit.>, which we term as nonlinear time-frame transformation. We note that in Ref. <cit.>, the authors did not consider a time frame transformation. Recall that in Eq. <ref> we have landed on the leakage error with valid approximations P_e = |∫dθ/dte^-i∫^t ω(t')dt'dt |^2/4 . This formula for leakage error is quite complicated as there exists another integral within an integral and one of the integrals is itself an imaginary exponent. However, in the event that ε(t) only changes slightly, i.e., max_t|ε(t)|-min_t|ε(t)|≈ 0, ω(t') ≈ω_x with ω_x a constant. We can therefore make a further approximation P_e = |∫dθ/dte^-iω_x tdt |^2/4 . The term inside |·| in Eq. <ref> is nothing but the Fourier transform of dθ/dt evaluated at ω_x. This is great because we have now a very simple evaluation of the leakage error in terms of the control trajectory dθ/dt during the process. In order to generalize the simple form to an arbitrary ε(t), a nonlinear time frame τ is introduced, where at any time t ω_τdτ = ω(t)dt , with ω_τ a constant. Clearly, τ=τ(t) is some nonlinear function of t. Plug Eq. <ref> into Eq. <ref> and we have P_e = |∫dθ̃/dτe^-i∫^τω_τdτ'dτ|^2/4 = |∫dθ̃/dτe^-iω_ττdτ|^2/4 . In this way, we achieve a simple form of the leakage error as in Eq. <ref>, and can design dθ̃/dτ and hence θ̃(τ) in the nonlinear time frame τ using approaches proposed in Section <ref> and <ref>. In Eq. <ref>, if we rearrange the terms by dividing both sides by ω(t), we have dt = ω_τ/ω(t)dτ = ω_τ/ω(τ)dτ , where we change the variable of ω(·) from t to τ. Integrate both sides and we have t(τ) = ∫_0^τdt = ∫_0^τω_τ/ω(τ')dτ' . If we further set ω_τ = Δ, we will have ω_τ/ω(τ') = Δ/ω(τ') = sinθ̃(τ') , where θ̃(τ) is already known by design. Then we compute t(τ) = ∫_0^τsinθ̃(τ')dτ' . Now that we have θ̃(τ) and t(τ), we can numerically solve for θ(t) in the original time frame t. § VALIDITY OF EQ. <REF> We evaluate the validity of using Eq. <ref> with the nonlinear time-frame transformation as discussed in Appendix <ref> for the leakage error. We proceed by comparing the analytically calculated leakage error P_e-ana and the leakage error P_e-sim by simulating a two-level system, using examples of the Slepian-based trajectory and the Chebyshev-based trajectory. In Figs. <ref>a-b, we show the time-domain representations of two control trajectories, namely an example of the Slepian-based trajectory and an example of the Chebyshev-based trajectory. We first calculate the analytical P_e-ana using Eq. <ref> considering the nonlinear time-frame transformation. The calculation is performed for a range of pulse duration. 
Then we perform a CZ gate type simulation (except that we do not consider the accumulation of a certain phase) using a two-level system and keep track of the leakage error throughout the process. The simulation is also performed for a range of pulse duration. In Figs. <ref>c-d, we show the comparison of the analytically calculated leakage error P_e-ana and simulated leakage error P_e-sim for the two control trajectories in Figs. <ref>a-b respectively. We observe that P_e-sim (Slepian) manifests a feature of monotonically decreasing sidelobes as predicted by P_e-ana (Slepian), while P_e-sim (Chebyshev) exhibits relatively flat (slightly decreasing) sidelobes which should be exactly equiripple as predicted by P_e-ana (Chebyshev). In addition, the sidelobes of P_e-ana and P_e-sim of both examples oscillate at a very close, if not exactly the same, frequency, which indicates the validity of Eq. <ref> in the sidelobe region. However, the validity of Eq. <ref> in the mainlobe region appears compromised. This is because we set Δ=50 MHz, and when the pulse duration is less than approximately 1/Δ=20 ns, the whole process is essentially not in the adiabatic limit. This feature is not of much concern since what we really care about is the sidelobe characteristic when designing and comparing the Slepian and the Chebyshev trajectories so that we can push for a faster gate while keeping the process in the adiabatic regime. Also, we should note that the absolute value of P_e-ana is not meaningful since it is calculated using a normalized control trajectory. We require control trajectories to be subject to the same normalization as discussed in Section <ref> to make sure the comparison is valid. § FORMULA FOR ACCUMULATED PHASE As discussed in Section <ref>, the primary characteristic of the CPHASE gate is to accumulate some phase ϕ. We derive a formula for the accumulated phase ϕ in the abstracted two-level system. Recall the Hamiltonian defined in Eq. <ref> and the eigenenergies defined in Eq. <ref>. Now let Δ=0 such that no coupling exists between the two levels. The eigenenergies are E_11'=-ε(t)/2 and E_20'=ε(t)/2, as depicted by the dashed lines in Fig. <ref>. The difference between the eigenenergies of the ground state Δ E = E_11-E_11' with and without coupling is what determines the phase accumulation in the CPHASE gate. We can recast the eigenenergy difference in terms of θ(t) Δ E = E_11-E_11' = 1/2(ε(t)-√(ε(t)^2+Δ^2)) = - Δ/2tanθ(t)/2 . The accumulated phase ϕ is the integral of the energy difference Δ E through the trajectory ϕ= ∫Δ E dt = - ∫Δ/2tanθ(t)/2dt . By designing the shape of the control trajectory for θ(t), we can in principle apply an arbitrary CPHASE gate. § THE SLEPIAN PULSES AND THE CHEBYSHEV PULSES I AND II §.§ The Slepian pulses The Slepian pulses, also known as discrete prolate spheroidal sequences (DPSSs), are a set of orthogonal pulses intended for the problem of maximal concentration in both the time domain and the frequency domain. Heisenberg’s uncertainty principle <cit.> implies that pulses cannot be confined in both the time domain and the frequency domain. It is then natural to ask: how to optimally concentrate the energy in one domain if the pulse is strictly confined in the other domain. This problem, both in continuous time and in discrete time, was pursued and solved by Slepian, Landau, and Pollack <cit.>. Here we briefly review the development and analysis of the Slepian pulses for the discrete-time case. 
Consider a finite-length, discrete-time pulse x[n], n=0,1,…,N-1, which is specified to have finite energy, i.e., E = ∑_n=0^N-1|x[n]|^2<∞ , where E denotes the energy of the pulse x[n]. The discrete-time Fourier transform of the pulse x[n] is given by X(e^iω)=∑_n=0^N-1x[n]e^-iω n . Let 0 < W < π. The ratio λ∈ [0,1] that measures the percentage of the energy contained in the frequency band [-W,W] over the total energy is defined as λ = ∫_-W^W|X(e^iω)|^2 dω/∫_- π^π|X(e^iω)|^2 dω . The goal is to find the pulse x[n] that maximizes λ for all pulses x[n], n=0,1,…,N-1 of length N. The Slepian pulses {v_n^(k)(N,W), k=0,1,…,N-1} are the solutions to the optimization problem stated above <cit.>, where n=0,1,…,N-1 is the index of the pulse, k is the order of each pulse, and N and W are parameters referred to as the length and mainlobe width of the pulse, respectively. The Slepian pulses can be derived from the real solutions to the system of equations ∑_m=0^N-1sin2π W (n-m)/π (n-m)v_m^(k)(N,W) = λ_k(N,W)v_n^(k)(N,W) , for n, m=0,1,…,N-1 and k=0,1,…,N-1. When n=m, this simplifies to sin2π W (n-m)/π (n-m)=2W . Eqs. <ref> can also be written in the matrix form A v^(k) = λ_k v^(k) , where A_n,m=sin2π W (n-m)/π (n-m) , v^(k) = [v_0^(k)(N,W),v_1^(k)(N,W),…,v_N-1^(k)(N,W)]^T . Eq. <ref> is an eigenvalue problem, where λ_k's are the N distinct eigenvalues and v^(k)'s are the corresponding eigenvectors. By convention the eigenvalues are ranked as 1>λ_0>λ_1>…>λ_N-1>0. Therefore, the sequence v_n^(0)(N,W) that corresponds to the largest eigenvalue λ_0 is referred to as the first Slepian pulse. Each successive Slepian pulse maximizes λ while being orthogonal to the Slepian pulses preceding it. The time and frequency-domain representations of the first and second Slepian pulses for N=25 and NW=3 are shown in Figs. <ref>a-b. The magnitude of the Fourier transform is normalized to be 1 at ω=0 for the first Slepian pulse, while for the second Slepian pulse, it is normalized so that the peak magnitude of the mainlobe is 1. Compared to the rectangular pulse and raised cosine pulses, we find that the Slepian pulses have a relatively low sidelobe amplitude and small mainlobe width, which makes them a good candidate when a compromise between the sidelobe amplitude and mainlobe width is required. To maintain consistent notation, we denote the first Slepian pulse (k=0) as w_sl1^NW[n]= v_n^(0)(N,W), 0≤ n≤ N-1 0, otherwise , and the second Slepian pulse (k=1) as w_sl2^NW[n]= v_n^(1)(N,W), 0≤ n≤ N-1 0, otherwise , where the superscript NW may be omitted when it is given in context. §.§ The Chebyshev pulses Dolph formulated and solved the problem of finding a pulse that minimizes the mainlobe width given a specified sidelobe amplitude (or vice versa), in the context of antenna array design <cit.>. The optimal solution to this problem is known as the Chebyshev pulse. The Chebyshev pulse is based on the Chebyshev polynomials of the first kind defined as T_n(x)=cos(n arccos(x)) |x|≤ 1 cosh(n arccosh(x)) x≥ 1 (-1)^ncosh(n arccosh(-x)) x≤ -1 , where n denotes the order of the Chebyshev polynomials. Plugging in the values n=0 and n=1, we have T_0(x)=1 and T_1(x)=x. Using the double angle trigonometric identity, i.e., cos2θ=2cos^2θ-1 or cosh2θ=2cosh^2θ-1, the following recurrence relation can be verified T_n(x)=2xT_n-1(x)-T_n-2(x), n ≥ 2 . 
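The recurrence can be checked directly against the trigonometric definition with a few lines of code (a sketch; the test points are arbitrary):

import numpy as np

def T(n, x):
    """Chebyshev polynomial of the first kind via the recurrence above."""
    x = np.asarray(x, dtype=float)
    Tm2, Tm1 = np.ones_like(x), x          # T_0 and T_1
    if n == 0:
        return Tm2
    for _ in range(2, n + 1):
        Tm2, Tm1 = Tm1, 2 * x * Tm1 - Tm2  # T_k = 2 x T_{k-1} - T_{k-2}
    return Tm1

# Cross-check against the trigonometric definition on |x| <= 1.
x = np.linspace(-1, 1, 7)
n = 5
assert np.allclose(T(n, x), np.cos(n * np.arccos(x)))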
It can be further shown that T_n(x) is an nth-order polynomial in x, i.e., T_n(x) can be equivalently written as the ordinary polynomial T_n(x)=∑_k=0^nb[k]x^k , for some coefficients b[k], k=0,1,…,n. T_n(x) is even or odd according to whether n is even or odd. T_n(x) oscillates between -1 and 1 when -1≤ x≤ 1 and is monotonic when x≥1 or x≤ -1. The Chebyshev pulse w_ch1[n] can be defined through its Fourier transform W_ch1(e^iω) = e^-iωN-1/2T_N-1(x_0cos(ω/2))/T_N-1(x_0) , where N denotes the length of the pulse, and x_0>1 is a parameter related to the sidelobe amplitude of W_ch1(e^iω). Let ω_s be such that x_0cos(ω_s/2)=1. As ω increases from 0 to ω_s, the argument of the numerator in Eq. <ref>, i.e., x_0cos(ω/2), decreases from x_0 to 1, and thus W_ch1(e^iω) decreases from 1 to 1/T_N-1(x_0):=r. As ω increases from ω_s to π, W_ch1(e^iω) will oscillate between -r and r. Utilizing trigonometric identities and considering that T_n(x) is an nth-order polynomial in x, it can be shown that Eq. <ref> can further be written in a more structured form W_ch1(e^iω) = ∑_n=0^N-1w_ch1[n]e^-iω n , where w_ch1[n], n=0,1,…,N-1 are the coefficients of the Chebyshev pulse. The Chebyshev pulse coefficients can also be evaluated from the inverse Fourier transform of Eq. <ref>. The explicit analytical formula is given by w_ch1[n]=1/N[ 1+2r∑_k=0^N_s(-1)^k T_N-1(x_0 cosπ k/N) cos(2π k/L(n+1/2))], n=0,1,…,N-1 , where r=1/T_N-1(x_0) is as defined earlier, and N_s=N-1/2 N odd N/2-1 N even . The time-domain and frequency-domain representations of the Chebyshev pulses for N=25 and different specified sidelobe amplitudes (10^-3 for Chebyshev I, 1 and 10^-4 for Chebyshev I, 2) are shown in Fig. <ref>c-d. The magnitude of the Fourier transform is normalized to be 1 at ω=0. One important characteristic of the Chebyshev pulse is the equiripple sidelobe amplitude for all sidelobes. From Fig. <ref>d, we can observe that as the sidelobe amplitude of the Chebyshev pulse is specified to be lower, its mainlobe width will be larger. In Appendix <ref>, we will show that the Chebyshev pulse is a special case of the result of the weighted Chebyshev approximation. In the following sections, when it is necessary to discriminate between the Chebyshev pulse discussed in this section and the Chebyshev pulse II to be introduced in Section <ref>, we will refer to the Chebyshev pulse as the Chebyshev pulse I to avoid confusion. §.§ The Chebyshev pulses II We define an anti-symmetric counterpart of the Chebyshev pulse I w_ch1[n] which we refer to as the Chebyshev pulse II w_ch2^β[n], using the weighted Chebyshev approximation, a method for finding a polynomial that best approximates a given function in a weighted sense. Here, β denotes the parameters that we feed into the weighted Chebyshev approximation problem. See Appendix <ref> for more details. For simplicity, we will omit β. The Chebyshev pulse II shares the same characteristics of equiripple sidelobe amplitude and only one ripple in the passband as the Chebyshev pulse I. The time-domain and frequency-domain representations of an example of the Chebyshev pulse II for N=25 are shown in Figs. <ref>e-f. The magnitude of the Fourier transform is normalized so that the peak magnitude of the mainlobe is 1. We note a special feature of the Chebyshev pulses I and II. The equiripple property in the frequency domain is enforced by the specifications of the Chebyshev pulses. Nonetheless, it carries the potential drawback of introducing “impulses” at the window endpoints. For example, in Figs. 
<ref>c, <ref>e, both endpoints of each of the Chebyshev pulses I and II are not approaching zero. The same is true for the first and second Slepian pulses, i.e., their endpoints are not zero either. In other cases, such as what we show in Section <ref>, this feature can be more notable. For analysis purposes, it is an important feature because this is the primary reason for the equiripple property. However, as we transform the Chebyshev pulses II into the change of qubit frequency in simulation, the effect of these “impulses” will diminish due to interpolation and integration. For example, Figs. <ref>a-b show a Chebyshev-based trajectory and its corresponding ε_ch2[n], which is the control pulse for the CPHASE gate. The “impulses” do not manifest in ε_ch2[n]. This is good because we cannot implement sharp jumps in frequency changes of qubits. § WEIGHTED CHEBYSHEV APPROXIMATION We review the basics of the weighted Chebyshev approximation (WCA) in the context of finite-length, discrete-time pulse design. Let h[n], n=0,1,…,N-1, be a real-valued finite-length, discrete-time pulse of length N defined over the discrete-time interval 0 ≤ n ≤ N-1. The Fourier transform of h[n] is H(e^iω)=∑_n=0^N-1h[n]e^-iω n . H(e^iω) can also be written in terms of its amplitude and phase H(e^iω) = A(ω)e^iϕ(ω) , where A(ω) and ϕ(ω) are real-valued functions of ω. We further require that h[n] be symmetric or anti-symmetric. Here, when h[n] is referred to as being symmetric, it means h[n]=h[N-1-n], n=0,1,…,N-1 . Similarly, when h[n] is referred to as being anti-symmetric, it means h[n]=-h[N-1-n], n=0,1,…,N-1 . Depending on the value of N being odd or even and h[n] being symmetric or anti-symmetric, there exist four cases of pulses h[n]. With the symmetry constraints, it can be shown that ϕ(ω) can be written in the form of ϕ(ω) = C+Bω, which is a linear function of ω, where C and B=-N-1/2 are real-valued. Therefore, the Fourier transform of the four cases of pulses can be written in the form H(e^iω) = A(ω) e^iCe^iBω . Values of C and forms of A(ω) are given in Table <ref>. Note that the forms of A(ω) are either a sum of cosines or sines, with the argument being either ω n or ω (n-1/2). Utilizing basic trigonometric identities, the forms of A(ω) for all four cases can be rewritten in the form A(ω)=Q(ω)P(ω), where Q(ω) is specific to each case and P(ω) is always a sum of cosines. Forms of Q(ω) and P(ω) are given in Table <ref>. Having established the notations, the Chebyshev approximation problem may be stated as follows. Given a disjoint union of frequency bands of interest ℱ⊂ [0,π], a desired function D(ω) defined and continuous on ℱ, a positive weighting function W(ω) defined and continuous on ℱ, and a desired choice of one of the four cases of h[n], the minimum of the following quantity ||E(ω)||:=max_ω∈ℱ W(ω)|D(ω)-A(ω)| , and the corresponding h[n] are desired. Here, E(ω):=W(ω)|D(ω)-A(ω)| is referred to as the weighted approximation error and the optimization problem is a minimax problem of E(ω). Considering we have the form A(ω)=Q(ω)P(ω), we can rewrite the weighted approximation error as E(ω) =W(ω)|D(ω)-A(ω)| =W(ω)|D(ω)-Q(ω)P(ω)| =W(ω)Q(ω)|D(ω)/Q(ω)-P(ω)| . Note that Eq. <ref> is valid except possibly at ω = 0 or π. To avoid those scenarios where Q(ω)=0, it suffices to restrict that ℱ⊂ [0,π) for Case 2 problems, ℱ⊂ (0,π) for Case 3 problems, and ℱ⊂ (0,π] for Case 4 problems. Let Ŵ(ω)=W(ω)Q(ω) and D̂(ω)=D(ω)/Q(ω), and we have E(ω)=Ŵ(ω)|D̂(ω)-P(ω)| . With the form of weighted approximation error in Eq. 
<ref>, one algorithmic solution to the above mentioned problem makes use of the alternation theorem, the Remez exchange algorithm, and/or the Parks-McClellan algorithm <cit.>. Other solutions make use of linear programming with additional constraints <cit.>. We refer the readers to the included references for more details. The solution for designing pulses in the minimax sense as in Eq. <ref> is often in a numerical form without explicit analytical form. However, with a special set of ℱ⊂ [0,π], D(ω) and W(ω), and with h[n] specified to be Case 1 or 2, the solution coincides with the Chebyshev pulse I discussed in Section <ref>. In other words, the Chebyshev pulse I is a special case in the weighted Chebyshev approximation problem. We refer the readers to Chapter 3 of Ref. <cit.> for more details. Figs. <ref>a-b show the time-domain and frequency-domain representations of an example of using the weighted Chebyshev approximation (WCA) to design a pulse, which coincides with the Chebyshev pulse I with sidelobe amplitude specified to be 10^-3 in Fig. <ref>c. Note that there is only one ripple in the passband, which is otherwise referred to as the mainlobe. This is also one of the reasons why we name the Chebyshev pulses II, since they are an anti-symmetric counterpart of the Chebyshev pulses I, and both can be considered as a special result of the weighted Chebyshev approximation problem. If we provide an appropriate set of ℱ⊂ [0,π], D(ω) and W(ω), but specify h[n] to be Case 3 or 4, the optimal solution to the weighted Chebyshev approximation problem will be an anti-symmetric counterpart of the Chebyshev pulse I w_ch1[n], which we refer to as the Chebyshev pulse II w_ch2^β[n], where β = [ℱ, D(ω), W(ω)]. Note that it is not necessarily true that the solution given by the weighted Chebyshev approximation will always be an instance of the Chebyshev pulse II, for any set of ℱ⊂ [0,π], D(ω) and W(ω). In order to find an appropriate Chebyshev pulse II w_ch2[n], the parameters need to be properly chosen. In Ref. <cit.>, the authors demonstrate the design process of w_ch2[n] through an illustrative example. § PHASE ACCUMULATION AND LEAKAGE ERROR OF THE EXAMPLE IN SECTION <REF> Figures <ref>a, <ref>c present the phase accumulation and leakage error for a range of control pulse duration t_d and amplitude A using the g̃_sl2[n] shown in Fig. <ref>a. As a comparison, Figs. <ref>b, <ref>d present the phase accumulation and leakage error for a range of control pulse duration t_d and amplitude A using the same g̃_ch2[n] as shown in Fig. <ref>a. Following the procedure in Section <ref>, we first find all the amplitude and duration pairs that result in the phase accumulation ϕ=π, described by the red dashed curve in Figs. <ref>a-b. Then we determine the corresponding leakage error data points described by the yellow dashed curve in Figs. <ref>c-d. § ADDITIONAL SIMULATION RESULTS We present additional simulation comparison results between different pairs of benchmark Slepian-based trajectories g̃_sl2[n] and Chebyshev-based trajectories g̃_ch2[n] in Fig. <ref>. We follow the simulation and analysis procedure as discussed in Section <ref>. In all these examples, we can see that g̃_ch2[n] pushes the leakage lower in the range of smaller pulse duration while sacrificing a higher leakage error in the range of larger pulse duration. The best operating points with shortest pulse duration are indicated by green squares and purple dots. Note that in Figs. 
<ref>g-h, we observe an abnormal small lobe appearing before the common leakage error lobe. Table <ref> shows an aggregate of comparisons of best operating points following the same argument as in Section <ref>. § DETAILS ON SIMULATING HARDWARE LIMITATIONS In Eq. <ref> we present the design pipeline from g̃(τ) to ω_1(t) and finally to Φ_ext(t). In experiments, Φ_ext(t) is induced by passing an electric current through a flux line, which is connected to an antenna positioned in proximity to the target qubit and linked inductively to its SQUID loop. The current is generated at room temperature, employing either an active current source or a voltage source that applies a voltage across a series resistance. In either case, Φ_ext(t) can be modeled as a linear function of physical control parameter P(t), i.e., Φ_ext(t)=kP(t). P(t) can be the output current of the current source or the output voltage of the voltage source. We consider the sampling frequency F_s and bandwidth bw as the hardware limitations. For a designed Φ_ext(t), we have a corresponding P(t). We first sample P(t) with the sampling frequency F_s and interpolate the samples with zero holdings to simulate the function of the digital-to-analog converter (DAC). We then put the interpolated pulse into a lowpass filter with bandwidth bw and obtain P̂(t). In our simulation, we use a first-order Butterworth filter. In order to convert P̂(t) to ω̂_1(t), we numerically compute the inverse function of Eq. <ref> to obtain f_1: Φ_ext→ω_1 and feed Φ̂_ext(t)=kP̂(t) as the input. Finally we follow the same simulation procedure as discussed in Section <ref> with ω̂_1(t). Fig. <ref> shows the comparison of ω_1(t) and ω̂_1(t) before and after imposing the hardware limitation for t_d=50 ns based on g̃_sl2[n] and g̃_ch2[n] as in Section <ref>, with different F_s and bw. The difference between the two pulses in Figs. <ref>d1, <ref>e1, <ref>d2, <ref>e2 are shown in Figs. <ref>a, <ref>b, <ref>c, <ref>d, respectively. As the sampling frequency and bandwidth of the hardware enhance, the distinction between the control pulses prior to and following the AWG diminishes. apsrev4-2
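For readers who want to reproduce the AWG model qualitatively, the following is a minimal Python sketch of the procedure described in this appendix (not the authors' code): a stand-in control waveform is sampled at F_s, reconstructed with zero-order hold, and passed through a first-order Butterworth low-pass filter of bandwidth bw. The stand-in waveform (a Dolph-Chebyshev window from scipy, whose 60 dB equiripple sidelobes correspond to the 10^-3 sidelobe amplitude quoted for the Chebyshev pulse I), the 50 ns duration, and the F_s and bw values are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, lfilter
from scipy.signal.windows import chebwin

# Stand-in for a designed control waveform P(t) on a dense "continuous-time" grid:
# a Dolph-Chebyshev window with 60 dB sidelobes (amplitude 1e-3).
t_d = 50e-9                                  # pulse duration (s), illustrative
n_dense = 5000
t = np.linspace(0.0, t_d, n_dense)
P = chebwin(n_dense, at=60)

# 1) DAC model: sample at F_s and reconstruct with zero-order hold.
F_s = 1.0e9                                  # 1 GS/s sampling rate, illustrative
n_samples = max(int(round(t_d * F_s)), 2)
samples = P[np.linspace(0, n_dense - 1, n_samples).astype(int)]
idx = np.minimum((t * F_s).astype(int), n_samples - 1)
P_zoh = samples[idx]                         # staircase waveform out of the DAC

# 2) Analog front end: first-order Butterworth low-pass of bandwidth bw.
bw = 250e6                                   # 250 MHz bandwidth, illustrative
fs_dense = n_dense / t_d                     # sampling rate of the dense grid
b, a = butter(N=1, Wn=bw, btype="low", fs=fs_dense)
P_hat = lfilter(b, a, P_zoh)                 # distorted pulse after the AWG model

print("max |P - P_hat| =", np.max(np.abs(P - P_hat)))
```

Increasing F_s and bw in this sketch shrinks the difference between the input and filtered waveforms, consistent with the trend noted above.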
http://arxiv.org/abs/2407.01689v1
20240701180108
Localization beyond Dirac and Weyl fermions
[ "Adesh Singh", "Gargee Sharma" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall", "cond-mat.dis-nn", "cond-mat.quant-gas", "hep-th" ]
School of Physical Sciences, Indian Institute of Technology Mandi, Mandi 175005, India § ABSTRACT In condensed matter, limited symmetry constraints allow free fermionic excitations to exist beyond the conventional Weyl and Dirac electrons of high-energy physics. These excitations carry a higher pseudospin, providing a natural generalization of the Weyl fermion. How do electrons beyond the conventional Dirac and Weyl fermions localize under disorder? In this Letter, we solve the problem of localization of free fermionic excitations carrying an arbitrary pseudospin-s. We derive exact analytical expressions for fermionic wavefunctions, scattering time, renormalized velocity, Cooperon, and the magnetoconductivity. We discover that the gapless Cooperon mode solely depends on the pseudospin even when the Fermi surface is composed of multiple pockets, leading to weak localization (antilocalization) behavior for even (odd) s. Remarkably, we find the localization correction to scale exponentially with s, i.e., faster moving electrons are strongly susceptible to disorder effects. This opens up intriguing possibilities for Anderson localization and many-body localization in these materials. Localization beyond Dirac and Weyl fermions Gargee Sharma Last revised on July 8, 2024 Introduction: Electrons in a periodic potential can lead to free-fermionic excitations that display striking quantum mechanical properties. A foremost example is graphene <cit.>, where the additional sublattice degree of freedom provided by the honeycomb lattice maps its low-energy theory to that of a relativistic spin s=1/2 massless Dirac electron. Since the discovery of graphene, advances in material science have made it possible to realize a wide variety of fermionic excitations in systems such as topological insulators <cit.>, van der Waals heterostructures <cit.>, Weyl and Dirac semimetals <cit.>, topological superconductors <cit.>, and the much celebrated moiré heterostructures <cit.>. These can display a wide variety of fascinating electronic properties, such as mimicking the high-energy Weyl, Dirac and Majorana fermions <cit.>, hosting flat bands that can facilitate correlated physics <cit.>, exhibiting higher pseudospin values <cit.>, to name a few. The prospect of realizing these features in cold atomic lattices is a contemporary research theme <cit.>. In high-energy physics, the constraints imposed by Poincaré symmetry make it impossible to realize fermions beyond s=1/2, but in condensed matter systems the constraints are less restrictive. Bradlyn et al. <cit.> realized the possibility of finding free fermionic topological excitations in condensed matter systems that have no analogues in high-energy physics. These excitations, which are stabilized by certain symmetries, carry higher pseudospins (s>1/2), are n-fold degenerate (n>2), and carry a nontrivial Chern number |𝒞|>1 <cit.>. Furthermore, 𝐤·𝐩 theory and a corresponding low-energy 𝐤·𝐒 Hamiltonian exist for systems belonging to certain spacegroups <cit.>. Deviation from periodicity due to disorder is experimentally inevitable. Although disorder is typically not desirable, it can lead to intriguing phenomena of solely quantum origin. In the presence of strong disorder, electrons can localize, leading to an Anderson insulating phase <cit.>.
Constructive wave interference in even weakly disordered solids leads to negative quantum correction to the Drude conductivity, known as weak localization (WL) <cit.>, which is a precursor to Anderson localization. Interestingly in graphene, the pseudospin generates a Berry phase that leads to a destructive wave interference, resulting in a positive quantum correction to the conductivity <cit.>. This phenomena, known as weak antilocalization (WAL), was originally proposed to occur in a spin-orbit coupled two dimensional electron gas  <cit.>, where the rotation of the physical spin causes the phase difference. Despite intensive studies on localization of Dirac and Weyl fermions <cit.>, the fate of free fermionic excitations beyond the Dirac and Weyl cases under disorder remains a highly pertinent unsolved question. In this Letter, we solve the problem of quantum interference in fermions with arbitrary pseudospin (s) dispersing linearly with momentum (ϵ_𝐤^ss'∼ s' k), where s can be either a positive integer or half integer, and -s≤ s'≤ s, increasing in steps of unity. We derive exact analytical expressions for the fermionic wavefunctions, elastic scattering time, renormalized semiclassical velocity, Cooperon, and the magnetoconductivity. We evaluate the Cooperon gaps and demonstrate that weak antilocalization occurs for half-integer pseudospins, while weak localization occurs for integer pseudospins. Remarkably, we find that the gapless Cooperon mode resulting in (anti)localization behavior depends only on the pseudospin, even when multiple bands cross the Fermi energy (for s≥ 3/2). Therefore, if the Fermi surface consists of multiple pockets, localization corrections from all such bands is qualitatively similar. We discover weak localization (antilocalization) behavior for even (odd) pseudospin (s), irrespective of the band index s'. For flat bands, we find zero quantum correction to conductivity. Remarkably, our analysis demonstrates that the localization correction scales exponentially with s, i.e., faster moving electrons are strongly susceptible to disorder effects. This insight suggests that the likelihood of encountering phenomena like Anderson localization and many-body localization is significantly increased. Our work not only generalizes the past work done in the context of Weyl and Dirac fermions <cit.> but provides crucial insights to the behavior of disordered electrons, paving way for novel explorations in the electronic properties of advanced materials. Model and formalism: Pauli spin-1/2 matrices are generalized to the following matrices that describe fermions with pseudospin s: (S_x)_αβ =1/2(δ_α, β+1+δ_α+1, β) √((s+1)(α+β-1)-αβ) (S_y)_αβ =i /2(δ_α, β+1-δ_α+1, β) √((s+1)(α+β-1)-αβ) (S_z)_αβ =(s+1-α) δ_α, β=(s+1-β) δ_α, β where 1 ≤α≤ 2 s+1, 1 ≤β≤ 2 s+1, and the pseudospin s∈ℤ^+/2. We consider a low-energy k-space Hamiltonian of the type: H^s_𝐤 = ħϑ𝐒·𝐤, where ϑ is a parameter that has dimensions of velocity, and 𝐤=(k_x,k_y), thus restricting ourselves to only two dimensions, although three-dimensional fermions are anticipated to exhibit qualitatively similar behavior <cit.>. This Hamiltonian generalizes the massless Weyl Hamiltonian and provides the low-energy theory for pseudospin-s fermions with arbitrary pseudospin. Candidate materials for s=1 and s=3/2 are presented in Ref. <cit.>. The Hamiltonian has 2s+1 eigenvalues: ϵ_𝐤/(ħϑ) = {ks, k(s-1), k(s-2),...,-ks }. 
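As a quick consistency check on the model (our own illustrative sketch, not the authors' code), one can build the pseudospin-s matrices from the matrix elements above and verify that H^s_k = ħϑ S·k indeed has the 2s+1 linear eigenvalues ħϑ s'k; here ħ and ϑ are set to 1 and the chosen k-point is arbitrary.

```python
import numpy as np

def spin_matrices(s: float):
    """Pseudospin-s matrices built from the matrix elements quoted above (alpha, beta = 1, ..., 2s+1)."""
    d = int(round(2 * s + 1))
    Sx = np.zeros((d, d), dtype=complex)
    Sy = np.zeros((d, d), dtype=complex)
    Sz = np.zeros((d, d), dtype=complex)
    for a in range(1, d + 1):
        for b in range(1, d + 1):
            root = np.sqrt(max((s + 1) * (a + b - 1) - a * b, 0.0))
            Sx[a - 1, b - 1] = 0.5 * ((a == b + 1) + (a + 1 == b)) * root
            Sy[a - 1, b - 1] = 0.5j * ((a == b + 1) - (a + 1 == b)) * root
            Sz[a - 1, b - 1] = (s + 1 - a) * (a == b)
    return Sx, Sy, Sz

vartheta, kx, ky = 1.0, 0.6, 0.8             # hbar = 1, |k| = 1; all values illustrative
for s in (0.5, 1.0, 1.5, 2.0):
    Sx, Sy, Sz = spin_matrices(s)
    H = vartheta * (kx * Sx + ky * Sy)       # H = vartheta * S . k
    evals = np.sort(np.linalg.eigvalsh(H))
    expected = vartheta * np.hypot(kx, ky) * np.arange(-s, s + 1)
    print(f"s = {s}: spectrum matches vartheta*k*s'?", np.allclose(evals, expected))
```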
When s is an integer, we obtain a dispersionless flat band (ϵ_𝐤=0), which is absent for half-integer pseudospin (Fig. <ref>). Without any loss of generality, we assume the Fermi energy to have a finite positive value (electron doping). When s≥ 3/2, multiple bands cross the Fermi energy, and we need to consider the combined effect from all such bands. Therefore, we denote the energy dispersion of the bands by ϵ^ss'_𝐤 = ħϑ s' k, where the first label in the superscript (ss') indicates the fermion pseudospin s, and the second label indicates the particular band with dispersion ħϑ s'k. The eigenfunctions corresponding to ϵ^ss'_𝐤 take the following form |𝐤ss'⟩ = 𝒩_ss'∑_m=0^2s f_m^ss' e^-imϕ, where tanϕ=k_y/k_x, f^ss'_m are the coefficients, and 𝒩_ss' is the normalization constant. The analytical expressions for f^ss'_m are provided in <cit.>. Notably, we discover that the coefficients f^ss_m have the structure of the Pascal's triangle <cit.>. We consider δ-correlated scalar non-magnetic impurities given by the impurity potential U_0(r)=∑_i u_0𝕀_2s+1× 2s+1δ(r-R_i), where the sum is over all impurity sites and u_0 is average the impurity strength. The scattering (Born) amplitude is U^ss'_𝐤𝐤'=⟨𝐤ss'| U_0(r)|𝐤'ss'⟩, and the impurity average assumes the form ⟨ U^ss'_𝐤𝐤' U^ss'_𝐤'𝐤⟩_imp = nu_0^2ℱ^ss'(ϕ-ϕ'). The scattering time calculated via the Fermi's Golden rule is 1/τ_ss' = 2π/ħ N^s'_F 𝒢_ss' n_0u_0^2, where N_F^s'=E_F/2π(s'ħϑ)^2 is the density of states at the Fermi energy. The coefficients 𝒢_ss' and the functional form of ℱ^ss'(ϕ) are specified in <cit.>. We next evaluate the ladder diagram correction to the quasiclassical velocity. The corresponding equation is given by (Fig. <ref> (c)) 𝐯̃_𝐤^ss'=𝐯_𝐤^ss'+∑_𝐤^' G^ss'R_𝐤^' G^ss'A_𝐤^'⟨ U^ss'_𝐤𝐤^' U^ss'_𝐤^'𝐤⟩_imp𝐯̃_𝐤^'^ss', where 𝐯̃_𝐤^ss' and 𝐯_𝐤^ss' denote the impurity-dressed and bare velocity, respectively. G^ss'R_𝐤^' and G^ss'A_𝐤^' are retarded and advanced Green's functions, respectively, and are given by G^ss'R/A_𝐤 (ω)= 1/ω-ϵ^ss'_𝐤±iħ/2τ_ss' The ansatz 𝐯̃_𝐤^ss'= η^ss'𝐯_𝐤^ss' solves Eq. <ref>, and η^ss' is evaluated in <cit.>. The quantum interference correction to conductivity, obtained by summing the contribution of a bare Hiakmi box (σ_0^F) and two dressed Hikami boxes (σ_A^F and σ_A^R, (Fig. <ref> (a))), are <cit.> σ_0^F = -e^2 s'^2 ϑ^2 N^s'_F η_ss'^2τ_ss'^3/ħ^2∑_qΓ(q); σ_A^R=σ_A^F; σ_F^A =e^2 N^s'_F τ_ss'^3η_ss'^2ϑ^2s'^2 /4ħ^2 𝒢_ss'𝒜^ss'_1 ∑_qΓ(q), where Γ (𝐪) is the vertex ((Fig. <ref> (b))), and 𝒜^ss'_m are the coefficients of the bare vertex, defined in Eq. <ref>. As a sanity check, we recover the results for graphene: η^1/21/2=2, ℱ^1/21/2(ϕ) = cos^2(ϕ/2), σ_A^F/σ_0^F = -1/4 <cit.>. The Bethe-Salpeter equation for the vertex is given by Γ^ss'_𝐤_1, 𝐤_2= Γ_𝐤_1, 𝐤_2^ss'0+∑_𝐤Γ_𝐤_1, 𝐤^ss'0 G_𝐤^ss'RG_𝐪-𝐤^ss'AΓ^ss'_𝐤, 𝐤_2, where the bare vertex Γ^ss'0_𝐤_1,𝐤_2= ⟨ U^ss'_𝐤_1𝐤_2U^ss'_-𝐤_1𝐤_2⟩_imp is evaluated to take the following form : Γ^ss'0_𝐤_1,𝐤_2 = (ħ/2π N^s'_F 𝒢_ss'τ_ss') ∑_m=0^4s𝒜^ss'_m e^im(ϕ_1-ϕ_2). The evaluated coefficients 𝒜^ss'_m are specified in <cit.>. We assume the following ansatz for the dressed vertex: Γ^ss'_𝐤_1,𝐤_2 = (ħ/2π N^s'_F 𝒢_ss'τ_ss') ∑_m=0^4s∑_n=0^4s𝒱^ss'_mn e^i(mϕ_1-nϕ_2), which solves the Bethe-Salpeter equation Eq. <ref>. The coefficients of the matrix 𝒱^ss' are given by the solution of the following equation: 𝒱^ss' = (1-𝒜^ss'Φ^ss'𝒢_ss'^-1)^-1𝒜^ss', where <cit.> Φ^ss'_mn = ∫dϕ/2πe^i(n-m)ϕ/1+iτ_ss'ϑ s' q cosϕ = (1-Q^2/2) δ_mn -iQ/2(δ_m,n+1+δ_m,n-1) -Q^2/4(δ_m,n+2+δ_m,n-2), and Q=ϑτ_ss' s'q. 
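The quoted small-Q form of Φ^{ss'}_{mn} is easy to confirm numerically; the sketch below (an illustration with an assumed value of Q, not the authors' code) evaluates the defining angular average on a uniform grid and compares it with the expansion, which should agree up to corrections of order Q^3.

```python
import numpy as np

def phi_numeric(m, n, Q, n_grid=4096):
    """Phi_mn = (1/2pi) * int dphi exp(i(n-m)phi) / (1 + i Q cos(phi)), on a uniform periodic grid."""
    phi = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)
    return np.mean(np.exp(1j * (n - m) * phi) / (1.0 + 1j * Q * np.cos(phi)))

def phi_expansion(m, n, Q):
    """Quoted small-Q form: (1 - Q^2/2) d_mn - iQ/2 (d_{m,n+1} + d_{m,n-1}) - Q^2/4 (d_{m,n+2} + d_{m,n-2})."""
    d = lambda cond: 1.0 if cond else 0.0
    return ((1.0 - Q**2 / 2.0) * d(m == n)
            - 0.5j * Q * (d(m == n + 1) + d(m == n - 1))
            - 0.25 * Q**2 * (d(m == n + 2) + d(m == n - 2)))

Q = 0.05                                      # Q = vartheta * tau * s' * q, assumed small
err = max(abs(phi_numeric(m, n, Q) - phi_expansion(m, n, Q)) for m in range(5) for n in range(5))
print(f"largest deviation from the small-Q expansion: {err:.1e}  (should be O(Q^3) ~ {Q**3:.1e})")
```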
The diverging elements of 𝒱^ss' give us information about the vanishing Cooperon gaps that result in localization behavior. Conductivity: The zero-field quantum interference correction to the conductivity from the gapless Cooperon mode α for the band |𝐤ss'⟩ is evaluated to be σ_ss' = -e^2/2π hY^ss'_αln(l_ϕ/l_ss')e^iαπ, where l_ϕ is the coherence length, and Y^ss'_α = η^ss'^2 s'^2/4 X^ss'_α𝒢^ss'^2(1-𝒜_1^ss'/2𝒢^ss'), l_ss'^-2 = 2/ϑ^2 τ_ss'^2, 𝒳_α^ss'= 2/𝒱_αα^ss' Q^2 Remarkably, we discover that the gapless Cooperon mode α is independent of the band index s' and only depends on the pseudospin s <cit.>. Specifically, we find α = 2s. Therefore, if multiple bands (|𝐤ss'⟩ and |𝐤ss”⟩) intersect the Fermi energy, localization corrections from all of them will be qualitatively similar. We discover that for odd (even) pseudospin, e^iαπ=-1 (e^iαπ=+1), resulting in weak antilocalization (localization) behavior. The exponential factor e^iαπ can also be identified with the Berry phase of the pseudospin, which lies at the core of localization-antilocalization behavior. Interestingly, it is the Berry phase of the pseudospin enters in the equation (Eq. <ref>) and not the Berry phase of the particular band, but since they are the identical (e^2π is=e^2π is' for a given pseudospin s, if s'≠ 0) in this model, it does not lead to any difference. Note that even though the Berry phase contribution is independent of s', Y_α^ss' depends on s', and thus the conductivity corrections for |𝐤ss'⟩ and |𝐤ss”⟩ are quantitatively different. We also predict that for flat bands (s'=0), quantum corrections vanish. With application of a magnetic field, the phase coherence is lost and the quantum correction is suppressed. This enables the experimental observation of weak localization and weak antilocalization corrections through magnetoconductivity measurements. This can be derived by quantizing the wavevector q^2→ (n+1/2)(4eB/ħ^2). In the weak-field limit, the magnetoconductivity (Δσ (B)_ss' = σ(B)_ss' - σ_ss') is given by Δσ(B)_ss' = e^2/π h Y^ss'_α[ Ψ(l_B^2/l_ϕ^2 +1/2)- ln(l_B^2/l_ϕ^2)]e^iαπ, where Ψ(x) is the digamma function. Notably, the zero-field conductivity correction (Eq. <ref>) and the magnetoconductivity crucially depend on the same prefactor Y_α^ss' that governs the magnitude of the correction. Eq. <ref>-<ref> are the main results of this paper that generalize all the existing results for the Dirac/Weyl fermion to arbitrary pseudospin-s. In Fig. <ref> we plot the magnetoconductivity for both odd and even pseudospin-s fermions limiting ourselves to s≤ 3, including all 0<s'≤ s. Both the WAL correction (for odd s) and WL correction (for even s) scale exponentially with increase in s'. On the other hand, magnetoconductivity for same s' but different s have comparable orders of magnitude (for example {s,s'} ={1/2,1/2}, {3/2,1/2} and {5/2,1/2} have a similar order of magnitude). Therefore the magnitude of the localization correction is strongly dependent on s' and not s, but since larger values of s' are only possible for larger values of s, higher pseudospins do lead to stronger localization correction. It can be argued then that for large s, perturbation theory may break down at comparatively lesser magnetic fields. Nevertheless, quantum effects will still lead to strong localization. The Drude conductivity calculated for pseudospin-s, yields a rather simple expression: σ_0^ss' = e^2/h(φ^2 s'^2/𝒢_ss' n_0u_0^2), which scales approximately with the second power of s'. 
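The weak-field magnetoconductivity above is straightforward to evaluate once Y^{ss'}_α is known; the sketch below (our illustration, not the authors' code) sets Y = 1 and uses an assumed coherence length, so only the line shape and the sign e^{iαπ} with α = 2s are meaningful.

```python
import numpy as np
from scipy.special import digamma

hbar, e = 1.054571817e-34, 1.602176634e-19    # SI units

def delta_sigma(B, s, l_phi, Y=1.0):
    """Weak-field correction in units of e^2/(pi h): Y * [psi(l_B^2/l_phi^2 + 1/2) - ln(l_B^2/l_phi^2)] * e^{i*pi*2s}."""
    l_B_sq = hbar / (4.0 * e * B)             # l_B^2, with l_B the Cooperon magnetic length
    x = l_B_sq / l_phi**2
    sign = np.real(np.exp(1j * np.pi * 2 * s))  # +1 for integer s (WL), -1 for half-integer s (WAL)
    return Y * (digamma(x + 0.5) - np.log(x)) * sign

l_phi = 1e-6                                   # 1 micron coherence length, assumed
for s in (0.5, 1.0, 1.5, 2.0):
    vals = [delta_sigma(B, s, l_phi) for B in (0.01, 0.05, 0.1)]
    print(f"s = {s}: dsigma/(e^2/pi h) at B = 0.01, 0.05, 0.1 T ->", [f"{v:+.2f}" for v in vals])
```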
We further test our theory by comparing the relative increase of the Drude conductivity and the quantum interference correction. In Fig. <ref> we plot the relative increase in magnetoconductivity |Δσ_ss|/|Δσ_1/21/2| and the relative increase in the Drude conductivity σ_0^ss/σ_0^1/21/2. While σ_0∼ s^2, Δσ_ss scales up much more drastically. Interactions: The interaction parameter r_s represents the ratio of the average inter-electron Coulomb interaction energy to the Fermi energy. The average Coulomb energy is ⟨ V ⟩∼e^2/⟨ r ⟩, where ⟨ r ⟩ = n^-1/2∼ s'/k_F is the average inter-particle separation. Therefore, r_s∼ s'^-1 indicating that electron-electron interactions are less dominant for higher pseudospins. However, as we discussed, strong localization induced by even weak or moderate disorder may interplay with interactions and lead to more surprising and exotic possibilities such as many body localization that may be explored in upcoming studies. Detailed study of electron-electron interactions for pseudospin-s fermions is reserved for future works. Summary and Outlook: Advances in material science have enabled the realization of a manifold of emergent electronic excitations, from massless Dirac and Weyl excitations to flat-bands in moiré materials. Combined with theoretical predictions of realizing materials that host higher pseudospin fermions in solids (at least up to s=2 <cit.>), these developments open up exciting possibilities for studying quantum transport such materials. We solved the fundamental problem of disorder induced quantum interference corrections leading to electron (anti)localization in fermionic excitations that carry an arbitrary pseudospin s. Deriving exact analytical expressions for the relevant quantities allows us to reveal that the gapless Cooperon modes depends exclusively on the pseudospin, resulting in in weak localization (antilocalization) behavior for even (odd) s. An astounding finding of our work is that the localization correction scales exponentially with s. We generalize existing works on localization effects in Weyl and Dirac fermions, and provide crucial insights that push forward our fundamental understanding of how disorder and interactions may interplay in these materials. § SUPPLEMENTAL MATERIAL TO `LOCALIZATION BEYOND DIRAC AND WEYL FERMIONS' § MODEL §.§ Pseudospin-s fermions Pauli spin-1/2 matrices are generalized to the following matrices that describe fermions with pseudospin s: (S_x)_αβ =1/2(δ_α, β+1+δ_α+1, β) √((s+1)(α+β-1)-αβ) (S_y)_αβ =i /2(δ_α, β+1-δ_α+1, β) √((s+1)(α+β-1)-αβ) (S_z)_αβ =(s+1-α) δ_α, β=(s+1-β) δ_α, β where 1 ≤α≤ 2 s+1, 1 ≤β≤ 2 s+1, and the pseudospin s∈ℤ^+/2. We consider a low-energy k-space Hamiltonian of the type: H^s_𝐤 = ħϑ𝐒·𝐤, where ϑ is a parameter that has dimensions of velocity. The Hamiltonian has d≡ 2s+1 eigenvalues: ϵ_𝐤/(ħϑ) = {ks, k(s-1), k(s-2),...,-ks }. When s is an integer, we obtain a dispersionless flat band (ϵ_𝐤=0), which is absent for half-integer pseudospin. Without any loss of generality, we assume the Fermi energy to have a finite positive value (electron doping). When s≥ 3/2, multiple bands cross the Fermi energy, and we need to consider the combined effect from all those bands. We denote the energy dispersion of the bands by ϵ^(ss')_𝐤 = +ħϑ s' k, where the first label in the superscript (ss') indicates the fermion pseudospin s and the second label indicates the band with dispersion ħϑ s'k. 
§.§.§ Generalized eigenfunctions The eigenfunctions corresponding to ϵ^ss'_𝐤 take the following form |𝐤ss'⟩ = 𝒩_ss'∑_m=0^2s f_m^ss' e^-imϕ, where tanϕ=k_y/k_x, f^ss'_m are the coefficients, and 𝒩_ss' is the normalization constant. In later sections, we provide the analytical form of f_m^ss' for a few cases. §.§.§ Impurity potential We consider δ-correlated scalar non-magnetic impurities given by the impurity potential U_0(r)=∑_i u_0𝕀_2s+1× 2s+1δ(r-R_i), where the sum is over all impurity sites and u_0 is the impurity strength, assumed to be the same at each site. The scattering (Born) amplitude is U^ss'_𝐤𝐤'=⟨𝐤ss'| U_0(r)|𝐤'ss'⟩, and the impurity assumes the form ⟨ U^ss'_𝐤𝐤' U^ss'_𝐤'𝐤⟩_imp = nu_0^2ℱ^ss'(ϕ-ϕ'), where the expression for ℱ^ss'(ϕ) will be provided later. Since the energy dispersion ϵ_𝐤^ss' depends only on s', the density of states also depends only on s' and is independent of s: N^ss'(E) = 1/4π^2∫_0^∞k dk∫_0^2π dϕδ(ϵ^ss'_𝐤-E) =1/2π∫_0^∞dk k δ(ϵ^ss'_𝐤-E) =E/2π(s'ħϑ)^2≡ N^s'(E). The scattering time calculated via the Fermi's Golden rule is 1/τ_ss' =2π/ħ∑_k'⟨ U^ss'_k,k'U^ss'_k',k⟩_impδ(E_F-ϵ_k') =2π/ħN^s'_F ∫_0^2πdϕ'/2π⟨ U^ss'_𝐤𝐤' U^ss'_𝐤'𝐤⟩_imp =2π/ħ N^s'_F 𝒢_ss' n_0u_0^2, where N_F^s'=E_F/2π(s'ħϑ)^2 is the density of states at the Fermi energy. The coefficient 𝒢_ss', which is obtained by the angular integration of ⟨ U^ss'_𝐤𝐤' U^ss'_𝐤'𝐤⟩_imp will be specified later. §.§.§ Velocity correction Next, we evaluate the ladder diagram correction to the velocity. The corresponding equation is given by 𝐯̃_𝐤^ss'=𝐯_𝐤^ss'+∑_𝐤^' G^ss'R_𝐤^' G^ss'A_𝐤^'⟨ U^ss'_𝐤𝐤^' U^ss'_𝐤^'𝐤⟩_imp𝐯̃_𝐤^'^ss', Here G^ss'R_𝐤^' and G^ss'A_𝐤^' are retarded and advanced Green's functions respectively, given by G^ss'R/A_𝐤 (ω)= 1/ω-ϵ^ss'_𝐤±iħ/2τ_ss' The ansatz 𝐯̃_𝐤^ss'= η^ss'𝐯_𝐤^ss' is substituted in Eq. <ref> to obtain the following solution for η^ss': η_ss' = 𝒢_ss'/𝒢_ss'-ℋ_ss', where 𝒢_ss'=∫_0^2πdϕ'/2π⟨ U^ss'_𝐤𝐤' U^ss'_𝐤'𝐤⟩_imp, and ℋ_ss'cosϕ=∫_0^2πdϕ'/2πcosϕ'⟨ U^ss'_𝐤𝐤' U^ss'_𝐤'𝐤⟩_imp. §.§.§ Conductivity The quantum interference correction to conductivity is obtained by the calculation of a bare Hiakmi box and two dressed Hikami boxes. The bare Hikami box at zero temperature is calculated as σ_0^F=e^2ħ/2π∑_qΓ(q)∑_k v_k^ss'x v^ss'x_q-kG_𝐤^ss'R G_𝐤^ss'AG_𝐪-𝐤^ss'R G_𝐪-𝐤^ss'A, In the small 𝐪 limit, we find σ_0^F = -e^2 s'^2 ϑ^2 N^s'_F η_ss'^2τ_ss'^3/ħ^2∑_qΓ(q) Here Γ(q) is the vertex function which depends on 𝐪 (incoming momentum) and must not be confused with the Gamma function Γ(d). Two dressed Hikami boxes denoted as σ_R^F and σ_A^F σ_R^F= e^2ħ/2π∑_qΓ(q)∑_k∑_k_1 v_k^ssx v^ssx_q-k_1G_𝐤^ssRG_𝐤_1^ssRG_𝐪-𝐤^ssR G_𝐪-𝐤_1^ssRG_𝐤^ssA G_𝐪-𝐤_1^ssA⟨ U^ss_𝐤_1,kU^ss_𝐪-𝐤_1,𝐪-𝐤⟩_imp, σ_A^F= e^2ħ/2π∑_qΓ(q)∑_k∑_k_1 v_k^ssx v^ssx_q-k_1G_𝐤^ssRG_𝐪-𝐤_1^ssRG_𝐤^ssAG_𝐤_1^ssAG_𝐪-𝐤^ssA G_𝐪-𝐤_1^ssA⟨ U^ss_k,𝐤_1U^ss_𝐪-𝐤,𝐪-𝐤_1⟩_imp, We evaluate σ_A^F=σ_R^F=e^2 N^s'_F τ_ss'^3η_ss'^2ϑ^2s'^2 /4ħ^2 𝒢_ss'𝒜^ss'_1 ∑_qΓ(q) The ratio of dressed to bare Hikami box is given by σ_A^F/σ_0^F = -𝒜_1^ss'/4𝒢_ss'. 
The total conductivity is given by the sum of the bare and two dressed Hikami boxes: σ^F = -e^2 N^s'_F τ_ss'^3η_ss^2ϑ^2s'^2 /4ħ^2 𝒢_ss'(1-(𝒜^ss'_1 /2𝒢_ss')) ∑_qΓ(q) §.§.§ Bethe-Salpeter equation The Bethe-Salpeter equation for the vertex is given by Γ^ss'_𝐤_1, 𝐤_2= Γ_𝐤_1, 𝐤_2^ss'0+∑_𝐤Γ_𝐤_1, 𝐤^ss'0 G_𝐤^ss'RG_𝐪-𝐤^ss'AΓ^ss'_𝐤, 𝐤_2, where the bare vertex Γ^ss'0_𝐤_1,𝐤_2= ⟨ U^ss'_𝐤_1𝐤_2U^ss'_-𝐤_1𝐤_2⟩_imp takes the following form Γ^ss'0_𝐤_1,𝐤_2 = (ħ/2π N^s'_F 𝒢_ss'τ_ss') ∑_m=0^4s𝒜^ss'_m e^im(ϕ_1-ϕ_2), We assume the following ansatz for the vertex: Γ^ss'_𝐤_1,𝐤_2 = (ħ/2π N^s'_F 𝒢_ss'τ_ss') ∑_m=0^4s∑_n=0^4s𝒱^ss'_mn e^i(mϕ_1-nϕ_2), which solves the Bethe-Salpeter equation. The coefficients of the matrix 𝒱^ss' are given by the solution of the following equation: 𝒱^ss' = (1-𝒜^ss'Φ^ss'𝒢_ss'^-1)^-1𝒜^ss', where Φ^ss'_mn = ∫dϕ/2πe^i(n-m)ϕ/1+iτ_ss'ϑ s' q cosϕ = (1-Q^2/2) δ_mn -iQ/2(δ_m,n+1+δ_m,n-1) -Q^2/4(δ_m,n+2+δ_m,n-2), and Q=ϑτ_ss' s'q. It is possible to express the diagonal elements of the matrix 𝒱^ss' as 𝒱^ss'_ii = 𝒰^ss'_ii/𝒲^ss'_ii, where 𝒰^ss'_ii = 𝒢_ss' 𝒲^ss'_ii =(-1+𝒢_ss'/𝒜^ss'_i)+( 2∑_j 𝒜^ss'_j/𝒢_ss' +∑_j<kα^(2)_ss'jk𝒜^ss'_j/𝒢_ss'𝒜^ss'_k/𝒢_ss' ∑_j<k<lα^(3)_ss'jkl𝒜^ss'_j/𝒢_ss'𝒜^ss'_k/𝒢_ss'𝒜^ss'_l/𝒢_ss' + ∑_j<k<l<mα^(4)_ss'jklm𝒜^ss'_j/𝒢_ss'𝒜^ss'_k/𝒢_ss'𝒜^ss'_l/𝒢_ss'𝒜^ss'_m/𝒢_ss'+... + β_ss'∏_j𝒜^ss'_j/𝒢_ss')Q^2/D_ss', where D_ss' = ∏_j(-1+𝒢_ss'/𝒜^ss'_i), and the coefficients α and β can be determined for specific cases. It is of interest to find the Cooperon gaps (g^ss'_α≡ 2(-1+𝒢_ss'/𝒜^ss'_η)). Vanishing Cooperon gaps result in diverging elements 𝒱^ss'_αα in the limit q→ 0. §.§ The case s=s' Our focus here is the topmost conduction band with energy dispersion ϵ_𝐤 = ħϑ s k. In this case, it is possible to analytically find out the various coefficients introduced earlier. f^ss_m=(Γ(2s+1)/Γ(m+1)Γ(2s+1-m))^1/2 𝒩_ss= 1/√(2^2s) ℱ^ss(ϕ)= cos^4s(ϕ/2) 𝒢_ss= Γ(2s+1/2)/√(π)Γ(2s+1) ℋ_ss= Γ(2s+1/2)/√(π)(Γ(2s)+2sΓ(2s)) η^ss = 2s+1 𝒜^ss_0≤ m≤ 2s= Γ(2s+1/2)/√(π)(∏_k=0^k=2s-m-14s-m-k) m! 𝒜^ss_2s≤ m≤ 4s =𝒜^ss_4s-m, where Γ(x) is the Gamma-function. Remarkably, the wavefunction coefficients f_m^ss have the mathematical structure of square-root of the Pascal's triangle as shown in Fig. <ref>. Furthermore, the following condition guarantees that the Cooperon gap vanishes: g_α^ss=𝒢_ss/𝒜^ss_α=1. The above condition is satisfied for α=2s. Therefore in the limit q→ 0, the vertex correction is dominated by the following term: Γ^ss'_𝐤_1𝐤_2∼1/q^2e^2is(ϕ_1-ϕ_2). When ϕ_1-ϕ_2≈π, the vertex carries a positive (negative) sign for integer (half-integer) values of s. This implies weak-localization for integer s and weak-antilocalization for half-integer s. Furthermore, we recover the known results for graphene: η^1/21/2=2 ℱ^1/21/2(ϕ) = cos^2(ϕ/2) σ_A^F/σ_0^F = -𝒜_1^1/21/2/4𝒢_1/21/2 = -1/4. §.§ The case s'≠ s When s'≠ s, finding generalized analytical expressions is a cumbersome task. We explicitly evaluate the coefficients for the first few cases (s≤ 7/2). Table <ref>, <ref>, <ref>, <ref> and present the values of the coefficients 𝒢^ss' and ℋ^ss', respectively. The velocity correction coefficients is presented in Table <ref> and  <ref>. Interestingly, we find that the flat bands in the even s case have η^ss'=0, implying zero velocity correction. The bare Cooperon coefficients A^ss' are presented in Table <ref> and Table <ref> for the odd s and even s cases, respectively. We find that the value α for which the Cooperon gap g^ss'_α vanishes is independent of s'. 
Therefore, when multiple bands cross the Fermi energy (for s≥ 3/2), each band results in the same qualitative behavior, i.e., localization for even s and antilocalization for odd s. §.§ Conductivity The zero-field conductivity from |𝐤ss'⟩ is finally evaluated to be σ_ss' = -∑_αe^2/π h Y^ss'_α∫d(q^2) 1/l_ss'α^-2+q^2, where Y^ss'_α = η^ss'^2 s'^2/4 X^ss'_α𝒢^ss'^2(1-𝒜_1^ss'/2𝒢^ss'), l_ss'α^-2 = g^ss'_α/2X^ss'_αl_ss'^2, l_ss'^2- = 2/ϑ^2 τ_ss'^2 and the vertex Γ_𝐪^ss' is expressed as Γ_𝐪^ss' = ħ/2π N_F^s'τ_ss'∑_α2/g^ss'_α^2 + X^ss'_α Q^2 e^iαπ The vanishing Cooperon gaps will result in the most dominant contribution to the conductivity. The values of α such that g_α^ss'=0, and the corresponding X^ss'_α are presented in Table <ref>. In the weak-field limit, the magnetoconductivity is given by Δσ(B)_ss' = e^2/π h∑_α Y^ss'_α[ Ψ(l_B^2/l_ϕ^2 + l_B^2/l_ss'α^2+1/2)- log(l_B^2/l_ϕ^2 + l_B^2/l_ss'α^2)], where Ψ(x) is the digamma function, l_ϕ is the coherence length, and l_B = √(ħ/4eB) is the magnetic length of a Cooperon.
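As a closing numerical check of the closed forms quoted in this supplement (our own sketch, not the authors' code), the snippet below confirms that f^{ss}_m is the square root of a binomial coefficient — the Pascal's-triangle structure noted above — and that the closed form for 𝒢_ss coincides with the angular average of ℱ^{ss}(ϕ) = cos^{4s}(ϕ/2).

```python
import numpy as np
from scipy.special import gamma, comb
from scipy.integrate import quad

for s in (0.5, 1.0, 1.5, 2.0):
    # f^{ss}_m = sqrt(Gamma(2s+1) / (Gamma(m+1) Gamma(2s+1-m))) = sqrt(C(2s, m)): square roots of a Pascal row.
    f = [np.sqrt(gamma(2 * s + 1) / (gamma(m + 1) * gamma(2 * s + 1 - m))) for m in range(int(2 * s) + 1)]
    pascal_row = [np.sqrt(comb(int(2 * s), m)) for m in range(int(2 * s) + 1)]
    assert np.allclose(f, pascal_row)

    # G_ss = Gamma(2s+1/2) / (sqrt(pi) Gamma(2s+1)) should equal the angular average of cos^{4s}(phi/2).
    G_closed = gamma(2 * s + 0.5) / (np.sqrt(np.pi) * gamma(2 * s + 1))
    G_avg, _ = quad(lambda phi: np.cos(phi / 2) ** int(round(4 * s)) / (2 * np.pi), 0.0, 2.0 * np.pi)
    print(f"s = {s}: Pascal structure OK; G_ss closed form = {G_closed:.6f}, angular average = {G_avg:.6f}")
```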
http://arxiv.org/abs/2407.02606v1
20240702184605
An AI-Based System Utilizing IoT-Enabled Ambient Sensors and LLMs for Complex Activity Tracking
[ "Yuan Sun", "Jorge Ortiz" ]
cs.HC
[ "cs.HC" ]
WINLAB, Rutgers University Piscataway, NJ, USA ys820@soe.rutgers.edu WINLAB, Rutgers University Piscataway, NJ, USA jorge.ortiz@rutgers.edu § ABSTRACT Complex activity recognition plays an important role in elderly care assistance. However, the reasoning ability of edge devices is constrained by the classic machine learning model capacity. In this paper, we present a non-invasive ambient sensing system that can detect multiple activities and apply large language models (LLMs) to reason the activity sequences. This method effectively combines edge devices and LLMs to help elderly people in their daily activities, such as reminding them to take pills or handling emergencies like falls. The LLM-based edge device can also serve as an interface to interact with elderly people, especially with memory issue, assisting them in their daily lives. By deploying such a system, we believe that the smart sensing system can improve the quality of life for older people and provide more efficient protection. An AI-Based System Utilizing IoT-Enabled Ambient Sensors and LLMs for Complex Activity Tracking Jorge Ortiz July 8, 2024 =================================================================================================== § INTRODUCTION Non-intrusive sensors are crucial for modern sensing applications, particularly in fields requiring continuous and unobtrusive monitoring, such as elderly care<cit.>. These sensors, which do not rely on cameras, offer significant advantages, including enhanced comfort and privacy for users. By seamlessly integrating into the environment without requiring direct interaction or visible placement, they minimize aesthetic and social intrusiveness<cit.>. Additionally, non-intrusive sensors reduce deployment and maintenance efforts, leveraging existing infrastructure and eliminating the need for frequent battery replacements<cit.>. This ease of installation and low maintenance makes them practical for long-term use. Furthermore, they provide flexibility and scalability in sensing applications, capturing a broad range of environmental data indirectly, thus enabling comprehensive monitoring across diverse contexts. In this work, we build a non-intrusive smart sensing system that relies on the reasoning ability of large language models (LLMs) to assist in elderly care. The system can detect complex activities composed of more than two atomic activities. An atomic activity refers to unit-level activities that can be captured by sensors within a short time window and cannot be broken down further. Besides detecting normal atomic activities, we also use LLMs for high-level explanations and reasoning. To adapt to the memory constraints of our edge devices, we employ a local inference model on the IoT devices before sending the data to the LLM. This approach reduces the transmission burden via the wireless network, ensuring efficient and real-time processing. § LLM AND COMPLEX REASONING Our system first collects non-intrusive sensor data from ambient sensors. These sensors detect atomic-level activities. Due to the limited memory capacity of the sensors, a small model runs locally to perform initial inferences. Deploying a small model locally allows for real-time processing, reducing latency and ensuring immediate response to critical activities. Additionally, local processing minimizes the amount of data transmitted to the cloud, enhancing privacy and reducing bandwidth usage. Once an atomic activity is detected, it is sent to a cloud server where an LLM is deployed. 
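The edge-to-cloud split described above can be summarized in a short sketch: a lightweight local classifier labels fixed-length sensor windows, and the resulting label sequence is periodically forwarded to a cloud LLM for reasoning. This is our illustration only; the window length, sequence handling, and the classify_window / query_llm helpers are hypothetical placeholders rather than the paper's implementation.

```python
from collections import deque
from typing import Callable, Iterable, List

def run_pipeline(sensor_stream: Iterable, classify_window: Callable, query_llm: Callable,
                 window_size: int = 90, max_sequence: int = 5) -> None:
    """Edge loop: classify short sensor windows locally, forward the atomic-activity sequence to a cloud LLM."""
    window = deque(maxlen=window_size)          # ~1 s of samples at the 90 Hz rate reported later
    activity_sequence: List[str] = []

    for sample in sensor_stream:                # sample = one multi-channel sensor reading
        window.append(sample)
        if len(window) < window_size:
            continue
        label = classify_window(list(window))   # small on-device model; hypothetical helper
        window.clear()
        if label is None:                       # no atomic activity detected in this window
            continue
        activity_sequence.append(label)
        if len(activity_sequence) >= max_sequence:
            prompt = ("Given the detected activity sequence " + " -> ".join(activity_sequence)
                      + ", suggest a corrected order and any reminder for an elderly user.")
            print("LLM advice:", query_llm(prompt))   # cloud LLM call; hypothetical helper
            activity_sequence.clear()
```

In practice, classify_window would wrap the local encoder and query_llm the cloud model's API; both are deliberately left abstract here.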
The LLM applies its reasoning and interaction capabilities with the user. For instance, if the sequence of activities indicates eating and drinking without sanitizing, the system can remind the user to wash their hands. Another example is if the user drinks water but forgets to take their pills; the LLM will interact with the user to remind them to take their medication. Additionally, if a user is detected as getting dressed but skipping a critical item like a coat on a cold day, the LLM can prompt them to wear it, ensuring their well-being. Furthermore, in the event of a medical emergency, the system can recognize distress signals or calls for help, and promptly alert emergency services, providing vital assistance when needed. § SENSOR BOARD SETUP Our goal is to construct a device that enhances the generalizability of sensor data explanation while safeguarding user privacy. The device (figure  <ref>) , installed in the environment, utilizes non-invasive sensors to detect human activities. These sensors include PIR (motion), IMU (accelerometers), audio, RGB, pressure, humidity, magnetometer, gas, and temperature sensors. They are widely available and well-suited for generalizing smart space sensing capabilities. Smart space sensors produce two types of outputs: binary and continuous. Binary sensors, like PIR motion sensors, provide "on" and "off" states. Although useful for indicating environmental status, they may be inadequate for analyzing complex activities  <cit.>. Our device uses PIR sensors for motion detection, transmitting "ON" and "OFF" signals. In contrast, accelerometers offer continuous outputs, tracking movement along the X, Y, and Z axes. We utilize them to detect environmental vibrations, such as different vibration frequencies for pouring water versus running <cit.>. The RGB sensor detects colors within a specific range, identifying the presence of colors based on the activated channels. Pressure sensors measure barometric pressure, providing insights into weather patterns and indoor air quality, especially in HVAC operations. Magnetometers detect magnetic fields, but readings can be influenced by nearby electronic devices and human activity due to RF electromagnetic fields <cit.>. Gas sensors detect substances like ethanol, alcohol, and carbon monoxide, useful for monitoring air quality changes due to activities such as heavy running. The Raspberry Pi Model B+ (Figure <ref>) serves as the central processing unit, interfacing with a custom sensor board via GPIO pins. The sensor board, equipped with various ambient sensors, collects data such as temperature, humidity, and motion. The Raspberry Pi, powered by a Broadcom BCM2837B0 quad-core ARM Cortex-A53 64-bit SoC at 1.4GHz with 1GB LPDDR2 SDRAM, runs a Python script to read sensor values at regular intervals, process the data, and store it for further analysis. This setup, featuring dual-band 802.11ac wireless LAN, Bluetooth 4.2, Gigabit Ethernet, and multiple USB ports, enables real-time monitoring and data collection essential for elderly care assistance and activity tracking. Although our current sensor setup is insufficient to run a model for data embedding locally, we plan to implement local atomic activity detection in the next stage of this work. § INITIAL EXPERIMENT In our initial experiment, we will use the ambient sensor to collect atomic activities and test the LLM model on the cloud. The target is to determine if it possesses the reasoning capabilities that can assist in the daily lives of elderly individuals. 
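The data collection below relies on the kind of polling script described in the sensor-board setup above; a minimal sketch of such a loop is given here. The read_sensor_board helper, the channel list, and the CSV logging are hypothetical stand-ins — the paper does not specify its driver code — while the 90 Hz rate matches the sampling frequency reported in the next subsection.

```python
import csv
import time

SAMPLE_RATE_HZ = 90      # per-channel sampling frequency reported in the Data Collection section
CHANNELS = ["pir", "accel_x", "accel_y", "accel_z", "audio", "rgb",
            "pressure", "humidity", "magnetometer", "gas", "temperature"]

def read_sensor_board():
    """Hypothetical stand-in for the GPIO/I2C reads of the custom sensor board."""
    return {ch: 0.0 for ch in CHANNELS}

def log_samples(path: str = "sensor_log.csv", duration_s: float = 1.0) -> None:
    period = 1.0 / SAMPLE_RATE_HZ
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["t"] + CHANNELS)
        writer.writeheader()
        t_end = time.time() + duration_s
        while time.time() < t_end:
            t0 = time.time()
            writer.writerow({"t": t0, **read_sensor_board()})
            time.sleep(max(0.0, period - (time.time() - t0)))

log_samples()
```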
§.§ Data Collection The atomic level activities we collect include 20 activities: eat, paperdis, write, chop, hand wash, pour water, clean floor, knock, run, curtain, light switch, type, door pass, wipe desk, chat, basketball, saw, shave, wash dish, and brusing teeth. The sampling frequency is 90Hz for each sensor channel. There are discussions about whether the high-fidelity audio sensor channel <cit.> is the main channel for detecting patterns. Audio data can also be intrusive <cit.>. To avoid privacy issues associated with the audio channel, its frequency is decreased from 16kHz to 90kHz. According to Shannon’s sampling theorem, information contained in frequencies greater than half the sampling rate cannot be recovered <cit.>. Figure <ref> visualizes the sensor data collected during eating activities. Figure <ref> visualizes the data collected during chopping activities. §.§ Initial Results We developed a data encoder to process the sensor data before sending it to the LLM. Our model primarily uses a channel-wise MLP to extract features from each channel, followed by a sensor fusion module also based on MLP. Additionally, we incorporated a fast Fourier convolution module <cit.> to improve the accuracy rate. Table <ref> presents the initial results of F1 scores, precision, and recall for detecting various activities using the model. Most activities, such as 'eat', 'write', 'chop', 'hand wash', 'clean floor', 'knock', 'run', 'curtain', 'light switch', 'type', 'door pass', 'wipe desk', 'basketball', 'saw', 'shave', 'wash dish', and 'teeth', achieve perfect scores of 1.00 across all metrics. However, 'paperdis', 'pour water', and 'chat' show lower performance with F1 scores of 0.43, 0.48, and 0.32, respectively. These variations indicate that while the model performs exceptionally well for most activities, there is room for improvement in detecting certain activities. We also demonstrate the robustness of our model when the noise level <ref> and frequency<ref> of the data vary. §.§ Applying LLMs to Sequence Detection and Complex Activity Recognition After detecting atomic activities, we send the sequence of detected activities to the LLM. For example, to remind elderly people to take medication before eating, we send a sequence of activities such as "brushing teeth → hand wash → pour water → eat". The corrected sequence from the LLM is "brushing teeth → hand wash → take medication → pour water → eat", recognizing the complex activity as "forgetting medication". If the original order detected is "eat → basketball → brush teeth", the LLM will suggest "brush teeth → wash hands → eat → basketball" to ensure proper hygiene before eating, recognizing the complex activity as "unhygienic behavior". We use GPT-4 to implement sequence verification and reminders. For instance, if the input sequence is "door pass → using paper dispenser", the LLM suggests "door pass → turn the switch → paper dispenser" to remind the user to turn on the light, identifying the complex activity as "preventing slipping". The order rules can be configured in the prompt to meet detailed requirements. Our current results demonstrate an implementable framework that can help elderly people, especially those with memory issues, to live a high-quality life. § CONCLUSION AND FUTURE WORK In this work, we designed a framework leveraging IoT and LLMs to assist elderly people and enhance their quality of life. 
We employed a board with non-intrusive sensors to avoid discomfort for the elderly while monitoring their behavior to provide assistance. We implemented a low-frequency sensor sampling strategy, particularly for the microphone channel, using an unrecoverable sampling rate that cannot reproduce the original sound information. A smart IoT box was built to collect initial data for experimentation. Our initial results demonstrate that our model can successfully identify daily activities from the sensor data. We further utilized the GPT-4 interface to test the reasoning and complex activity detection capabilities, showcasing a promising framework that can assist elderly individuals. However, the current smart sensor setup uses an older version of the Raspberry Pi. We plan to use a higher version of the Raspberry Pi to enable real-time inference on the edge device. Additionally, we will conduct user studies with more participants to interact with our system, helping us evaluate the design and implementation of the system.
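The encoder used in the Initial Results section is described only at a high level — a channel-wise MLP per sensor channel, an MLP-based fusion module, and a fast Fourier convolution branch. The sketch below covers just the first two pieces with assumed layer sizes (and omits the FFC branch), so it should be read as a rough approximation rather than the authors' architecture.

```python
import torch
import torch.nn as nn

class ChannelWiseEncoder(nn.Module):
    """Per-channel MLP feature extractor followed by an MLP fusion head (all sizes are assumptions)."""
    def __init__(self, n_channels: int, window_len: int, feat_dim: int = 64, n_classes: int = 20):
        super().__init__()
        # One small MLP per sensor channel, applied to that channel's window of samples.
        self.channel_mlps = nn.ModuleList([
            nn.Sequential(nn.Linear(window_len, 128), nn.ReLU(), nn.Linear(128, feat_dim))
            for _ in range(n_channels)
        ])
        # Sensor-fusion MLP over the concatenated per-channel features.
        self.fusion = nn.Sequential(
            nn.Linear(n_channels * feat_dim, 256), nn.ReLU(), nn.Linear(256, n_classes)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_channels, window_len)
        feats = [mlp(x[:, c, :]) for c, mlp in enumerate(self.channel_mlps)]
        return self.fusion(torch.cat(feats, dim=-1))

model = ChannelWiseEncoder(n_channels=11, window_len=90, n_classes=20)   # 20 atomic activities
logits = model(torch.randn(4, 11, 90))                                   # a batch of 1-second windows at 90 Hz
print(logits.shape)                                                      # torch.Size([4, 20])
```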
http://arxiv.org/abs/2407.01929v1
20240702034555
What We Talk About When We Talk About LMs: Implicit Paradigm Shifts and the Ship of Language Models
[ "Shengqi Zhu", "Jeffrey M. Rzeszotarski" ]
cs.CL
[ "cs.CL", "cs.AI" ]
§ ABSTRACT The term Language Models (LMs), as a time-specific collection of models of interest, is constantly reinvented, with its referents updated much like the Ship of Theseus replaces its parts but remains the same ship in essence. In this paper, we investigate this Ship of Language Models problem, wherein scientific evolution takes the form of continuous, implicit retrofits of key existing terms. We seek to initiate a novel perspective on scientific progress, in addition to the more well-studied emergence of new terms. To this end, we construct the data infrastructure based on recent NLP publications. Then, we perform a series of text-based analyses toward a detailed, quantitative understanding of the use of Language Models as a term of art. Our work highlights how systems and theories influence each other in scientific discourse, and we call for attention to the transformation of this Ship that we all are contributing to. § INTRODUCTION Scientific publications expand exponentially, with the size of the literature doubling every ∼17 years <cit.>. The field of CL/NLP is no exception; in fact, the doubling only took 5 years: As of 2023, the number of papers documented in the ACL Anthology is twice the total by 2018 <cit.>. With the explosion of new publications, it is imperative but also increasingly challenging to sort out the major contexts, progress, and future directions of the field. Researchers have sought to identify emergent key terms and factors that led to disruptive shifts of paradigms. In this period of flux, however, an ever-evolving field like ours calls for deeper analysis beyond identifying these elements, in order to understand various quantitative questions regarding these rapid and disruptive shifts. For instance, to what extent is the field transformed by any one model, like ChatGPT <cit.>? How does the popularity of the latest GPT models compare with, say, that of BERT <cit.> in 2020? From there, we might even be curious about bigger questions like, “How unprecedented really is ChatGPT?”, where our empirical guesses can diverge drastically without sufficient quantitative evidence. While readers of this paper likely come in with a tacit understanding of the ebb and flow of the field, it is hard to nail down such factors that keep changing in publications. More fundamentally, the narrative describing scientific progress as the emergence of new elements does not cover the more implicit paradigm shifts, which feature the evolution, rather than the invention, of terms. The (forms of) key terms may continue to be broadly used, but are gradually overwritten with new meanings in new contexts. Language Models (LMs), as a term of art, refers to no single, static thing. It is used referentially to index a collection of models deemed relevant and representative at the time or in the context of a paper. As this is ever-changing, we are faced with a Ship of Theseus scenario <cit.>, wherein the same terminology is essentially re-invented and its referents are perhaps entirely replaced.
As such, a subtle gap between the durable collective term of “LMs” and the time-specific referent models of the moment is widening as the field progresses, threatening its stability and accessibility to new researchers. These issues call for new analyses of the subtle transformations that result from these paradigm shifts. In this paper, we seek a quantitative description of a field's continuous evolution. More specifically, we inquire into the Ship of LMs paradox, i.e., the aforementioned reconstruction of the term Language Models. We decipher this evergreen term's rapidly changing referents, contexts, and usages across time. We develop a semi-automatic, generalizable framework to extract and organize two closely related sets of keywords: (1) mentions of the collective LM concept, and (2) specific model names, and construct a dataset of 7,650 papers from the 10 most recent, major NLP conferences. We focus on several questions concerning how we as a field talk about LMs: How often do we talk about LMs, and how confidently? (<ref>) Which models, and what is special about these components? (<ref>) Moreover, how do the referents of LM vary across papers? (<ref>) Finally, we conclude the findings and future perspectives in <ref>. Our work highlights the astonishing extent of change subtly encoded in the seemingly unchanged overarching terms. We hope the Ship of LMs can serve as a new perspective to understand the field's progress, and that our methodology can serve as an entry point for additional finer-grained measurements of subtle changes in rapidly growing fields. § RELATED WORK Diachronic Analysis of the Progress in NLP Various studies review the history of NLP conferences and the ACL Anthology <cit.>, as well as the community that contributed to the field's trajectory <cit.>. Other works identified the field's transition points and themes. <cit.> proposes an automatic framework to extract key entities (tasks, dataset, etc.). <cit.> explores a similar goal via topic modeling, and <cit.> further identified such entities that causally shaped the field's important stages. More recently, there has also been a specific focus on the changes brought by LLMs <cit.> and the impact on the related communities <cit.>. Aside from text-based analysis, interviews and surveys (, etc.) have also provided valuable qualitative insights for the disruptive shifts. Paradigm Shifts and Scientific Trends have also been core topics in the broader Science of Science field <cit.> beyond CL/NLP. The existing literature mostly centers on the emergence of new, trending ideas as well as their dynamics across the author networks. For instance, <cit.> identified text snippets that are largely cited by future works, coined scientific memes, on citation graphs; <cit.> explored the diffusion process of new ideas under various social factors; and <cit.> measured the relation between the speed of producing new ideas and the size of a field. Citation/Author networks have also been introduced by recent works <cit.> as a method for the more specific background of the NLP field. Our work provides a complement to these ongoing threads. As discussed, we raise a novel scenario about the transitions within a lasting concept (Ship of LMs), which to our knowledge has not been explored. We examine the use of such terms as LMs, providing quantitative interpretations of how (and how much) our beliefs and common grounds have evolved. 
In some sense, our work can be also seen as a meta-analysis of the various works studying certain elements (e.g. “the era of LLM”, “stages of statistical Machine Translation”, “ChatGPT's impact”, etc.) We integrate these valuable findings to highlight a new question about the procedures: how exactly did we forge these of key elements into practice and eventually to our norms of language? § METHODS §.§ Dataset Construction Following common practice in prior work <cit.>, we utilize the official ACL Anthology as our data source. We collect papers accepted to the main Proceedings of three major NLP conferences (ACL, EMNLP, NAACL) held annually[NAACL is merged with ACL once every three years, and thus there is no NAACL conference data in 2020 or 2023.]. We first interact with the API to fetch metadata (e.g., Anthology ID, title, and abstract). Based on the index of a conference, we obtain the paper PDFs from the formatted Anthology URLs, and scan the text with the [https://pypdf.readthedocs.io/] tools. For post-processing, we remove excessive formatting (e.g. conference names in the footers) and identify section titles with regular expressions. The resulting dataset contains in total 7,650 papers from 10 conferences sequentially from ACL 2020 to EMNLP 2023. Our analyses focus on this most recent 4-year window where the advances regarding LMs have been especially pronounced, while our methodologies can similarly extend to a broader range. Default Setup We extract the body text by cutting off before the References section. This is marked as our default setup, and experiments are based on the default unless otherwise noted. §.§ Retrieving the Mentions of LMs To investigate the Ship of LMs problem, we start by extracting and analyzing keywords and relevant entities, a common backbone method for analysis <cit.>. For a sample paper to be related to LMs, the writing could utilize two types of mentions: (1) the collective concept of “language models”, implying the context as a generalizable discussion, and (2) the names of specific models, indicating what models are exactly considered in a limited scope. Our goal is to maintain two keyword sets that correspond to the two types. Thus, we can resolve the referents of the generic language model mentions to the specific models used, by locating, linking, and comparing the keywords from both sets. §.§.§ Notations As described, we seek to build two related collections of key entities, one marking the mentions of LMs as a general term, and the other marking specific model names. The two are respectively denoted by ℒ (from LMs) and ℳ (from Models). In practice, ℒ converges to a small, well-recognized set of terms. We define ℒ = {language model, LLM, PLM} since “language model” is the substring of most of its subcategories, e.g., “large language models”, “Korean language models”, or “language modeling”, and searching for “language model” covers all such variations. We also include the most common acronyms, “LLM” (Large Language Models) and “PLM” (Pre-trained Language Models). The construction of ℳ is elaborated in <ref>. We use m to represent a specific element from ℳ (e.g., m = BERT). For a span of text, s, we have 𝐌(s) representing the subset of elements from ℳ that indeed appear in s.[We can similarly define l and L(·) based on ℒ. In practice, however, we don't further distinguish between different LM terms, given the limited size and high convergence of ℒ.] 
While 𝐌(·) is a function of the input text by definition, we omit the input when s is the entire body (default setup) for simplicity, and write the subset as 𝐌. The omission applies similarly to the notations below. To initiate our study on any individual paper and any model(s) of interest, we introduce a family of counting functions. Given a model name m, we define N_m(·) as the count of how many times m appears in the input text. The counting functions also apply to sets of model names: For a set of models M = {m_1, m_2, ..., m_k}, we have N_M = ∑_i=1^k N_m_i Thus, 𝐌 can now be formally defined as 𝐌 = {m | N_m > 0, ∀ m ∈ℳ} Additionally, given its importance, we mark the count of all model names in a paper as N N_ℳ = N_𝐌 Similarly for the other keyword set of general LM mentions, we denote the total count of all elements in ℒ as N^ℒ. We mark ℒ as superscript for an explicit distinction with the N_m family. The total counts N^ℒ, N, and the N_m family serve as essential cornerstones of our approach since they are direct indicators of how LMs are discussed and resolved. These patterns from independent works become the changing constituents of the Ship. §.§.§ Constructing M from the text To construct a comprehensive dictionary of specific model names, we established a human-AI workflow to extract and register model names at scale. We designed a detailed in-context prompt for a state-of-the-art LLM to detect model names from the title and abstract of papers. All detected names from the full dataset are collected and ranked by frequencies as candidates [m̂_1, m̂_2, ...]. Since the same type of model as referent can have various textual forms, we aim to maintain and distinguish two attributes (as lists) for a model m: * Aliases: Different text patterns that all refer to m; e.g., both “chatgpt” and “ChatGPT” are identified separately but point to the same thing, and we need to count them together. * Variations: Refers to m, but is the extension of an existing alias (i.e. having an alias as substring). This usually suggests a specific variation of m, e.g., “T5-3B” when m = “T5” <cit.>. Searching for “T5” in the text would have already included the mentions of “T5-3B”. To compose the final list, we sorted names following a simple heuristic (illustrated in Figure <ref>): * When we encounter a new model m not in ℳ, we add m to ℳ and initialize its alias list as [m]; * Additional aliases of an entry m_j are appended to its list; * Variations of existing entries are recorded but not added as an alias; * Candidates that are not the name of a model (e.g., BLEU) are discarded. For each entry of m, we also manually retrieved the original paper or documentation to determine if there is an explicit dependency on another model. In all, ℳ has a total of 98 model entries and 146 aliases. With the two keyword lists, ℒ and ℳ, we are ready to examine how LMs are resolved and extract diachronic patterns. Our dataset and code will be open-sourced, and we provide the full list of models involved and the LLM prompts in Appendix <ref> and <ref>. § EXPERIMENTS AND FINDINGS LMs have been steadily gaining more attention from the field. <cit.> reports that papers containing the key phrase “Language Model” have increased from less than 400 pre-2019 to around 10,000 in 2023. We observe a similar trend with a finer-grained search and the focus on main conference publications in the NLP domain (Figure <ref>(a)). 
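The counts behind these trends follow directly from the notation in the Methods; a minimal sketch of the alias matching and the counting functions N_m, N, and N^ℒ is given below. The alias table shown is a toy stand-in for the released 98-entry dictionary ℳ, and the matching rules (exact substring search, case-insensitive only for the ℒ terms) are our simplification.

```python
import re
from collections import Counter

# Toy stand-in for the dictionary M: entry -> aliases (illustrative; not the released 98-entry list).
MODEL_ALIASES = {
    "BERT": ["BERT"],
    "GPT": ["GPT-2", "GPT-3", "GPT-4", "ChatGPT"],
    "T5": ["T5"],
    "LLaMA": ["LLaMA", "Llama"],
}
LM_TERMS = ["language model", "LLM", "PLM"]          # the set L

def count_mentions(body: str):
    """Return N_m for each entry of M, N = N_M, N^L, and the detected subset M(s) for one paper body."""
    n_m = Counter()
    for entry, aliases in MODEL_ALIASES.items():
        for alias in aliases:
            n_m[entry] += len(re.findall(re.escape(alias), body))
    n_total = sum(n_m.values())                       # N
    n_lm = sum(len(re.findall(re.escape(term), body, flags=re.IGNORECASE)) for term in LM_TERMS)  # N^L
    detected = {m for m, c in n_m.items() if c > 0}   # the subset of M that actually appears
    return n_m, n_total, n_lm, detected

body = "We fine-tune BERT and compare it with ChatGPT; large language models (LLMs) such as GPT-4 ..."
print(count_mentions(body))
```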
At ACL 2020, 35% of the papers contain at least one LM mention from ℒ (we refer to this portion as LM-related papers). Since then, this proportion has had a smooth, continuous growth of approximately 5% (additive) per conference, hitting 84% just three years later at EMNLP 2023. §.§ Wind in the Sails: Surging Mentions, Speeding Conclusions We begin by querying a fundamental aspect of LMs' increasing popularity: Has our use of the term LM also evolved per se, apart from the background increase noted above? As one hypothesis, LMs' popularity might be attributed mainly to the increase of share. The types of work we do and the context of LMs may have not changed significantly – it's just more authors working on the topic, more resources put into it, or other external factors. We consider the average N^ℒ of all papers at a conference (N̅^̅ℒ̅). If the case above is true, N̅^̅ℒ̅ would remain generally unchanged in both the LM-related and non-related groups. Thus, the ratio of LM-related papers determines N̅^̅ℒ̅, and we can draw an estimate from the scale of the first data point (N̅^̅ℒ̅=4.29 for 35% at ACL 2020). For instance, 54% of papers at EMNLP 2021 are LM-related, which is 1.54× that of ACL 2020 (35%). We scale N̅^̅ℒ̅ with the same ratio, 1.54 × 4.29 ≈ 6.56, as the estimate for EMNLP 2021. Fig. <ref>(b) compares the actual N̅^̅ℒ̅ and the value estimated in this way. This turns out to be a surprisingly good fit for the first half of the data. Within the first 5 conferences, the deviations between estimated and actual value are consistently less than 10% and often close to 0, e.g., we estimated 6.56 for EMNLP 2021 where the actual value is 6.59. For this period, LMs gained more attention as a topic in this period, but language describing this term remained similar. Metaphorically, the composition of the Ship remains the same, but it has more wind in its sails. However, we see a strong deviation from the estimated growth starting 2022. N̅^̅ℒ̅ has since been on a super-exponential growth, eventually being 80% higher than estimated at ACL 2023 and 168% higher at EMNLP 2023 (where N̅^̅ℒ̅ nearly doubled in just half a year). In other words, the Ship is not just sailing better (more papers), but it is also undergoing reconstruction (referents of LM are changing). The distinct patterns pre- and post-2022 despite a similar background increase highlight the necessity to study the Ship of LMs as a dynamic concept, as emergence of a term is not sufficient for mining the deeper nuances as such. What about the actual models we use? The super-linear increase of N^ℒ demands investigation into its likely causes. Authors might seek to cover more models in more detail, and their writing adapts to the strengthened claims, leading to the growth observed. Alternatively, authors might be more eager to employ trending terms even without significantly stronger evidence or fit to their work. To this end, we compare how the distributions of N^ℒ and N change over time in Figure <ref>. Each row represents a conference and the columns list all conferences which occur after it. Grid cells are pairs of conferences in comparison. We apply a Kolmogorov-Smirnov test <cit.> to each pair to determine if there is a significant difference in their distributions. We also annotate their signed mean difference, where a positive number indicates an increased mean value from the row-conference to the succeeding column-conference. 
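Each cell of this comparison reduces to a two-sample Kolmogorov-Smirnov test on per-paper counts plus a signed mean difference; a small sketch with simulated Poisson counts (placeholders for the extracted values) is shown below.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Simulated per-paper N^L counts for two conferences (placeholders for the real extracted counts).
n_lm_acl2020 = rng.poisson(lam=4.3, size=600)
n_lm_emnlp2023 = rng.poisson(lam=11.0, size=900)

stat, p_value = ks_2samp(n_lm_acl2020, n_lm_emnlp2023)
mean_diff = n_lm_emnlp2023.mean() - n_lm_acl2020.mean()   # signed mean difference (later minus earlier)
print(f"KS statistic = {stat:.3f}, p = {p_value:.2e}, signed mean difference = {mean_diff:+.2f}")
```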
The grids are colored based on test significance level and sign of mean difference (note that all colors other than the lightest correspond to p<0.05). First, we see evidence supporting our prior observations on the patterns of N^ℒ. Earlier conferences form a cluster where no significant difference is noted; yet, starting 2022, every conference has a significantly higher N^ℒ than most or even all of its predecessors. However, the distributions of N tell a distinct story. For most pairs, there is little or no evidence for a difference in distribution. There also isn't a similar line that divides the earlier and most recent conferences. For instance, the distribution of N for EMNLP in 2023 does not have a significant difference with that in 2020, despite all its specialties. In fact, we even see an opposite case: conferences in 2022 and 2023 – the exact time of the super-linear boosts of N̅^̅ℒ̅ – contain significantly fewer model mentions than before. In other words, we arrive at the conclusions faster: the information conveyed via specific models has not increased, but more is drawn about LMs collectively. Possible reasons include more focus on the whole concept of the Ship rather than maintaining its parts, or on specific planks than the wholesale renovation of the vessel; more efforts would be needed to understand the exact cause. §.§ Oak, Pine, or Cedar Planks: Which models are we talking about? With the exploding usage of the term LMs comes wider variation in the use cases and context around them. To go deeper, we must consider what writers refer to when they include LM in a paper (i.e., what the Ship is like at a certain point and how do they [re]build the ship, and not just whether it sails.) Based on all individual N_m and the hierarchy of components, we obtain the exact number of the appearances of each model by matching their aliases in the text. Thus, we put together the collective compositions for each conference and visualize them as Sunburst charts, where the component sizes correspond to their share. We show a representative comparison of EMNLP 2020 and 2023 in Figure <ref>, and display full results in Appendix <ref>. In 2020, the BERT model alone makes up 41% of N, and 55% with its dependents (We refer to a model and all its dependents as a component to distinguish a group/family of models from the root model itself.) Other significant components include RNN (20%), CNN (6%), and GPT (5%). As for 2023, the GPT component (30%) takes the lead with the advent of the notable GPT-3 models (which formed 71% of all GPT mentions). BERT models are still the 2nd largest component despite a reduction to 25%. The results seems to suggest a less unipolar composition; in fact, the share of the BERT component in 2020 is comparable to the top two in 2023 combined. We also notice the rise of more recent components, including T5 (12%) and LLaMA (7%), while RNN (20%→2%) and CNN (6%→1%) saw the most significant decreases. How much remains as the replacement of earlier components goes on? We calculate Jaccard similarity to quantify how much is shared between the composition of conferences, shown in Figure <ref>. We observe that Jaccard similarity between conferences monotonically decreases for subsequent conferences, which matches the Ship of LMs case where constituent parts change over time. For two consecutive conferences, the Jaccard similarity is usually only 71% to 85%; the index quickly drops to 45% to 57% with an interval of just two years, and to 24% to 31% in three years. 
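One simple reading of this index is the set-based Jaccard coefficient over the model entries observed at two conferences (a weighted variant over composition shares is equally plausible); a toy sketch with made-up sets follows.

```python
def jaccard(models_a: set, models_b: set) -> float:
    """Jaccard similarity between the sets of model entries observed at two conferences."""
    if not models_a and not models_b:
        return 1.0
    return len(models_a & models_b) / len(models_a | models_b)

# Illustrative model sets only (not the extracted compositions).
emnlp_2020 = {"BERT", "RoBERTa", "GPT-2", "RNN", "CNN", "XLNet"}
emnlp_2023 = {"BERT", "RoBERTa", "T5", "GPT-3", "ChatGPT", "LLaMA", "Flan-T5"}

print(f"Jaccard(EMNLP 2020, EMNLP 2023) = {jaccard(emnlp_2020, emnlp_2023):.2f}")
```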
We also note that dissimilarity is rapidly increasing, with EMNLP 2023 sharing a 52% Jaccard similarity with ACL 2023 just half a year ago, 42% with EMNLP in 2022, and ≤38% with all other predecessors. With the representing models thoroughly reshuffled in as short as <5 years, the “shelf life” of our conclusions and knowledge of LM has seen a new low, thus bringing unprecedented challenges for long-term tasks and literature studies. One dominant model or many contributors? We have seen the presence of major component models so far, and readers likely have their own tacit understandings of the “giants” in the field at the moment. Here, we emphasize the vast implications of the dominant referents of LMs. For example, if the supposedly abstract and inclusive concept of LM is implicitly equated with a certain model, we might be assigning the random, quirky traits of the model to the concept of LMs as a whole. This could unwittingly hinder the diversity, generalizability, and future usefulness of work despite a general veneer of neutrality among papers. To portray how the giants shape our reported findings, we drill down to investigate their presence in individual papers. We examine the existence of an absolute majority model component in each paper that appears more than all other components combined, i.e., occupying more than N/2. One scenario, then, would be that a single or small set of giants actually underpin the notion of LMs in papers. On the other hand, if LM is truly a general term of art, then we might also see some but not most papers dominated by a model. Figure <ref> displays the proportion of publications with absolute majority components for the full data (left) and the top 25% with the highest N^ℒ (right). The most notable components are marked with the same color as in Fig. <ref>. Other models are shown collectively as the grey bar (“others”), and the proportion with no single majority model is denoted with a striped pattern. We observe that around 80% of papers contain an absolute majority model. Specifically, we get a glance at the astounding traction of BERT before more recent paradigm shifts: it dominated up to 60.5% of all papers and 68.6% of the most LM-centered ones. Interestingly, more focus on the collective LM terms did not entail a more balanced composition. In fact, they are often more biased: The percentage of papers with an absolute majority is higher in the most LM-centered quartile than overall for all 7 conferences before EMNLP 2022. There has also been a fresh wind, however. In the most recent 3 conferences, we see fewer dominant components when a paper focuses heavily on the generalized LM terms. More importantly, the chance of an absolute majority in both groups has been on a continuing decline and both reached an unprecedentedly lower level: 70.5% for all papers and 67% for the top quarter. We call to keep monitoring the heterogeneity (or lack thereof) in LM papers given the (still) high presence of major components and a visibly surging presence of the GPT component at the most recent EMNLP 2023. §.§ Lembos or Trireme: Factoring in context The extent to which a paper focuses on LMs implies different use scenarios. For instance, a work may utilize and mention them for data processing but doesn't concern the science of LMs per se. This usually implies lower N^ℒ in contrast to another work that studies an emergent property of LLMs. 
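Before turning to the role of context, the absolute-majority check described earlier in this section (a single component accounting for more than N/2 of a paper's model mentions) can be made concrete with a short helper. This is an illustrative sketch under our own naming; the example counts are hypothetical.

```python
def absolute_majority_component(component_counts):
    """Return the component that accounts for more than half of all model
    mentions in a paper (i.e. more than N/2), or None if no component does.
    `component_counts` maps a component name to its mention count."""
    total = sum(component_counts.values())
    if total == 0:
        return None
    name, count = max(component_counts.items(), key=lambda kv: kv[1])
    return name if count > total / 2 else None

# A hypothetical paper with 7 BERT-family, 2 GPT-family and 1 T5 mention:
print(absolute_majority_component({"BERT": 7, "GPT": 2, "T5": 1}))  # -> "BERT"
```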
As we develop our understanding of how LMs are embodied in papers, we need to consider how the composition of papers factors into their usage. We rank the LM-related papers at a conference by N^ℒ and compare the ones with the most and least focus on the LM terms. Specifically, we extract the top and bottom quarters, denoted as Q4^+ and Q1^- respectively. Figure <ref> compares compositions of N from Q4^+ and Q1^- of EMNLP 2023 and shows the differences in the 10 largest components. In Q4^+, the GPT component covers an additional 13.2% of N compared with Q1^-, followed by LLaMA (6.5%) and T5 (5.0%). Meanwhile, the BERT component takes as much as an extra 23.6% in Q1^-, where the roles of the LM terms are most peripheral. We further measure how the same group changes over time, comparing model compositions at an early conference (EMNLP 2020) versus the most recent one (EMNLP 2023). We depict the change of the 15 largest components in Figure <ref>. The findings are consistent: the most LM-centered group adapts thoroughly to the latest models, while Q1^- sees a much more modest change, or even increased interest in models pre-2020 like BART <cit.> and COMET <cit.>. Most notable is the contrast for BERT: For Q4^+, the once-dominant component loses as much as 37% of N, going from 55.8% to 19%. However, in Q1^-, the BERT component remains almost untouched; it still firmly takes up about 42% of N despite all the new models. This demonstrates that the trend in less LM-centered groups is not merely a moderated or delayed version of that in the top ones, but indeed represents a distinct interpretation and resolution of LMs. The findings converge to an interesting division: the newest components or those with the latest major models (GPT-3, ChatGPT, LLaMA, and Flan-T5, inter alia; all of which are post-2022) instantly enter the most LM-centered discourse, while the least LM-centered ones persist in favoring certain earlier models for a longer period. Thus, we should be aware of the chasm between the co-occurring yet distinct contexts that eventually map to different constructions of the same Ship. For instance, a novel property found with 2023 models as default could be problematic if communicated to a BERT-centered subfield without regard, and further used to justify the use of LMs in a new stake despite wildly different ship drafts and capabilities. In return, the context difference may further hinder communications between groups representing the varied use and interpretations of LMs. § CONCLUDING REMARKS In this work, we go over the past and present of the enduring term of Language Model(s), based on an original dataset from the latest major conferences. We sort out the subtle, continuous shifts in the practical meaning of LMs, and witness how the retrofits eventually accumulate to a brand new Ship of Language Models. We examined the accelerated transmutation of our use of the term beyond the “routine updates” of the Ship, whereas the actual referents did not sync with the pace. We quantify and visualize the drastic change in the planks and timber, emphasizing the shortened period of reconstruction and the presence of dominant components. Finally, we highlight the snowballing context difference between the LM-centered research and the more peripheral applications, and its consequences.
Our work seeks to delineate the shape of the drifting concepts in the tide of time; yet, snapshots taken at the galas still struggle to keep up with differentials that once unfolded over a decade but now arrive within months or weeks. We have in fact seen more interesting signs, standing at another epochal crossroads of the ChatGPT era: the unprecedented dissimilarity of compositions, or on the other hand, a more diverse, multipolar representation in model choices, to name a few. Perhaps, what is crafted from our hand signals the Age of Sail that we are yet to know – where it'd be too much for an epic Argo from the fanta-seas to explore. Future Directions Our findings depict the most dramatic reshuffles of models, while the nuance of LMs is not limited to the model level. The scope can extend to the broader, underlying architecture level, as readers might have noticed the dominance of the Transformer models. Conversely, it is also intriguing to zoom in on the variations of models, as models are eventually instantiated by specific versions just like LMs are resolved to models. The understanding of scientific progress can also benefit from various other genres. Causal links between our work and the identified key factors in prior works could draw a more vivid picture of the field's trajectories. Qualitative studies, such as interviews with the paper authors, can also be an informative complement that delves deeper into our beliefs and explores where the patterns in different groups come from. Aside from the LM-specific discussions, we also highlight the broad existence of the Ship of Theseus scenario in various other terms, topics, and fields. § LIMITATIONS While our methodologies can be naturally applied to similar scenarios, we would like to note that the current analyses and implementations have limitations. First, although we have gone over various findings in recent years with the most dramatic advances of LMs, a holistic overview of a research topic, let alone an entire field, is not covered by the scope of a few years. It would be meaningful if these most rapid changes could be connected with the decades of conceptualization and exploration preceding the engineering breakthroughs. Similarly, using main conference papers at the major international venues as a proxy of the NLP community has its limitations. We encourage future work to take broader consideration of the essential contributions that are less represented by the relatively convergent selections of such venues, e.g., regional conferences on dialects or indigenous languages. More broadly, text-based methods alone are not sufficient to cast the intricate dynamics of science. Scientific communities are not mere carriers of printed works, and the influence of a language model or a paper is far beyond academic language use. Various other important factors and impacts should be considered for a comprehensive description of scientific progress and any specific scientific products: the status quo of subfields, monetary and environmental costs of implementation, societal impacts and the public's perception, etc. § ETHICAL CONSIDERATIONS Our data is collected from the ACL Anthology on the terms of the Creative Commons 4.0 BY (Attribution) license, which allows unlimited reproduction, distribution, and hosting of materials for non-commercial purposes[See CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/) and the Anthology's copyright statement (https://aclanthology.org/faq/) for more information.].
The authors report no other potential ethical considerations. § THE DATASET CONSTRUCTION PIPELINE We provide more details about the semi-automated dataset creation process: the full list of models involved, and the prompts used for the automated part of model name identification. The full dataset and more details will be published for future use in the final version of this project. §.§ Full list of models ChatGPT, GPT-3, GPT-4, BERT, T5, GPT-3.5, GPT-2, LLaMA, RoBERTa, PaLM, CLIP, BART, XLM-R, Alpaca, BLOOM, mT5, InstructGPT, mBERT, GPT-J, Flan-T5, OPT, Codex, COMET, ELECTRA, Longformer, mBART, SimCSE, BLOOMZ, BigBird, BLIP, DeBERTa, CodeT5, Switch Transformer, Vicuna, T0, PEGASUS, LSTM, ALBERT, DPR, Macaw, LXMERT, SpanBERT, TinyBERT, ViLBERT, TransE, RotatE, XLM, Linformer, kNN-LM, kNN-MT, REALM, RETRO, GraphCodeBERT, Sentence-BERT, RNN, HyperCLOVA, CodeGen, Dolly, Pythia, LaMDA, FLAN, BLIP-2, XLNet, GPT, ELMo, BioBERT, DialoGPT, RemBERT, PaLM 2, DistilBERT, SciBERT, ClinicalBERT, M2M100, GloVe, LASER, word2vec, fastText, LaBSE, CNN, wav2vec, UNITER, MASS, MT-DNN, BlenderBot, DistMult, OFA, CMLM, HRED, ERNIE, ConveRT, MiniLM, Galactica, RuleTakers, Claude, LayoutLM, ST-DNN, IRNet §.§ Prompts for automated model name extraction We use GPT-4-turbo as our base LLM to identify potential names from paper abstracts and incorporate in-context examples <cit.>. A request consists of two parts of inputs: a static System instruction, and individual User Inputs for each request. An example use case is shown below: System You are an assistant with excellent expertise in searching through academic text. You will be given the Abstract of an academic paper in the field of Natural Language Processing. Your task is to retrieve whether the authors mention that they mentioned some *specific* language model in their writing. And if so, you need to accurately find the names of all such models. Important note 1: "LLM" and "PLM" are not model names, they refer to the generic terms of "Large Language Model" and "Pretrained Language Model". Important note 2: Do not include any models that are proposed by the authors themselves. For instance, if a paper says "we propose a new model, GPT-OURNEW, which performs better than GPT-3", your answer should only include "GPT-3" and not "GPT-OURNEW". Return all the specific model names (don't miss out any), separated by a comma. If you believe you didn't see any model name, simply return "None". Only respond with the comma-separated model names. Do not include any other text in your response!!! Some examples: Input: This paper explores the potential of leveraging Large Language Models (LLMs) for data augmentation in multilingual commonsense reasoning datasets where the available training data is extremely limited. To achieve this, we utilise several LLMs, namely Dolly-v2, StableVicuna, ChatGPT, and GPT-4, to augment three datasets: XCOPA, XWinograd, and XStoryCloze. Subsequently, we evaluate the effectiveness of fine-tuning smaller multilingual models, mBERT and XLMR, using the synthesised data. We compare the performance of training with data generated in English and target languages, as well as translated English-generated data, revealing the overall advantages of incorporating data generated by LLMs, e.g. a notable 13.4 accuracy score improvement for the best case. Furthermore, we conduct a human evaluation by asking native speakers to assess the naturalness and logical coherence of the generated examples across different languages. 
The results of the evaluation indicate that LLMs such as ChatGPT and GPT-4 excel at producing natural and coherent text in most languages, however, they struggle to generate meaningful text in certain languages like Tamil. We also observe that ChatGPT falls short in generating plausible alternatives compared to the original dataset, whereas examples from GPT-4 exhibit competitive logical consistency. Output: Dolly-v2,StableVicuna,ChatGPT,GPT-4,mBERT,XLMR Input: Large Language Models (LLMs) have showcased impressive performance. However, due to their inability to capture relationships among samples, these frozen LLMs inevitably keep repeating similar mistakes. In this work, we propose our Tuning-free Rule Accumulation (TRAN) framework, which guides LLMs in improving their performance by learning from previous mistakes. Considering data arrives sequentially, LLMs gradually accumulate rules from incorrect cases, forming a rule collection. These rules are then utilized by the LLMs to avoid making similar mistakes when processing subsequent inputs. Moreover, the rules remain independent of the primary prompts, seamlessly complementing prompt design strategies. Experimentally, we show that TRAN improves over recent baselines by a large margin. Output: None Input: Dialogue State Tracking (DST) is of paramount importance in ensuring accurate tracking of user goals and system actions within task-oriented dialogue systems. The emergence of large language models (LLMs) such as GPT3 and ChatGPT has sparked considerable interest in assessing their efficacy across diverse applications. In this study, we conduct an initial examination of ChatGPT’s capabilities in DST. Our evaluation uncovers the exceptional performance of ChatGPT in this task, offering valuable insights to researchers regarding its capabilities and providing useful directions for designing and enhancing dialogue systems. Despite its impressive performance, ChatGPT has significant limitations including its closed-source nature, request restrictions, raising data privacy concerns, and lacking local deployment capabilities. To address these concerns, we present LDST, an LLM-driven DST framework based on smaller, open-source foundation models. By utilizing a novel domain-slot instruction tuning method, LDST achieves performance on par with ChatGPT. Comprehensive evaluations across three distinct experimental settings, we find that LDST exhibits remarkable performance improvements in both zero-shot and few-shot setting compared to previous SOTA methods. The source code is provided for reproducibility. Output: ChatGPT User Input Input: We propose LLM-FP4 for quantizing both weights and activations in large language models (LLMs) down to 4-bit floating-point values, in a post-training manner. Existing post-training quantization (PTQ) solutions are primarily integer-based and struggle with bit widths below 8 bits. Compared to integer quantization, floating-point (FP) quantization is more flexible and can better handle long-tail or bell-shaped distributions, and it has emerged as a default choice in many hardware platforms. One characteristic of FP quantization is that its performance largely depends on the choice of exponent bits and clipping range. In this regard, we construct a strong FP-PTQ baseline by searching for the optimal quantization parameters. Furthermore, we observe a high inter-channel variance and low intra-channel variance pattern in activation distributions, which adds activation quantization difficulty. 
We recognize this pattern to be consistent across a spectrum of transformer models designed for diverse tasks such as LLMs, BERT, and Vision Transformer models. To tackle this, we propose per-channel activation quantization and show that these additional scaling factors can be reparameterized as exponential biases of weights, incurring a negligible cost. Our method, for the first time, can quantize both weights and activations in the LLaMA-13B to only 4-bit and achieves an average score of 63.1 on the common sense zero-shot reasoning tasks, which is only 5.8 lower than the full-precision model, significantly outperforming the previous state-of-the-art by 12.7 points. Code is available at: https://github.com/nbasyl/LLM-FP4. Output: Agent Response BERT,Vision Transformer,LLaMA-13B § SUNBURST GRAPHS The following pages show the Sunburst graphs for all 10 conferences in chronological order.
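For completeness, the request format described in this appendix (a static System instruction containing in-context examples, plus one User Input per abstract, answered with comma-separated model names or "None") could be assembled roughly as follows. The sketch assumes the openai>=1.0 Python client and the "gpt-4-turbo" model alias; the helper name, the temperature setting, and the parsing logic are ours and are not part of the original pipeline.

```python
from openai import OpenAI  # assumes the openai>=1.0 Python client

SYSTEM_INSTRUCTION = "..."  # the static System prompt shown above, in-context examples included

def extract_model_names(abstract, client=None, model="gpt-4-turbo"):
    """Send one abstract as the User Input and parse the comma-separated
    model names returned by the LLM ('None' means no model was mentioned)."""
    client = client or OpenAI()
    response = client.chat.completions.create(
        model=model,
        temperature=0,
        messages=[
            {"role": "system", "content": SYSTEM_INSTRUCTION},
            {"role": "user", "content": f"Input: {abstract}"},
        ],
    )
    answer = response.choices[0].message.content.strip()
    if answer.lower() == "none":
        return []
    return [name.strip() for name in answer.split(",") if name.strip()]
```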
http://arxiv.org/abs/2407.01808v1
20240701211348
Toward Wireless System and Circuit Co-Design for the Internet of Self-Adaptive Things
[ "Diptashree Das", "Mohammad Abdi", "Minghan Liu", "Marvin Onabajo", "Francesco Restuccia" ]
eess.SY
[ "eess.SY", "cs.SY" ]
fancy Toward Wireless System and Circuit Co-Design for the Internet of Self-Adaptive Things Diptashree Das, Mohammad Abdi, Minghan Liu, Marvin Onabajo and Francesco Restuccia Department of Electrical and Computer Engineering, Northeastern University, United States July 8, 2024 ======================================================================================================================================================================================= § ABSTRACT The deployment of a growing number of devices in iot networks implies that uninterrupted and seamless adaptation of wireless communication parameters (e.g., carrier frequency, bandwidth and modulation) will become essential. To utilize wireless devices capable of switching several communication parameters requires real-time self-optimizations at the rfic level based on system level performance metrics during the processing of complex modulated signals. This article introduces a novel design verification approach for reconfigurable RFICs based on end-to-end wireless system-level performance metrics while operating in a dynamically changing communication environment. In contrast to prior work, this framework includes two modules that simulate a wireless channel and decode waveforms. These are connected to circuit-level modules that capture device- and circuit-level non-idealities of RFICs for design validation and optimization, such as transistor noises, intermodulation/harmonic distortions, and memory effects from parasitic capacitances. We demonstrate this framework with a receiver (RX) consisting of a reconfigurable complementary metal-oxide semiconductor (CMOS) low-noise amplifier (LNA) designed at the transistor level, a behavioral model of a mixer, and an ideal filter model. The seamless integration between system-level wireless models with circuit-level and behavioral models (such as VerilogA-based models) for RFIC blocks enables to preemptively evaluate circuit and system designs, and to optimize for different communication scenarios with adaptive circuits having extensive tuning ranges. An exemplary case study is presented, in which simulation results reveal that the LNA power consumption can be reduced up to 16x depending on system-level requirements. System-level validation, adaptive wireless systems and circuits, hardware/software co-design, energy-aware optimization, simulation-based system testing. § INTRODUCTION fancy By 2030, over 50 billion devices will be absorbed into the iot <cit.>. The sheer number of iot devices implies that continuous and seamless adaptation of wireless communication parameters (e.g., carrier frequency, bandwidth and modulation) will become essential. One of the wireless system design goals is to provide sensing services for a plethora of applications. However, the implementations are constrained because sensing and communication circuit parameters have to be optimized for different frequency bands, modulation schemes and channel conditions <cit.>. For this reason, reconfigurability and self-optimization will be a cornerstone of iot networking paradigms <cit.>. Furthermore, it becomes increasingly important to jointly simulate circuit and system level components for design optimization and validation prior to the fabrication of chips. On the other hand, conventional rfic are still statically optimized, which does not allow for real-time self-optimization at the intersection of hardware and software. 
Low-power rfic are typically designed and optimized specifically for the worst-case scenario of a given communication standard, which leads to performance limitations and excessive power consumption. Furthermore, circuit-level tuning usually optimizes block-level performance. Conversely, in real systems, the circuit-level linearity and dynamic range requirements strongly depend on the presence of nearby interference signals and bias conditions <cit.>, while slow-varying aspects such as temperature sensitivity or device-level aging effects can only be computed and compensated throughout the device lifetime <cit.>. Digitally-controlled calibration is a popular design approach to improve the performance and testability of mixed-signal integrated circuits <cit.>. However, RFIC calibrations of multiple interconnected circuit blocks typically do not address the interdependence of the circuit-specific parameters during tuning, which creates limitations during simulations for design validations. They normally also do not account for system-level parameters such as ser or throughput, particularly when relying on single-tone/two-tone signals or other alternative test signals instead of the actual modulated signals <cit.>. A comprehensive survey about the integration of ml into integrated circuit design has been provided in <cit.>. The use of ml during design and optimization can be another potential method to ensure functionality under consideration of interdependence between circuit and system parameters. Considering the above-mentioned challenges and opportunities, we propose a new RFIC co-design and validation paradigm at the intersection of hardware and software, which is summarized in Fig. 1. Our joint simulation framework considers system-level performance metrics to facilitate the design of adaptive rfic circuits. The developed interoperability between different design tools allows to (i) generate arbitrary modulated waveforms while modeling different wireless channel conditions, (ii) utilize foundry-supplied models that capture transistor-level non-idealities, (iii) simulate both behavioral analog RFIC blocks and circuit level designs, and (iv) extract circuit-level and system-level performance metrics. Furthermore, we introduce simulations with modulated signal packets and modeled channel impairments to extract system-level parameters such as ber and evm; and accordingly optimize several circuit conditions (e.g., gain, noise figure and linearity characteristics) for optimization of energy/performance tradeoffs. As a use-case scenario, we leverage our framework to optimize a receiver (RX) composed of a reconfigurable cmos lna designed at the transistor-level, a behavioral model of a mixer and an ideal filter model. The simulation results show that our framework can reduce the lna power consumption by up to 16x under varying ber requirements. In general, the simulation framework can be used as a tool during the design and validation of adaptive wireless RXs that operate with dynamically changing requirements. § EXISTING WORK AND CURRENT CHALLENGES The fast-changing IoT ecosystem leads to a very dynamic nature of the wireless channel that calls for complex hardware and software systems, including adaptive and tunable transceiver designs. It has been explained in <cit.> how tunable and reconfigurable radio frequency (RF) technologies provide potential solutions for efficient spectrum sharing. 
In <cit.>, a simulation-based approach develops a multi-user IoT communication system by taking into account carrier synchronization and data broadcasts on multiple channels for several of low‐power devices present in the network. The work in <cit.> realized a learning-based RF signal classifier on a field-programmable gate array (FPGA) to reduce latency and power consumption, which requires prior knowledge of signals and spectrum. Adaptability in the RX front-end opens up the opportunity to collect and process data from a dynamic wireless channel. CMOS RFIC prototypes have been designed to enhance linearity and power handling requirements for cellular applications <cit.>. However, RFICs can still be complemented with real-time adaptation algorithms to optimize transceiver operation. <cit.> introduces a real-time two-dimensional real-time adaptation method to configure a RX for optimum NF and linearity with a certain power budget and desired signal level. The design of adaptive wireless RXs with single or multi-parameter optimization is highly relevant for specific incoming signals, and requires validating performance for different wireless standards, channel conditions, and chip-level performance variations (e.g., CMOS fabrication process variations). A key consideration during the design of adaptive RFICs is that reconfigurability is tightly coupled with power consumption. In addition, the need for wide tuning range and energy/power scalable designs calls for the seamless combination of circuit and system level adjustments. An evaluation of performance versus power trajectory for RF front-end functional blocks has been explored in <cit.>. In addition to determining the inter-dependencies of circuit parameters for each block in the RF front-end, the optimization of analog RF circuits based on feedback control with digitally-controlled features has been demonstrated <cit.>, which requires to design complex control strategies for different conditions in the presence of channel and device level variations. The problem of energy efficiency and channel conditions is conventionally controlled by adaptive modulation and coding <cit.>, making it harder for the RF front-end to adapt to any changes in the channel. Integrating tunable RFICs into spectrum-agile wireless networks can allow to self-optimize RF circuit parameters to produce a desired output signal within the optimum power budget based on existing channel conditions. <cit.> describes a channel-adaptive RX design with process variation tolerance. Furthermore, a neural network based self-learning RF system has been demonstrated in <cit.>, which is able to reduce power consumption of wireless transceiver systems by dynamically tuning the circuit components while monitoring the effects of real-time wireless channel conditions and the fabrication process variations to produce a desired ber and threshold evm. This is achieved with an on-chip look-up table that requires to be updated based on expected channel conditions. The difficulty of simulating the entire system is a major impediment to verify the merits of the integrated hardware-software based wireless system. This work aims to design and validate a joint simulation platform that can address the inter-dependencies of circuit parameters in the RF front-end to develop self-optimized wide-range reconfigurable RX architecture together with wireless network that is capable of changing communication parameters. 
Furthermore, this approach to incorporate and verify system-level performance-driven tuning features using reconfigurable RFIC blocks is especially compelling to enhance resilience to sudden changes in the environment and accordingly optimize to achieve performance targets with optimum power consumption. Hence, the presented simulation framework is expected to ease the adaptive design and optimization of closed-loop self-supervised Internet of Self-adaptive Things with different modulated signals, wireless channel models and adaptive RFIC designs together with circuit level non-idealities. § OPTIMIZATION FRAMEWORK OVERVIEW We consider a scenario as depicted in Fig. 1, where we model multipath effects and dynamic fading along with arbitrary modulated signal packets during the design of robust RFIC adaptability to receive and process information. Here, the waveform generation allows to change the modeled phy parameters (such as modulation scheme, power carried by the spectrum components, RF sampling frequency, channel coding and bandwidth). Currently, only the snr is used as a primary indicator of variable channel conditions. The variable SNR-based model encompasses several channel impairments such as additive white Gaussian noise (AWGN), multi-path fading, variable distance between transmitter (TX) and RX, and path-loss among other interference as indicated in Fig. 1. The sensitivity requirement of the RF front-end strongly depends on the SNR. <cit.> includes a description of typical SNR values for channels, where the comparative results with different modulation schemes provide insights into the expected SNR values for various channel conditions. It is also envisioned that future spectrum-agile TXs will lead to more variations of interference levels, SNR values, and modulation schemes. As depicted in Fig. 1, the baseband waveform is processed to infer parameters associated with system-level performance such as ber, ser, EVM, mer and per. This approach is based on the goal to develop algorithms for accurate data-driven optimization of RFICs based on system performance. Next, we present an architecture with a reconfigurable RF front-end circuit that is evaluated through the simulation framework from this work. § DESIGN AND MODELING OF RF FRONT-END CIRCUITS WITH DYNAMIC RECONFIGURABILITY §.§ Flexible RF Front-end Architecture Analog RF front-end reconfigurability enhances co-existence and spectrum sharing in crowded environments <cit.>, as well as allows to vary parameters such as data rates on demand <cit.>. As shown in Fig. 2, the reconfigurable RF RX front-end in this work consists of a digitally programmable LNA circuit designed at the transistor-level with tunable bias current, a behavioral model of a direct down-conversion in-phase/quadrature (I/Q) mixer stage, and two ideal low-pass filter (LPF) models in the I and Q paths. Circuit-level simulations with device-level non-idealities can capture impacts of frequency response limitations, inter-modulation products, thermal and flicker noises, parasitic capacitances/resistances, and higher-order non-linearities of the transistors, that allows block-level specifications assessment such as gain, noise figure (NF), input third-order inter-modulation intercept point (IIP3) and impedance matching conditions. At the same time, the ability to include some behavioral models of circuit blocks aids the early design and system-level verification phase. 
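To make the waveform-plus-channel stage described above concrete, the following baseband-equivalent sketch generates a QPSK symbol stream and passes it through an AWGN channel at a chosen SNR. The authors' implementation is Matlab-based and also models multi-path fading, path loss and the full packet framing; the Python code below is only an illustrative stand-in, and all names are ours.

```python
import numpy as np

def qpsk_modulate(bits):
    """Map pairs of bits to QPSK symbols with unit average power."""
    b = np.asarray(bits).reshape(-1, 2)
    i = 1 - 2 * b[:, 0]          # bit 0 -> +1, bit 1 -> -1
    q = 1 - 2 * b[:, 1]
    return (i + 1j * q) / np.sqrt(2)

def awgn_channel(symbols, snr_db):
    """Add complex white Gaussian noise at the requested SNR in dB,
    assuming unit average symbol power."""
    noise_var = 10 ** (-snr_db / 10)
    noise = np.sqrt(noise_var / 2) * (np.random.randn(len(symbols))
                                      + 1j * np.random.randn(len(symbols)))
    return symbols + noise

bits = np.random.randint(0, 2, 4272)            # two packets, as in the case study
rx = awgn_channel(qpsk_modulate(bits), snr_db=20.0)
```

The resulting samples would then be up-converted and handed to the circuit simulator, as described in the next subsection.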
Most importantly, the accurate transient simulations allow to account for the impacts of circuit-level imperfections to assess system-level metrics with changing environment. To overcome existing inflexible wireless standards, inefficient spectrum use and potential security threats in the wireless network, the flexible adaptation of phy parameters of the signal proves an effective and long-standing solution. The work in <cit.> demonstrated if TXs were allowed to dynamically switch phy parameters such as carrier frequency and symbol modulation, the TXs would become less jamming-prone and achieve more efficient spectrum occupation. To give an example, Fig. 2 shows the selected test signal with a format that corresponds to the Zigbee phy packet structure. This signal with any modeled channel impairments has been applied as input signal during circuit and behavioral simulations of the RF front-end. The frame starts with a known preamble for synchronization, which exhibits high auto-correlation and low cross-correlation features. The preamble is followed by a start-of-frame delimiter that marks the beginning of the header. The header consists of the frame length in bytes and the modulation code associated with the modulation scheme used. A data checksum (CSC) is attached to the header and data parts respectively, such that erroneous frames can be detected and discarded. The data part of the frame can support a MAC Protocol Data Unit (MPDU) with a size of up to 2^8-1 bytes. As depicted in Fig. 2, the emulated transmitted packets with added channel noise and imperfections are transferred to the signal source of a circuit simulator (Cadence Spectre) for the RF front-end simulation. The model-based design simulator (Matlab) is capable of saving the raw data in the comma-separated values (CVS) file format, which has been incorporated into the circuit simulator by the virtue of a piece-wise linear signal source. Thus, the reconfigurability settings and biasing conditions of the RF front-end blocks can be evaluated during the circuit design phase to optimize characteristics such as noise levels, linearity, RX sensitivity and impedance matching conditions. As mentioned earlier, the mathematical analysis of the circuit simulator output provides system-level performance metrics (such as ber, ser, mer, and evm) for both circuit and system-level optimization with modulated signals. Here, the ber is considered as one of the most significant phy performance indicators. Since not all inaccuracies lead to bit flips, evm is another appropriate measure to quantify the quality of the received signal after processing in the RF front-end, which allows to capture important channel and RX non-idealities <cit.>. Various imperfections such as changing channel conditions have impacts on the evm since they can cause the received constellation points to deviate from their original ideal locations. §.§ Reconfigurable LNA A 2.4 GHz single-ended cascode common-source LNA with inductive source degeneration has been selected as an exemplary reconfigurable narrowband LNA design, of which the bias current is tuned to control performance/power tradeoffs as shown in Fig. 2. Since the LNA is the first block of the RF front-end, its performance is particularly crucial when receiving noisy packets at low power levels. The standalone LNA was designed in a standard 65nm CMOS technology, and simulated for bias currents ranging from 31.25 μA to 500 μA. 
By changing the bias current +/-20% to +/-50% from its design point (125 μA), the circuit characteristics such as gain, IIP3, and NF can be adjusted with a corresponding change of the power consumption that is directly proportional to the bias current. Fig. 3 summarizes the gain, NF and IIP3 of the LNA from transistor-level simulations. §.§ Mixer and Baseband Signal Processing To portray the capability of combining transistor-level schematic simulations (i.e., the LNA) with behavioral blocks during transient circuit simulations to assess system-level metrics under specified/changeable wireless channel conditions, a direct down-conversion I/Q mixer has been modeled in VerilogA, followed by two ideal LPFs (one in each RX path). The mixer model includes variable gain, IIP3 and NF based on typical reconfigurable circuit parameters of down-conversion mixers. In this proof-of-concept, the performance assessment of the RF front-end was performed with QPSK-modulated signals and channel impairments according to the signal generation in Fig. 2. The channel model accounts for AWGN, path-loss and generic frequency-selective multi-path fading that can introduce different path attenuation, delay, and Doppler shift. The baseband demodulator after the LPF is also implemented mathematically, where according to the decision boundaries defined by the associated constellation, the I/Q samples are detected and compared against the ground-truth I/Q samples to compute the desired system-level parameters. Future work will be devoted to automatic tuning of parameters in the RF front-end circuits based on the extracted system-level performance metrics, to optimize operation under varying spectrum conditions within specified power consumption targets. § CASE STUDY: RECONFIGURABLE RF FRONT-END SIMULATION This section summarizes results from the use of our co-simulation framework for the example RF front-end configuration described in the previous section. The simulations were primarily carried out to evaluate the performance vs. power trade-offs associated with the reconfigurable LNA design. As mentioned in the previous section, the complete testbench used for the simulations includes a VerilogA-based mixer and ideal LPF. The mixer has been modeled with flexible circuit parameters in which the gain, IIP3 and NF can be changed as part of the design exploration. In this work, we have used an ideal behavioral mixer model with a gain of 10 dB, IIP3 of 5 dBm and NF of 10 dB for the proof-of-concept simulations <cit.>. The upper left image in Fig. 4 displays the generated QPSK-modulated signal with the packet structure defined in Fig. 2. A binary ground-truth message is randomly generated and up-converted to produce the QPSK modulation with a center frequency of 2.4 GHz. The SNR of the received signal was selected as 20 dB to emulate typical wireless network conditions with channel impairments and distortion <cit.>. The corresponding waveform of the generated QPSK signal in Fig. 4 is corrupted by the channel imperfections and noise, and fed to the RF front-end. The simulated I/Q signals at the LPF outputs are shown on the right side of Fig. 4. These I/Q signals are transferred for mathematical baseband processing using an envelope detector and moving average filter (as displayed in Fig. 2) during model-based simulations. The demodulated I/Q samples are then compared and verified with the ground-truth data to extract the BER, SER, EVM and MER. The low BER and SER values result from the high accuracy of the demodulation process.
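The metric-extraction step just described, comparing demodulated I/Q samples against the ground truth to obtain BER, SER, EVM and MER, can be sketched as follows. It reuses the baseband-equivalent QPSK mapping from the earlier sketch and assumes ideal synchronization and an ideal envelope/averaging stage, which the actual mixed-signal testbench does not; the RMS EVM (in percent) and MER definitions follow the usual textbook forms and may differ in detail from the authors' post-processing.

```python
import numpy as np

def qpsk_demodulate(symbols):
    """Hard-decision QPSK demodulation matching the mapping used earlier."""
    bits = np.empty((len(symbols), 2), dtype=int)
    bits[:, 0] = (symbols.real < 0).astype(int)
    bits[:, 1] = (symbols.imag < 0).astype(int)
    return bits.reshape(-1)

def link_metrics(rx_symbols, ref_symbols, rx_bits, ref_bits):
    """BER, SER, EVM (percent, RMS) and MER (dB) from received vs. ideal samples."""
    err_vec = rx_symbols - ref_symbols
    ref_power = np.mean(np.abs(ref_symbols) ** 2)
    err_power = np.mean(np.abs(err_vec) ** 2)
    evm = 100 * np.sqrt(err_power / ref_power)
    mer_db = 10 * np.log10(ref_power / err_power)
    ber = np.mean(rx_bits != ref_bits)
    # A symbol is in error if either of its two bits differs.
    ser = np.mean((rx_bits.reshape(-1, 2) != ref_bits.reshape(-1, 2)).any(axis=1))
    return {"BER": ber, "SER": ser, "EVM_percent": evm, "MER_dB": mer_db}

# Using bits, rx and qpsk_modulate from the earlier AWGN sketch:
ref_syms = qpsk_modulate(bits)
print(link_metrics(rx, ref_syms, qpsk_demodulate(rx), bits))
```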
In this case, the BER and SER are in the range of 10^-5 to 10^-3 for all LNA bias current conditions. On the other hand, the MER values are 18.9 dB and 11.2 dB for LNA bias currents of 500 μA and 31.25 μA respectively. The proximity of the MER value to the specified channel SNR (20 dB) is an indication of a noise-resilient system, which results from biasing the LNA with a high current (i.e., high power consumption) to achieve a low NF. Fig. 4 includes the extracted constellations of the QPSK signal after the processing by the RF front-end with the highest and lowest LNA bias currents, which also show the power vs. performance tradeoff. Table I includes an overview of the system-level performance for different LNA bias current settings. We have simulated two QPSK packets (4272 bits) with randomly generated ground-truth data to evaluate the high-level system performance with the co-design platform. It can be seen from Table I that the BER, SER and EVM are considerably lower for LNA bias currents in the 125 μA to 500 μA range. In addition, no packet errors (PEs) occurred in the 125 μA to 500 μA bias current range. The simulation results show robust adaptability, which will be realized with an application-specific feedback control loop as depicted in Fig. 1. Depending on the application-specific BER requirement and channel conditions, energy consumption can be significantly reduced through the 62.5 μA and 31.25 μA bias current settings. § NEXT STEPS: BEYOND TRADITIONAL WIRELESS SYSTEM AND RFIC INTEGRATION We foresee that the research presented in this paper will be the foundation for the development of reconfigurable RXs with real-time self-optimization capabilities. The work described in this article is the first step associated with the co-design and verification of wireless systems and circuits that can tolerate and adapt to interference conditions using novel RFIC optimization methods with unprecedented design flexibility based on specified system-level performance metrics. As depicted in Fig. 5, we anticipate that the presented co-simulation and design verification framework will contribute to the development of several novel features: (i) ML-based self-decisive CMOS RF front-ends that adapt to changing network conditions, (ii) development of hardware-software prototypes based on joint integrated circuit simulations and wireless data collection for enhanced modeling, (iii) several digitally-controlled tuning knobs in each analog block as indicated in Fig. 1, (iv) collection of waveform datasets through the experimental testbenches to train ML algorithms, and (v) training of a deep reinforcement learning (DRL) agent, once the ML algorithms are developed, on data collected both experimentally and synthetically using the proposed framework. During the data collection, one can deploy the optimal policy on FPGA-based platforms such as SDRs <cit.> to meet the challenging time constraints involved, and to reduce the overall power consumption. The presented simulation framework features tools to jointly optimize the power-efficiency of digitally-controlled analog circuits and the computation resources needed to implement adaptive ML-based control.
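As a placeholder for the application-specific feedback loop mentioned above, a minimal bias-selection rule could simply pick the lowest LNA bias current whose measured BER still meets the target. This is our own illustrative sketch, not the envisioned DRL-based controller, and the numbers in the example are hypothetical rather than taken from Table I.

```python
def select_lna_bias(measured_ber, ber_target):
    """Choose the lowest LNA bias current (uA) whose measured BER meets the
    application-specific target; fall back to the highest setting otherwise.
    `measured_ber` maps a bias current (uA) to the BER observed at that setting."""
    feasible = [bias for bias, ber in measured_ber.items() if ber <= ber_target]
    return min(feasible) if feasible else max(measured_ber)

# Hypothetical measurements, for illustration only:
measurements = {31.25: 3e-3, 62.5: 8e-4, 125.0: 5e-5, 250.0: 2e-5, 500.0: 1e-5}
print(select_lna_bias(measurements, ber_target=1e-3))  # -> 62.5
```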
To further minimize the power consumption introduced by running the ML method, one can change the operational frequency or even limit it to the times when the system-level performance drops below a certain threshold or experiences a sudden change. The simulation results in Section V show that the RF front-end in this case study has a sensitivity of -100 dBm. We have presented a case study with a QPSK packet, but other modulation schemes (e.g., ASK, BPSK) with different SNR values, data rates, bandwidths, and center frequencies can be employed. The reconfigurable RX design and simulation approach is intended to facilitate the use of flexible wireless system parameters by reconfiguring circuit parameters for optimum performance and spectrum sharing in real time. § CONCLUSION This article presented a new co-design simulation platform to evaluate trade-offs between performance and power consumption by integrating the design and modeling of wireless systems and reconfigurable RF front-end circuits. A main contribution of the integrated framework is to provide an effective and long-lasting tool for the design of spectrum-agile RXs with adaptive RFICs for dynamic optimizations under changing environmental conditions. The execution of the joint simulation, which combines a dynamic wireless channel model with a reconfigurable RF front-end that includes circuit-level non-idealities, relies on the integration of several modeling and design software tools to validate the performance. The reconfigurability of the RF front-end aims to reduce power consumption significantly based on application-specific system-level performance targets. However, the primary goal is to execute the seamless adaptation of the RF front-end by optimizing its circuit parameters during real-time execution with spectrum-agile transmission to produce the desired end-to-end system-level performance. The integration of digital tuning capabilities in each of the analog blocks within RFICs will be particularly useful for realizing future wireless system paradigms with an unprecedented degree of freedom. § ACKNOWLEDGEMENT This material is in part based upon work supported by the NSF under grant no. ECCS-2146754, CCF-2218845, CNS-2134973, CNS-2120447, ECCS-2229472, and is supported in part by funds from OUSD R&E, NIST, and industry partners as specified in the Resilient & Intelligent NextG Systems (RINGS) program, as well as by the Air Force Office of Scientific Research under contract number FA9550-23-1-0261, and by the Office of Naval Research under award number N00014-23-1-2221. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements of the NSF and the U.S. Government.
http://arxiv.org/abs/2407.02538v1
20240701232405
CGRclust: Chaos Game Representation for Twin Contrastive Clustering of Unlabelled DNA Sequences
[ "Fatemeh Alipour", "Kathleen A. Hill", "Lila Kari" ]
q-bio.GN
[ "q-bio.GN", "cs.LG", "F.2.2, I.2.7" ]
Fatemeh Alipour (falipour@uwaterloo.ca)^1, Kathleen A. Hill (khill22@uwo.ca)^2, Lila Kari (lila.kari@uwaterloo.ca)^1; ^1School of Computer Science, University of Waterloo, Canada; ^2Department of Biology, University of Western Ontario, Canada Background: Traditional supervised learning methods applied to DNA sequence taxonomic classification rely on the labor-intensive and time-consuming step of labelling the primary DNA sequences. Additionally, standard DNA classification/clustering methods involve time-intensive multiple sequence alignments, which impacts their applicability to large genomic datasets or distantly related organisms. These limitations indicate a need for robust, efficient, and scalable unsupervised DNA sequence clustering methods that do not depend on sequence labels or alignment. Results: This study proposes CGRclust, a novel combination of unsupervised twin contrastive clustering of Chaos Game Representations (CGR) of DNA sequences, with convolutional neural networks (CNNs). To the best of our knowledge, CGRclust is the first method to use unsupervised learning for image classification (herein applied to two-dimensional CGR images) for clustering datasets of DNA sequences. CGRclust overcomes the limitations of traditional sequence classification methods by leveraging unsupervised twin contrastive learning to detect distinctive sequence patterns, without requiring DNA sequence alignment or biological/taxonomic labels. CGRclust accurately clustered twenty-five diverse datasets, with sequence lengths ranging from 664 bp to 100 kbp, including mitochondrial genomes of fish, fungi, and protists, as well as viral whole genome assemblies and synthetic DNA sequences. Compared with three recent clustering methods for DNA sequences (DeLUCS, iDeLUCS, and MeShClust v3.0), CGRclust is the only method that surpasses 81.70% accuracy across all four taxonomic levels tested for mitochondrial DNA genomes of fish. Moreover, CGRclust also consistently demonstrates superior performance across all the viral genomic datasets. The high clustering accuracy of CGRclust on these twenty-five datasets, which vary significantly in terms of sequence length, number of genomes, number of clusters, and level of taxonomy, demonstrates its robustness, scalability, and versatility. Conclusion: CGRclust is a novel, scalable, alignment-free DNA sequence clustering method that uses CGR images of DNA sequences and CNNs for twin contrastive clustering of unlabelled primary DNA sequences, achieving superior or comparable accuracy and performance over current approaches. CGRclust demonstrated enhanced reliability, by consistently achieving over 80% accuracy in more than 90% of the datasets analyzed. In particular, CGRclust performed especially well in clustering viral DNA datasets, where it consistently outperformed all competing methods. CGRclust: Chaos Game Representation for Twin Contrastive Clustering of Unlabelled DNA Sequences (July 8, 2024) § BACKGROUND DNA sequence classification is essential for genomic analyses, contributing to the identification of evolutionary relationships, functional elements, and genetic variants, through the detection of sequence similarity. Conventional methods for classifying DNA sequences typically depend on labor-intensive and expert-mediated labelling of primary DNA sequences to determine sequence origin, function, and type.
Furthermore, the stability of genome labels can be questioned, as taxonomic labels are not always definitive due to the absence of a clear taxonomic “ground truth” <cit.>. Moreover, most traditional DNA sequence classification and clustering methods are alignment-based. The time complexity of DNA sequence alignment <cit.>, coupled with a dependence on additional sequence information such as sequence homology <cit.>, makes these methods unsuitable for analyzing large or evolutionarily divergent genomic datasets. These challenges emphasize the importance of developing robust and flexible alignment-free unsupervised approaches to DNA sequence classification that do not rely on DNA sequence labels, annotation, or alignment. In 1990, Jeffery introduced a visual representation of DNA sequences, called Chaos Game Representation (CGR) <cit.>. CGR builds on Barnsley's 1988 algorithm <cit.> to map one-dimensional DNA sequences into two-dimensional images, using chaotic dynamics. Several studies <cit.> have demonstrated that CGRs can act as genomic signatures, defined by Karlin and Burge <cit.> as numerical quantities that can distinguish closely from distantly related organisms based on DNA sequence identity. The distance between CGRs of DNA sequences can be computed using various metrics, e.g., Euclidean distance, and can then be used for alignment-free comparisons and phylogeny construction to demonstrate evolutionary relationships within a group of organisms. Due to these properties, CGR has been considered a milestone in graphical bioinformatics <cit.>. Frequency CGR (FCGR) is a quantified variant of CGR: An FCGR at resolution k is a 2^k × 2^k grayscale image, wherein pixel intensities indicate k-mer frequencies. Visual FCGR representations of DNA sequences have been used in many alignment-free genome comparison applications, overcoming the quadratic runtime and scalability problems associated with alignment-based methods <cit.>. Furthermore, the use of FCGR permits alignment-free genomic sequence comparisons, when used in conjunction with digital signal processing techniques <cit.> and machine learning methods <cit.>. FCGR's ability to convert variable-length sequences into fixed-size dimensions is a key capability for machine learning, especially in DNA classification using convolutional neural networks (CNNs) <cit.>. In <cit.>, CNNs outperformed Support Vector Machines (SVMs) in classifying FCGR images of bacterial 16S gene sequences for both full-length sequences and 500 bp fragments. In <cit.>, a simple CNN achieved an accuracy of 87% in classifying FCGRs of 660 DNA sequences across eleven genomic datasets. In 2023, Avila et al. effectively classified SARS-CoV-2 DNA sequences into eleven clades using FCGR and CNNs <cit.>, achieving 96.29% accuracy utilizing a ResNet50 neural network <cit.> and outperforming Covidex <cit.>, a random forest-based clade assignment tool. A hybrid CGR-based approach for detecting COVID-19 was introduced in <cit.>, analyzing both whole and partial genome sequences of 7,951 human coronaviruses using AlexNet, Lasso algorithm, and KNN classifier. In spite of the effectiveness of these DNA classification methods, their reliance on labelled data is a significant limitation which highlights the urgent need for unsupervised algorithms that can perform well without the need of DNA sequence labels. 
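Because the remainder of this paper builds on FCGR, a minimal sketch of how a 2^k × 2^k FCGR matrix can be computed from a DNA sequence is given below. The corner assignment follows one common convention (A, C, G and T at the four corners of the unit square); other works, possibly including CGRclust's own implementation, may orient the axes differently, so this is an illustration rather than a specification.

```python
import numpy as np

# One common corner convention; other papers flip or rotate the axes.
CORNERS = {"A": (0, 0), "C": (0, 1), "G": (1, 1), "T": (1, 0)}

def fcgr(sequence, k=6):
    """Frequency Chaos Game Representation of a DNA sequence at resolution k:
    a 2^k x 2^k matrix whose entries count the k-mers falling in each cell."""
    n = 2 ** k
    img = np.zeros((n, n), dtype=np.int64)
    seq = sequence.upper()
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if any(c not in CORNERS for c in kmer):
            continue                      # skip k-mers containing N or other symbols
        x = y = 0
        for j, c in enumerate(kmer):
            cx, cy = CORNERS[c]
            x |= cx << j                  # later nucleotides fix coarser bits,
            y |= cy << j                  # mirroring the halving moves of the chaos game
        img[y, x] += 1
    return img
```

Rendered as a grayscale image, such a matrix is exactly the kind of fixed-size input that the CNN-based methods discussed below operate on.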
To address this gap, dense neural networks have been used in conjunction with FCGR for the unsupervised clustering of DNA sequences in large, diverse datasets (up to 9,027 genomes) across different taxonomic levels and genetic distances <cit.>. However, in both DeLUCS <cit.> and iDeLUCS <cit.>, the neural networks first flattened each two-dimensional FCGR into a one-dimensional k-mer frequency vector. As a result, the features of two-dimensional FCGR images were not fully exploited in these methods. Another approach to clustering unlabelled DNA sequences, MeShClust v3.0 <cit.>, used the mean-shift algorithm for generating pairwise identity scores without alignment. MeShClust v3.0 is built on its predecessors, MeShClust v1.0 <cit.> (a DNA clustering method) and Identity <cit.> (a sequence alignment identity score predictor), and can efficiently cluster both long sequences, up to 3.7 million basepairs, and large datasets containing up to a million sequences. MeShClust v3.0 was tested on twenty-seven datasets, including twenty-two synthetic datasets and five real biological datasets, such as the human microbiome and maize transposons. In spite of this progress, in <cit.> it was shown that DeLUCS, iDeLUCS, and MeShClust v3.0 underperform in clustering astrovirus sequences when compared to K-means++, even though they were previously validated on other viral datasets. These limitations highlight the need for the development of more robust approaches that can effectively manage the complexities of genetic diversity of a wide range of genomic datasets. This paper presents CGRclust, a DNA sequence clustering method designed to identify discriminative features of DNA sequences, using two-dimensional FCGR images as the input to convolutional neural networks (CNNs), to fully leverage the information in this powerful DNA encoding. The clustering process in this study employs twin contrastive learning (TCL) <cit.>, a method proven effective in clustering images and text, which optimizes two contrastive learning objectives simultaneously—one at the instance-level and another at the cluster-level. CGRclust's accuracy was evaluated across twenty-five datasets against DeLUCS <cit.>, iDeLUCS <cit.>, and MeShClust v3.0 <cit.>. Its clustering capabilities were tested on 2,688 mtDNA genomes of Cypriniformes, as well as five different viral genome datasets, including astroviruses, dengue virus, hepatitis C virus, and HIV-1. Furthermore, CGRclust was also assessed using mtDNA genomes from insects, protists, and fungi <cit.>, along with synthetic DNA sequences <cit.>. All DNA sequences were unlabelled, with their taxonomic labels used solely for post-hoc accuracy evaluation. In summary, CGRclust is a novel, scalable, alignment-free clustering method that uses FCGR images and CNNs, for twin contrastive clustering of unlabelled primary DNA sequences. The main contributions of this paper are: * Being, to best of our knowledge, the first application of twin contrastive learning to the clustering of DNA sequences, without requiring sequence homology, sequence labels, or sequence-length similarity. * Highly accurate clustering of a current dataset of 2,688 unlabelled fish mtDNA assemblies (order Cypriniformes). Clustering was performed at four different taxonomic levels, and CGRclust consistently achieved accuracy greater than 81.70% at all levels. This was either higher than, or comparable to, clustering accuracies of the other state-of-the-art clustering methods (DeLUCS, iDeLUCS, MeShClust v3.0). 
* Highly accurate clustering of several current datasets of unlabelled viral whole genomes (Astroviridae family into genera; dengue, HCV, HIV-1 species into virus subtypes), with accuracies ranging from 81.77% to 100%, surpassing the other state-of-the-art clustering methods. * Effective handling of challenging cases, such as unbalanced data, and scenarios with a high number of clusters and a small number of samples per cluster. * Superior or competitive accuracies compared to state-of-the-art methods on their benchmark datasets of unlabelled DNA sequences, e.g., 73.56% for insect mtDNA, 85.50% for protist mtDNA, and 97.10% for fungi mtDNA. Furthermore, CGRclust consistently exceeded 92.26% accuracy in clustering unlabelled synthetic DNA sequences of different lengths and identities. § METHODS This section starts with a description of the datasets utilized in this study. This is followed by an overview of the proposed computational pipeline for contrastive clustering of DNA sequences in CGRclust. Chaos Game Representation (CGR), the graphical representation of DNA sequences used in this paper, is then defined, together with its quantified variant FCGR. Next, a description of the data augmentation strategies used for this graphical representation (generation of mimic sequences) is presented, serving as the initial component of CGRclust's pipeline. Afterwards, the core concept of twin contrastive learning, details about the backbone model, and the majority voting scheme adapted to clustering FCGRs of DNA sequences are described. Lastly, details of implementation and testing are provided. §.§ Datasets To comprehensively evaluate the performance of CGRclust in clustering DNA sequences, we strategically selected four groups of datasets, comprising diverse genomic data both real and synthetic. The selection rationale was driven by the need to assess the clustering method across different levels of taxonomy with different degrees of relatedness, genomic conservation, and evolutionary dynamics. The Group 1 dataset includes mitochondrial DNA of fish, while the Group 2 dataset includes viral whole genomes. Additionally, to facilitate direct comparisons with established methodologies, we incorporated datasets previously analyzed in <cit.> and <cit.> (Group 3 and Group 4 datasets, respectively). The Group 1 dataset comprised complete mitochondrial DNA (mtDNA) sequences of Cypriniformes (an order of ray-finned fish). This dataset was retrieved from the National Center for Biotechnology Information (NCBI) on Jan 30, 2024, with a filter selecting mtDNA sequences of length between 4 kbp and 25 kbp. Following the removal of “partial” and “unverified” genomes, 2,688 complete mitochondrial genomes of Cypriniformes were collected. At each taxonomic level, the cluster with the highest number of sequences was selected for the lower taxonomic level clustering task. Due to significant variability and imbalance in the number of available sequences across the four taxonomic levels, sequences from clusters with fewer than 50 sequences were discarded. To address the imbalance, in the first three computational tests (Tests 1-3), we established a threshold based on the minimum number of sequences available in a cluster and randomly selected an equivalent number of sequences from the other clusters. Balancing the clusters was not needed in Test 4, as the dataset was already evenly distributed. Table <ref> summarizes the dataset details for the Group 1 dataset (Cypriniformes mtDNA). 
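A sketch of the cluster-balancing procedure described above (discarding clusters with fewer than 50 sequences, then randomly subsampling every remaining cluster down to the size of the smallest one) is shown below. The function and parameter names, as well as the use of a fixed random seed, are our own assumptions for illustration.

```python
import random

def balance_clusters(clusters, min_cluster_size=50, seed=0):
    """Drop clusters smaller than `min_cluster_size`, then randomly subsample
    every remaining cluster down to the size of the smallest one.
    `clusters` maps a taxon label to its list of sequence identifiers."""
    rng = random.Random(seed)
    kept = {label: seqs for label, seqs in clusters.items()
            if len(seqs) >= min_cluster_size}
    if not kept:
        return {}
    target = min(len(seqs) for seqs in kept.values())
    return {label: rng.sample(seqs, target) for label, seqs in kept.items()}
```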
The selection of this group of datasets was motivated by the conservative nature of mtDNA, which is predominantly coding and thus provides a stable framework for assessing clustering methodologies at multiple taxonomic levels. The uniformity of high conservation over the mtDNA genome compared to the regional variation in sequence conservation of the nuclear genome, coupled with its wide use in phylogenetic studies <cit.>, makes mtDNA data an ideal candidate for initial clustering evaluations. To further demonstrate the effectiveness of CGRclust, we assessed its performance across five viral whole genome datasets in the Group 2 dataset: an updated version of the virus family Astroviridae genomes analyzed in <cit.> (Test 5) and its balanced version (Test 6), an updated version of whole genomes of dengue virus (Test 7), hepatitis C virus (HCV) (Test 8), and human immunodeficiency virus 1 (HIV-1) (Test 9) previously classified in <cit.> with supervised machine learning methods. Table <ref> outlines the details of the Group 2 dataset. In Test 5, 1,089 complete astrovirus genomes were collected for taxonomic clustering of the sequences from family to genus level. Test 6 uses a cluster-balanced variant of the astrovirus dataset to address the initial label imbalance, thereby ensuring that the clustering results are not skewed by this disparity. All astrovirus sequences were downloaded from NCBI on April 4, 2024, with a filter selecting genome lengths ranging between 5 kbp and 10 kbp. Furthermore, in Tests 7-9 we addressed the clustering of viral sequences at a lower taxonomic level, from species to subtypes. This categorization, known as viral subtyping, is crucial for understanding intraspecific variation, tracking epidemiological trends, and developing targeted treatments or vaccines. The dengue virus sequences used in Test 7 were obtained from <https://www.ncbi.nlm.nih.gov/genomes/VirusVariation/Database/nph-select.cgi?taxid=12637> using the query parameters “Nucleotide”, “Full-length sequences only”, and “Collapse identical sequences”, resulting in a dataset of 5,868 sequences. Following cluster balancing, we obtained a dengue dataset comprising 1,628 dengue virus whole genomes spanning four distinct subtypes. The HCV genomes utilized in Test 8 were sourced from the LANL sequence database, accessible at <https://hcv.lanl.gov/components/sequence/HCV/search/searchi.html>, with the query settings “Excluding recombinants”, “Excluding ‘no genotype‘”, “Genomic region: complete genome”, and “Excluding problematic”, resulting in 3,612 whole HCV genomes. After removing clusters with fewer than 100 sequences and balancing the dataset, we obtained 950 full HCV genomes spanning five different subtypes. Finally, the HIV-1 genomes in Test 9 were retrieved from the Los Alamos (LANL) sequence database, accessible at <https://www.hiv.lanl.gov/components/sequence/HIV/search/search.html> with query parameters “virus: HIV-1, genomic region: complete genome, excluding problematic,” which resulted in a dataset comprising 20,525 HIV-1 full genomes. We then removed HIV-1 subtypes with fewer than 100 sequences and balanced the remaining subtypes, thus obtaining a dataset comprising 13,000 HIV-1 whole genome sequences spanning 13 subtypes. The three viral datasets used in Tests 7-9 were downloaded on April 1, 2024. Viral genomes are characterized by higher mutation rates and greater evolutionary diversity compared to mtDNA, presenting distinct challenges for clustering algorithms.
This variability tests the robustness and adaptability of CGRclust under conditions of rapid genomic changes and diverse evolutionary pressures. Next, we evaluated the performance of CGRclust on three core datasets used in <cit.> (Group 3 dataset: mtDNA of Insects, Protists, and Fungi), as well as 12 synthetic DNA datasets analyzed in <cit.> (Group 4 dataset: synthetic sequences). Including these datasets allowed for direct comparisons with existing studies, providing benchmarks against established clustering methods. The Group 3 dataset is described in Table <ref>. Note that, given the observed mixed taxonomic levels used in <cit.> for clustering the Fungi dataset, and the fact that both subphyla “Pezizomycotina” and “Saccharomycotina” belong to phylum Ascomycota, we divided this clustering task into two parts, Tests 12 and 13. The first task (Test 12) involved clustering kingdom Fungi into phyla “Ascomycota” and “Basidiomycota”, while the second task (Test 13) focused on clustering phylum Ascomycota into subphyla “Pezizomycotina” and “Saccharomycotina”. Details about the Group 4 dataset <cit.> are presented in Table <ref>. The sequence lengths of six datasets, each beginning with the prefix “Medium-” range between 653 and 2,062 bp, while the other six datasets, prefixed with “Long-”, span from 1,393 to 4,049 bp. The numerical values ranging from 60 to 97 in the dataset labels represent the identity score, a measure of designed relatedness determined by the ratio of identical nucleotides in two sequences relative to the alignment length (including gaps). These synthetic sequences, designed with different sequence lengths and identity score thresholds, evaluate the performance of CGRclust under controlled, and different conditions. For further details on Group 3 and Group 4 datasets, the reader is referred to, <cit.>, and <cit.>, respectively. §.§ Method overview The contrastive clustering method proposed in this paper, CGRclust, utilizes a quantified variant of CGR, a graphical encoding of DNA sequences introduced by Jeffrey in <cit.>. This quantified DNA encoding, referred to as FCGR, represents a DNA sequence at resolution k as a two-dimensional unit square image. In an FCGR, the intensity of each pixel signifies the frequency of a particular k-mer in the input DNA sequence <cit.>. To capture the positional information (location of points) within FCGR images, a CNN model was integrated into the pipeline. CGRclust enhances the clustering performance by leveraging unsupervised contrastive learning. Contrastive learning is a powerful technique that can learn informative representations by comparing how similar or different pairs of examples are, rather than relying solely on raw data or labelled examples <cit.>. This approach helps the model understand the underlying structures of the data by pulling similar instances (elements of a so-called “positive pair") closer, while pushing dissimilar ones (elements of a so-called “negative pair") farther apart in the representation space. Here, a positive pair is defined as consisting of two “augmented" versions of an input DNA sequence, called mimic sequences. Mimic sequences are generated by the algorithm from an original DNA sequence so as to be similar to the original, or related to it in a meaningful way. In this context, a negative pair is defined as any other pair of sequences in the dataset. 
The clustering process in this study takes advantage of the concept of twin contrastive learning (TCL) <cit.>, a method that simultaneously optimizes two contrastive learning objectives, one at the instance-level and another at the cluster-level, as detailed below. Figure <ref> illustrates an overview of the proposed CGRclust pipeline. The pipeline consists of four main components: 1) data augmentation (generation of mimic sequences) for FCGR positive pair construction, 2) backbone model for projection into a latent feature space , 3) instance-level contrastive head (ICH), and 4) cluster-level contrastive head (CCH). The first component is shown in the left panel of Figure <ref>, while the other three components are in the middle panel. Initially, pairs of mimic sequences constructed during the data augmentation phase (pipeline component 1), and assumed to belong to the same cluster, are projected into a latent feature space using CNNs (pipeline component 2). It is important to note that in the training phase, the two mimic sequences constructed from each original sequence were used as members of a positive pair, while the original sequence was used exclusively in the testing phase. Subsequently, the ICH (pipeline component 3) and CCH (pipeline component 4) conduct instance-level and cluster-level contrastive learning. ICH is designed to enhance the similarity of representations of positive pairs in the latent feature space, while making the representations of negative pairs more distinct. On the other hand, CCH's goal is to effectively separate clusters of data points, ensuring that each cluster is distinctly different from the others. The two components (ICH and CCH) are simultaneously optimized through twin contrastive learning (TCL) by operating on the row (ICH) and column (CCH) spaces of the feature matrix, respectively. Through this simultaneous optimization, CGRclust enhances the representation's quality by handling both detailed (in ICH) and broad (in CCH) distinctions in the data, all without relying on pre-defined taxonomic labels. As the training process involves randomized algorithms leading to high variance outcomes depending on the different initializations and random seeds, a majority voting scheme is then employed (right panel of Figure <ref>), which uses the outcomes of five distinct CNN models with different initializations to determine the final cluster assignment for each sequence. To evaluate the quality of the clusters, an additional step, independent from the previous components, is conducted. This step employs the Hungarian algorithm <cit.> to determine the optimal correspondence between the cluster assignments learned by the CGRclust and the actual taxonomic cluster labels. Subsequently, it evaluates the accuracy of the CGRClust predictions. §.§ Chaos Game Representation (CGR) of DNA sequences In the remainder of this paper, the DNA alphabet Δ is the set {A, C, G, T} corresponding to the four different nucleotides, Adenine, Cytosine, Guanine, and Thymine, the building blocks that form a DNA sequence. A CGR square is a square with vertices (corners) in the set V = {(1, 1), (1, -1), (-1, -1), (-1, 1) }. The corners of the CGR square are labelled as follows: the bottom left corner is labelled by A, the top left corner is labelled by C, the top right corner is labelled by G, and the bottom right corner labelled by T. Formally, the labelling function l: Δ→ V is defined as l(A) = (-1, -1), l(C) = (-1, 1), l(G) = (1, 1) and l(T) = (1, -1). 
Let s = a_1 a_2 … a_n, where a_i ∈Δ for all 1≤ i ≤ n, be a DNA sequence of length n. A CGR representation X_s of the sequence s is the set of points X_s = {p_0, p_1, …, p_n}⊂ℝ^2, situated inside the CGR square, whose coordinates are defined recursively by p_0 = (x_0, y_0) = (0,0) and p_i = (p_i-1 + l(a_i))/2, for all 1 ≤ i ≤ n. The points plotted in the CGR correspond to nucleotide occurrences in the sequence, and the last plotted point of a CGR could in theory (at infinite resolution) be used to recover the original DNA sequence <cit.>. Frequency CGR (FCGR) is a quantified variant of CGR: an FCGR of resolution k is a 2^k × 2^k grayscale image wherein the intensity of each pixel directly corresponds to the frequency of its corresponding k-mer. FCGR is a compressed representation of DNA sequences, with the compression degree indicated by the resolution k. It is obtained by dividing the CGR image into smaller, equally-sized squares and counting the number of points (or the frequency) within each square. Since FCGR smooths the data into a grayscale image, rather than plotting each point individually, it is inherently less noisy than the traditional CGR. Similar to CGR, FCGR can identify over- and under-representation of patterns (specific arrangements or sequences of nucleotides) in DNA sequences and, thereby, can be used to determine the degree of identity between the DNA sequences of different species <cit.>. For a formal definition of FCGR at resolution k, the reader is referred to Supplementary Material 1. Figure <ref> illustrates some examples of FCGRs at resolution k=8 (selected for visualization purposes) of real genomic DNA sequences, side by side with FCGRs of computer-generated DNA sequences.

§.§ DNA data augmentation: Mimic sequences

Data augmentation plays a critical role in contrastive clustering by significantly enhancing the model's ability to learn invariant representations from limited data. By adding different types of changes to the training data (thereby generating positive pairs), data augmentation helps the model to focus on the key features that define each cluster, avoiding the trap of fitting too closely to random noise or unimportant details. Consequently, CGRclust is based on constructing positive pairs and negative pairs through data augmentations. A pair of positive data points is a pair of mimic sequences that are considered to be similar or related in some meaningful way (e.g., belonging to the same cluster), while a pair of negative data points is a pair of sequences that are considered to be dissimilar. We adopted an approach similar to <cit.>, using an effective augmentation strategy that mixes weak and strong transformations, as this strategy previously showed superior performance on both image and text data when combined with TCL. For each DNA sequence input s_i, we define transformations t and t' as follows: t and t' are functions from the domain of DNA sequences to the set of augmented DNA sequences, with t applying a set of transformations from an augmentation family T, and t' applying a set of transformations from an augmentation family T'. These transformations are designed to modify the input sequence s_i in distinct ways, generating a positive pair represented as (s̃_2i-1, s̃_2i), where s̃_2i-1 = t(s_i) and s̃_2i = t'(s_i).
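To make these definitions concrete before detailing the augmentations, the sketch below computes the CGR point sequence and the FCGR of resolution k by direct k-mer counting. The nucleotide-to-cell index mapping, the function names, and the skipping of ambiguous bases (e.g., N) are illustrative assumptions; the authors' own implementation is available in the CGRclust repository.

```python
import numpy as np

CORNERS = {"A": (-1.0, -1.0), "C": (-1.0, 1.0), "G": (1.0, 1.0), "T": (1.0, -1.0)}

def cgr_points(seq: str) -> np.ndarray:
    """CGR of a DNA sequence: p_0 = (0, 0), p_i = (p_{i-1} + l(a_i)) / 2."""
    pts = np.zeros((len(seq) + 1, 2))
    for i, base in enumerate(seq.upper(), start=1):
        pts[i] = (pts[i - 1] + CORNERS[base]) / 2.0
    return pts

def fcgr(seq: str, k: int = 6) -> np.ndarray:
    """FCGR of resolution k: a 2^k x 2^k grid of k-mer counts.

    Each k-mer occupies one cell of the grid; the cell indices are built bit by
    bit, with the last nucleotide selecting the coarsest quadrant, exactly as in
    the CGR recursion. Row 0 corresponds to the top (C/G) half of the square.
    """
    n = 2 ** k
    grid = np.zeros((n, n))
    row_bit = {"A": 1, "C": 0, "G": 0, "T": 1}   # 1 = bottom half (A, T)
    col_bit = {"A": 0, "C": 0, "G": 1, "T": 1}   # 1 = right half (G, T)
    seq = seq.upper()
    for start in range(len(seq) - k + 1):
        kmer = seq[start:start + k]
        if any(b not in CORNERS for b in kmer):   # skip ambiguous bases such as N
            continue
        r = c = 0
        for b in reversed(kmer):                  # last base = most significant bit
            r = (r << 1) | row_bit[b]
            c = (c << 1) | col_bit[b]
        grid[r, c] += 1
    return grid
```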
Note that direct image transformations traditionally used in computer vision for data augmentation (image flipping, cropping, or rotation), if applied to CGR/FCGR images, do not correspond to biologically meaningful or minor changes in the original DNA sequence. Indeed, such transformations could result in drastic and non-intuitive sequence changes, since the CGR/FCGR representations depend on the sequence's nucleotide order and composition. Thus, in CGRclust we opted to modify raw DNA sequences to create mimic sequences. This approach ensures that any resulting image alterations are meaningful, and mirror potential natural genetic variations in sequence composition. In CGRclust pipeline, data augmentations were implemented through functions t and t', belonging to the augmentation families T (weak augmentations) and T' (strong augmentations) respectively. Two types of data augmentation were explored, mutation and fragmentation. Both mutation and fragmentation of a DNA sequence, when appropriately applied, can alter the sequence while still maintaining patterns within its FCGR that are very similar (but not identical) to the FCGR of the original DNA sequence. Mutation, denoted by mutate(μ) has a mutation rate μ as parameter, and performs two types of substitution mutations (transitions and transversions) on the original DNA sequence. The probability of transitions is defined as being μ while the probability of transversions is 0.5 * μ, as the mutational hypothesis holds that the transition mutation rates are higher than the transversion rates in practice <cit.>. Fragmentation, denoted by frag(len), has the length len of the desired fragment as parameter. Given a DNA sequence of length n as input, fragmentation outputs a random fragment of length len of the input sequence (len ≤ n). In each computational experiment, the augmentation functions t and t' can be either mutation or a fragmentation. If the selected augmentation function is mutation, then t is the function mutate(μ_1) (weak), and t' is the function mutate(μ_2) (strong), where μ_1 <μ_2. Similarly, if the selected augmentation function is fragmentation, then the function t is frag(len_1) (weak), while t' is the function frag(len_2) (strong), where len_2 < len_1. To evaluate the impact of different data augmentation strategies on CGRclust, both mutation and fragmentation were explored, each with different values for their respective parameters. Details on these computational experiments can be found in Supplementary Material 2. The final findings suggest that mutation outperforms fragmentation as a data augmentation function, and its optimal parameters were empirically determined to be μ_1 = 10^-4 for the weak augmentation, and μ_2 = 10^-2 for the strong augmentation. Thus, mutation with these parameters was used as the default data augmentation and parameters for all computational experiments in this study. Given the constructed pairs, a shared backbone f(·) is used to extract features h from the augmented samples (mimic sequences) through h_2i-1 = f(X_s̃_2i-1) and h_2i = f(X_s̃_2i). To extract the important features of FCGR images, the backbone model was used to convert the two-dimensional input FCGRs into one-dimensional embeddings. Details about the backbone model used to process FCGR can be found in section 2.6 (Backbone model architecture). 
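The mutation augmentation just described can be sketched as follows, with transitions applied with probability μ and transversions with probability 0.5·μ per site, and the default weak/strong rates μ_1 = 10^-4 and μ_2 = 10^-2 reported above. Function names and the handling of non-ACGT characters (left unchanged here) are illustrative assumptions rather than the authors' implementation.

```python
import random

TRANSITION = {"A": "G", "G": "A", "C": "T", "T": "C"}
TRANSVERSIONS = {"A": "CT", "G": "CT", "C": "AG", "T": "AG"}

def mutate(seq: str, mu: float, rng: random.Random) -> str:
    """Apply substitution mutations per site: transitions with probability mu,
    transversions with probability 0.5 * mu."""
    out = []
    for base in seq:
        r = rng.random()
        if base in TRANSITION and r < mu:                 # transition
            out.append(TRANSITION[base])
        elif base in TRANSITION and r < 1.5 * mu:         # transversion (two choices)
            out.append(rng.choice(TRANSVERSIONS[base]))
        else:
            out.append(base)
    return "".join(out)

def make_mimic_pair(seq: str, mu_weak: float = 1e-4, mu_strong: float = 1e-2,
                    seed: int | None = None) -> tuple[str, str]:
    """Weak/strong augmented views (t(s), t'(s)) forming one positive pair."""
    rng = random.Random(seed)
    return mutate(seq, mu_weak, rng), mutate(seq, mu_strong, rng)
```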
§.§ Twin contrastive learning (TCL) Inspired by <cit.>, during the training phase, the backbone, ICH, and CCH undergo joint optimization based on the following twin contrastive loss function: L_train = α L_ins + (1 - α) L_clu Here, L_ins denotes the instance-level contrastive loss computed via ICH, to increase the similarity between positive pairs and decrease it between negative pairs. Meanwhile, L_clu represents the cluster-level contrastive loss, determined through CCH, focusing on refining the pairwise similarities of cluster representations between weak and strong data augmentations. α represents a weighting parameter that balances the contributions of the instance-level contrastive loss (L_ins) and the cluster-level contrastive loss (L_clu) in the overall training loss (L_train). The parameter α controls the relative importance of the two components during optimization. To determine its optimal value, we tested different values for this hyperparameter and it was empirically determined that the value of 0.7 for α consistently delivered either the highest or close to the highest accuracy. Furthermore, it was observed that values within the range of 0.5 to 0.8 generally yielded superior outcomes, suggesting a robust zone of performance for α across different data conditions. For additional details, the reader is referred to Supplementary Material 2. Optimal clustering would classify instance pairs within the same class as positive and those across classes as negative. Yet, in the absence of predefined labels, we adapt by forming mimic sequence instance pairs via data augmentations. Given a batch size of N, we subject each DNA sequence, s_i, to two variants of data augmentations, generating 2N augmented samples expressed as s̃_1, s̃_2,...,s̃_2i-1, s̃_2i,...s̃_2N. Before employing ICH and CCH, we map features into two different subspaces using two-layer nonlinear Multilayer Perceptrons (MLPs), symbolized as g_I(·) and g_C(·), respectively. The InfoNCE loss <cit.>, which includes a computational parameter so-called “temperature parameter” (τ) to scale the contrastive loss, is applied to fine-tune both contrastive mechanisms. A comprehensive hyperparameter optimization of the twin deep clustering model focused on the instance- and cluster-level temperature parameters (τ_I and τ_C) within the ICH and CCH was conducted. Examining different values for each temperature parameter in the range [0.1, 1], it was empirically determined that τ_I=0.1 and τ_C=1.0 consistently yield relatively high accuracy across all datasets. This advancement aligns with the hypothesis that a lower τ_I encourages individual instance differentiation, aligning with the ICH's goal, while a higher τ_C enhances group discrimination, mirroring the CCH's objective <cit.>. While a confidence-based boosting strategy, which involves iterative adjustments to the learning process based on model prediction confidence, yielded a slight enhancement in the clustering outcomes of <cit.>, no significant improvement was observed for FCGR clustering. Therefore, we opted against incorporating this step to maintain pipeline simplicity and efficiency. For additional information about TCL please see Supplementary Material 3 and <cit.>. §.§ Backbone model architecture The augmented (mimic) DNA sequence pairs of FCGRs (X_s̃_2i-1, X_s̃_2i) serve as inputs for training multiple independent instances of a backbone model, ICH, and CCH. 
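Schematically, the backbone f(·), the two two-layer MLP heads g_I and g_C, and the weighted loss above fit together as in the following PyTorch sketch, which reuses the values reported in the text (α = 0.7, τ_I = 0.1, τ_C = 1.0). The instance-level loss is abbreviated to a symmetric cross-view InfoNCE and the entropy regularization of the cluster head is omitted, so this is a simplified illustration rather than the exact TCL objective; layer widths and the number of clusters are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwinHeads(nn.Module):
    """Shared backbone f(.) with an instance head g_I and a cluster head g_C."""
    def __init__(self, backbone: nn.Module, feat_dim: int,
                 instance_dim: int = 128, n_clusters: int = 4):
        super().__init__()
        self.backbone = backbone
        self.g_i = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU(),
                                 nn.Linear(feat_dim, instance_dim))
        self.g_c = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU(),
                                 nn.Linear(feat_dim, n_clusters))

    def forward(self, x):
        h = self.backbone(x)                       # (N, feat_dim) for one augmented view
        z = F.normalize(self.g_i(h), dim=1)        # instance features (rows of the matrix)
        p = F.softmax(self.g_c(h), dim=1)          # soft cluster assignments (columns)
        return z, p

def info_nce(a, b, tau):
    """Symmetric InfoNCE between two aligned sets of L2-normalised vectors."""
    logits = a @ b.t() / tau
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def twin_loss(z_weak, z_strong, p_weak, p_strong,
              alpha=0.7, tau_i=0.1, tau_c=1.0):
    """L_train = alpha * L_ins + (1 - alpha) * L_clu (entropy term omitted)."""
    l_ins = info_nce(z_weak, z_strong, tau_i)                       # rows: instances
    l_clu = info_nce(F.normalize(p_weak.t(), dim=1),
                     F.normalize(p_strong.t(), dim=1), tau_c)       # columns: clusters
    return alpha * l_ins + (1 - alpha) * l_clu
```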
Given that the genomic datasets we are working with are notably smaller in scale compared to those typically encountered in computer vision, we found that common architectures such as ResNet34 and ResNet50, which have demonstrated efficacy in various visual tasks, were not well-suited as backbone models for genomic datasets. Therefore, we opted for a simpler yet versatile architecture that is better suited for clustering FCGRs of DNA sequences. The backbone model architecture, as shown in Figure <ref>, is composed of a single convolutional block featuring two convolutional layers. Each convolutional layer employs a kernel size of 7, a stride of 2, and a padding of 1. Following each convolutional layer is a Rectified Linear Unit (ReLU) activation function and a batch normalization layer for data normalization prior to being passed to the subsequent layer. Subsequently, the output of the final batch normalization layer undergoes max pooling with a kernel size of 2 to downsample the data across its spatial dimension by selecting the maximum value within each 2x2 window. Lastly, to transform the multidimensional input into a one-dimensional embedding, a flattening layer is applied, followed by a linear layer configured to match the desired output dimension. §.§ Majority Voting Scheme The integration of ensemble learning, particularly through majority voting, has significantly improved the accuracy of genomic sequence classification, as demonstrated in <cit.>. Majority voting, or hard voting, relies on the most frequent prediction across models, while soft voting considers the probability distributions of outcomes, often yielding higher precision. To optimize the performance of CGRclust, we employed five instances of the backbone model along with instance- and cluster-level contrastive heads. Each model copy was initialized randomly with distinct random seeds. Both soft and hard voting applied to CGRclust reduce variance due to random initialization and enhance model convergence thereby boosting the robustness and reliability of clustering predictions. Supplementary Material 2 discusses the impact of majority voting on clustering the Group 1 dataset. Although both voting methods enhanced CGRclust's performance, soft voting showed a slightly higher improvement. Consequently, we adopted soft voting as our default method. This approach integrates classifiers' certainty levels into the final prediction, thus yielding more reliable and potentially more accurate results. §.§ Experimental settings and implementation Throughout the training process, all CGRclust's hyperparameters remained constant and consistent across all tests, having been empirically chosen to achieve optimal performance. Prior to input into the network, all FCGRs underwent normalization. This process involved first standardizing each FCGR matrix's value by the min-max normalization to scale the features to the range of [0, 1], thus mitigating the impact of sequence length on pixel intensity. Subsequently, the FCGR matrices were normalized by Z-score normalization to scale features so that they have the properties of a standard normal distribution with a mean of 0 and a standard deviation of 1. This normalization enhanced the stability and convergence of the model. We utilized the Adam optimizer <cit.> with an initial learning rate set to 7e-5 and a weight decay of 1e-4 to jointly optimize both contrastive heads and the backbone model. In our observations, the implementation of the scheduler did not yield significant improvements. 
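A minimal PyTorch sketch of the backbone block of Section 2.6, together with the two-step FCGR normalization just described, is given below; the channel width, output dimension, and FCGR resolution are placeholders, since the text does not fix them, and the code is an illustration rather than the authors' implementation.

```python
import torch
import torch.nn as nn

def normalize_fcgr(x: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Min-max scale each FCGR to [0, 1], then z-score it (per image)."""
    lo = x.amin(dim=(-2, -1), keepdim=True)
    hi = x.amax(dim=(-2, -1), keepdim=True)
    x = (x - lo) / (hi - lo + eps)
    return (x - x.mean(dim=(-2, -1), keepdim=True)) / (x.std(dim=(-2, -1), keepdim=True) + eps)

class FCGRBackbone(nn.Module):
    """One convolutional block (two conv layers, kernel 7, stride 2, padding 1),
    each followed by ReLU and batch normalization, then 2x2 max pooling,
    flattening, and a linear projection to the embedding dimension."""
    def __init__(self, k: int = 6, channels: int = 64, out_dim: int = 512):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=7, stride=2, padding=1),
            nn.ReLU(inplace=True), nn.BatchNorm2d(channels),
            nn.Conv2d(channels, channels, kernel_size=7, stride=2, padding=1),
            nn.ReLU(inplace=True), nn.BatchNorm2d(channels),
            nn.MaxPool2d(kernel_size=2),
        )
        with torch.no_grad():                      # infer the flattened size for resolution k
            n_flat = self.features(torch.zeros(1, 1, 2 ** k, 2 ** k)).numel()
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(n_flat, out_dim))

    def forward(self, x):                          # x: (B, 1, 2^k, 2^k) normalized FCGRs
        return self.head(self.features(x))
```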
Furthermore, the selection of batch size, empirically set at 512, is a critical factor during training. This importance stems from the batch-wise operation of the unsupervised learning process, which is essential for determining the output distribution. Inadequate batch sizes may fail to accurately represent the true data distribution, resulting in the dominance of the entropy term in the loss function and potentially leading to suboptimal solutions. The dimensionality of ICH was determined empirically to be 128, aiming to preserve discriminative information within the data. The dimensionality of CCH was determined by the target cluster number. For benchmarking CGRclust's performance against state-of-the-art methods in DNA sequence clustering, we chose three recent alignment-free clustering methods noted for their effectiveness in clustering a variety of genomic datasets: DeLUCS <cit.>, iDeLUCS <cit.>, and MeShClust v3.0 <cit.>. For both DeLUCS and iDeLUCS, we applied the default hyperparameters, and the accuracies presented in the results section are based on these settings. MeShClust v3.0, a density-based clustering tool, inherently does not allow the pre-definition of cluster numbers. Consequently, besides the automatic selection of identity thresholds—which often leads to a discrepancy between the expected and actual cluster counts—we tested several identity score thresholds to select an optimal value that resulted in the desired number of clusters for each dataset. The optimal threshold values for each of the thirteen real datasets tested are detailed in Supplementary Material 4. CGRclust's pipeline is fully implemented in Python 3.10, and the source code is publicly available in the GitHub repository <https://github.com/fatemehalipour/CGRclust>. All tests with CGRclust and DeLUCS were conducted on a node within the Béluga cluster at Compute Canada, which features dual Intel Gold 6148 Skylake CPUs @ 2.4 GHz, 186 GB RAM, and an NVIDIA Tesla V100 SXM2 GPU with 16 GB of memory. Following <cit.> authors' recommendation, iDeLUCS was executed on Google Colab using an NVIDIA Tesla T4 GPU with 16 GB of memory. § RESULTS §.§ Qualitative performance of twin contrastive learning A qualitative analysis was first employed to assess the effectiveness of instance-level and cluster-level TCL, as implemented in CGRclust for clustering mtDNA sequences in Test 1. The dynamic learning process during the training phase is shown in Figure <ref>, illustrating how the model develops discriminative representations and accurately determines cluster assignments. This progression is documented across epochs and displayed at five timestamps. In Figure <ref>, the total number of clusters is established at three, corresponding to the points of a triangle, where each point signifies a taxonomic cluster. The placement of each point is derived from its three-dimensional probability vector, and different colors indicate the three ground truth taxonomic labels in Test 1. At the beginning, sequences are located at the triangle's center, reflecting an equal chance of being assigned to any of the three clusters. As training proceeds, the model increasingly assigns sequences to appropriate clusters, moving similar sequences closer to their respective vertex/cluster with greater probability. Notably, sequences that are assigned the same probability vectors will have their points overlap. 
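The accuracies reported in the next subsection are computed post hoc by optimally matching the learned cluster indices to the ground-truth taxonomic labels with the Hungarian algorithm (Section 2.2) and then counting correct assignments; a minimal sketch of this evaluation step, using SciPy's linear_sum_assignment, is shown below.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Best accuracy over all one-to-one mappings of predicted clusters to labels.

    Both arrays must contain integer-encoded labels / cluster indices.
    """
    n_classes = max(y_true.max(), y_pred.max()) + 1
    cost = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1                              # contingency table
    rows, cols = linear_sum_assignment(cost, maximize=True)
    return cost[rows, cols].sum() / len(y_true)
```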
§.§ Quantitative performance analysis and comparison with other methods

In this section we analyze the performance of CGRclust and compare it with three other established clustering methods for DNA sequences, DeLUCS <cit.>, iDeLUCS <cit.>, and MeShClust v3.0 <cit.> (with both manual and automatic selection of the identity score threshold). Note that the ground truth labels are used post-hoc and for evaluation purposes only, and they were not utilized during the clustering process. Table <ref> presents a summary of the clustering accuracies for the Group 1 dataset described in Table <ref> (Cypriniformes mtDNA) across Tests 1-4. The reader is referred to Supplementary Material 5 for the confidence intervals of the CGRclust clustering accuracies of all clustering tests. The accuracies of CGRclust were achieved using the default hyperparameters over 150 epochs. As Table <ref> and Table S5.1 in Supplementary Material 5 show, CGRclust consistently achieves either comparable (within the confidence interval) or the highest accuracy across all four taxonomic levels. Specifically, CGRclust outperforms DeLUCS by 3.21% to 12.95% across different tests. In contrast to the generally superior performance of CGRclust, iDeLUCS shows competitive results in certain scenarios. Specifically, it achieves the highest accuracy among all methods at the suborder to family level (92.06%), comparable with CGRclust (within the confidence interval). This indicates that iDeLUCS has particular strengths in clustering mtDNA datasets at some specific taxonomic levels. However, at other taxonomic levels, iDeLUCS's performance is generally lower than that of both CGRclust and DeLUCS, suggesting that its clustering efficacy may vary depending on the nature and extent of sequence variation at a particular taxonomic level, and the characteristics of the dataset being analyzed. Lastly, CGRclust consistently outperforms both the manual and automated versions of MeShClust v3.0 by a large margin (up to 75.08%). Table <ref> summarizes the accuracies of clustering the five viral datasets in the Group 2 dataset described in Table <ref> (viral whole genomes), across Tests 5-9. For the astrovirus genomes (Tests 5 and 6), clustering is performed at the family to genus level, while for the dengue virus, HCV, and HIV-1 genomes it is performed from the species to the virus subtype level. CGRclust consistently outperforms the other three clustering methods, demonstrating its robustness and accuracy in the context of virus mutagenesis and evolution. In Test 5, using an unbalanced astrovirus dataset, CGRclust surpasses DeLUCS and iDeLUCS by 15.06% and outperforms MeShClust-manual by 25.25%. These results demonstrate CGRclust's superior performance in challenging clustering tasks, such as those characterized by dataset imbalance, a condition under which the other methods (DeLUCS, iDeLUCS, and MeShClust v3.0) performed poorly. In Test 6, which featured a cluster-balanced astrovirus dataset, the accuracy of both CGRclust and DeLUCS improved, while the accuracy of iDeLUCS remained relatively unchanged. In the dengue virus genomes dataset (Test 7), CGRclust, along with DeLUCS and MeShClust-manual among the compared methods, achieved perfect accuracy (100%). For the HCV genome dataset (Test 8), CGRclust achieved an accuracy of 85.79%, surpassing all compared methods by a margin of 1.16% to 8.95%.
In the HIV-1 genomes dataset (Test 9), CGRclust achieves an accuracy that is 10.24% higher than DeLUCS and significantly surpasses both iDeLUCS and MeShClust-manual by 42.39% and 48.1%, respectively. Table <ref> displays the clustering accuracies for Tests 10-13 in the Group 3 dataset (mtDNA of Insects, Protists, and Fungi) from the study <cit.>, detailed in Table <ref>. Due to the complexities and specific characteristics of datasets in the Group 3 dataset, we observed an enhancement in CGRclust performance when the hyperparameter α was increased from its default value of 0.7 to 0.8, along with a greater emphasis on the instance-level contrastive head. This modification is evidenced in the third and fourth columns of Table <ref>, which display improvements in accuracy due to these adjustments. Generally, the change in the hyperparameter α led to increased accuracy across this group of datasets, with the most notable improvement seen in the Protist dataset in Test 11, where accuracy rose by 23.28%, almost bridging the gap with DeLUCS and surpassing iDeLUCS. However, in other datasets, this adjustment yielded minimal changes. This suggests that, in order to achieve optimal clustering outcomes, dataset-specific parameter optimization may be necessary to optimize different hyperparameters, including α. Further details on hyperparameter adjustment of α can be found in Section 2.5 (Twin contrastive learning (TCL)). In the comparison of clustering methods presented in Table <ref>, iDeLUCS exhibits superior performance over other methods in the Insects mtDNA dataset of Test 10. However, both DeLUCS and CGRclust demonstrate higher accuracies in the other three tests. Specifically, in Test 11 (Protists mtDNA), the accuracies of DeLUCS and CGRclust are superior to iDeLUCS by 8.10% and 5.50%, respectively. Furthermore, in Tests 12 and 13, both DeLUCS and CGRclust achieved higher accuracy in the Fungi classification at phylum and subphylum levels in comparison to iDeLUCS and MeShClust v3.0. The manual and automatic versions of MeShClust generally display lower accuracies, with the automatic version particularly underperforming the manual selection of identity threshold in three out of four datasets. It is important to note that these datasets pose significant clustering challenges due to variations in within-cluster similarities and different sequence lengths, which complicate the clustering process. While CGRclust did not always secure the top clustering accuracy across these datasets compared to other methods, the adjusted version of CGRclust demonstrated comparable clustering performance in the Insects (Test 10) and Protists (Test 11) datasets, as well as the Fungi dataset at the subphylum level (Test 13). Finally, for a direct comparison with MeShClust v3.0, Table <ref> summarizes the accuracies of clustering Group 4 dataset (the twelve synthetic datasets from <cit.> and described in Table <ref>), for all methods. In the Group 4 dataset, the terms “Medium-” and “Long-” in the dataset names indicate the sequence lengths. The numerical values ranging from 60 to 97 in the dataset names represent the identity score, a measure of sequence similarity. As this identity score increases, the sequences within a cluster become more similar, and this typically leads to enhanced performance of the clustering method. From the table, it is evident that CGRclust maintains a consistently high clustering accuracy, above 90%, across both “Medium” and “Long” dataset categories. 
Although it does not always achieve the highest accuracy compared to the other methods, CGRclust's performance is relatively close to that of DeLUCS and iDeLUCS.

§.§ Summative Observations

Overall, CGRclust exhibits versatility and robustness, consistently achieving high accuracy across twenty-five diverse datasets. CGRclust proved resilient to variations in dataset size, sequence length, and similarity, effectively handling the challenges posed by different genome types and taxonomic levels. Additionally, its performance in challenging scenarios, such as unbalanced datasets (e.g., Test 5), demonstrated its robustness under different conditions. Its consistent performance highlights its superior clustering capabilities and scalability compared to other established DNA clustering methods such as DeLUCS, iDeLUCS, and MeShClust v3.0. The training duration for the twenty-five datasets varied, with the shortest being 413 seconds (almost 7 minutes) in Test 4 and the longest being 10,371 seconds (almost 3 hours) in Test 18, depending on the sequence count. Notably, as CGRclust converts variable-length DNA sequences into fixed-size FCGRs, the training time remains relatively unaffected by sequence length. For detailed information regarding the total training time across all datasets, the reader is referred to Supplementary Material 6.

§ DISCUSSION

This study explored the novel application of twin contrastive clustering, combined with Chaos Game Representation (CGR), to the field of bioinformatics, particularly to the unsupervised clustering of DNA sequences. The findings from this study provide a new perspective on the potential for unsupervised clustering methods, originally designed for computer vision, to achieve high accuracy in DNA classification/clustering tasks, traditionally dominated by supervised learning. Implementing this methodology required developing a robust algorithm capable of handling diverse genomic data types, ensuring consistent performance across different datasets, including fish mitochondrial genomes (Cypriniformes order) at four taxonomic levels, as well as five different viral genomic datasets at genus or virus subtype levels. CGRclust achieved a high accuracy even when used with an unbalanced dataset in Test 5 (the accuracy of CGRclust was 85%, while the accuracies of the other methods were 15% to 34% lower), demonstrating its effectiveness in managing uneven data distributions. To ensure comprehensive evaluation and demonstrate the algorithm's versatility, we expanded our dataset selection to include datasets previously analyzed by other studies (i.e., iDeLUCS <cit.> and MeShClust v3.0 <cit.>). This inclusion allowed us to perform direct comparisons and validate the effectiveness of CGRclust across diverse genomic datasets. CGRclust successfully clustered all twenty-five tested datasets, whose sequences varied in length from 664 bp to approximately 100 kbp, covering a diverse range of cluster counts and sequence numbers. One of the primary challenges was optimizing the contrastive learning process to improve both the efficiency and accuracy of the clustering results. An effective pipeline that integrates data augmentation (generation of the mimic sequences), feature extraction, and twin contrastive learning mechanisms successfully addressed this issue. It is important to note that, although this study focused on DNA sequences in the clustering experiments, CGRclust could also be applied to RNA analysis.
This is due to the fact that both DNA and RNA are sequences made up of four “letters," that can each act as the label of one of the four corners of a CGR square. The applicability of our method has been primarily evaluated using the datasets mentioned, but further extensive validation across a wider range of DNA clustering tasks is necessary. This includes testing on DNA sequences longer than 100 kb, with a higher number of genome sequences per cluster, and a greater number of clusters, to confirm its general applicability. Beyond taxonomic clustering, this method could also be explored in other contexts such as exploring the impact of extreme environments on genomic signatures, and virus-host genomic signature similarity. Additionally, although CGRclust is more time-efficient compared to alignment-based methods and comparable to other clustering methods evaluated, it can still be time-consuming, especially when applied to large datasets. This could limit its practicality in settings where rapid processing of genomic data is required. This limitation comes from the substantial batch sizes required for effective contrastive learning. Moreover, finding a set of hyperparameters that is universally effective across different types of tests has proven to be challenging and may indeed be impossible given the diversity in genomic data and clustering objectives. In other words, each type of dataset may require individual finetuning of the model's hyperparameters in order to achieve optimal accuracy, and this can significantly increase the complexity and duration of the initial set-up. In light of these limitations, future work should focus on optimizing the computational efficiency of the method, exploring its scalability across diverse genomic datasets, and developing adaptive hyperparameter tuning mechanisms that can respond dynamically to the characteristics of the data being processed. § CONCLUSIONS This study introduces CGRclust, a novel twin contrastive clustering algorithm for the taxonomic clustering of unlabelled DNA sequences. CGRclust utilizes unsupervised machine learning to identify relevant and discriminative patterns in unlabelled, primary DNA sequence data, without relying on homology, sequence alignment, or any biological and taxonomic labelling. CGRclust achieves high clustering accuracies by combining the visual Chaos Game Representation of DNA sequences, with recent advancements in unsupervised learning for computer vision, namely twin contrastive learning and convolutional neural networks. It successfully clusters different datasets including full mitochondrial DNA genomes from fish, fungi, protists, and viral whole genomes across different taxonomic levels from phyla to intraspecific subtypes. Remarkably, CGRclust obtained high accuracy when encountering cluster imbalance in a dataset, showcasing its robustness with uneven data distributions. CGRclust achieves higher or comparable clustering accuracies compared with state-of-the-art existing unsupervised machine learning clustering methods, across all datasets tested. Notably, in 11 out of 13 real datasets, CGRclust achieved accuracy greater than 80%. In comparison, the DeLUCS algorithm surpassed this accuracy threshold in 7 out of 13 tests, iDeLUCS in only 5 tests, and MeShClust v3.0 only once. This demonstrates that CGRclust's performance is more consistently reliable than other methods. In particular, CGRclust performed especially well on viral datasets, where it consistently achieved the highest accuracies. 
§ LIST OF ABBREVIATIONS

CCH: Cluster-level Contrastive Head
CGR: Chaos Game Representation
CNN: Convolutional Neural Network
FCGR: Frequency Chaos Game Representation
HCV: hepatitis C virus
HIV: human immunodeficiency virus
ICH: Instance-level Contrastive Head
KNN: K-Nearest Neighbor
mtDNA: Mitochondrial DNA
NCBI: National Center for Biotechnology Information
ReLU: Rectified Linear Unit
SVM: Support Vector Machine
TCL: Twin Contrastive Learning

§ DECLARATIONS

* Ethics approval and consent to participate
Not applicable.
* Consent for publication
Not applicable.
* Availability of data and materials
The datasets generated and/or analyzed during the current study are all available in public repositories, and the links can be found in section 2.1 (Datasets) or the associated literature. The CGRclust method developed for this study, along with all datasets used, is available at <https://github.com/fatemehalipour/CGRclust>.
* Competing interests
The authors declare no competing interests.
* Funding
The authors declare that financial support was received for the research, authorship, and/or publication of this article. This work was supported by Natural Sciences and Engineering Research Council of Canada Grants RGPIN-2023-05256 to K.A.H. and RGPIN-2023-03663 to L.K. This research was enabled in part by support provided by Compute Canada RPP (Research Platforms Portals), https://www.computecanada.ca/, Grant 616 to K.A.H. and L.K. The funders had no role in the preparation of the manuscript.
* Authors' contributions
F.A. and L.K. conceived the study and wrote the manuscript. F.A. designed and performed the experiments. F.A., L.K., and K.A.H. conducted the data analysis and edited the manuscript, with K.A.H. contributing biological expertise. All authors read and approved the final manuscript.
* Acknowledgements
We thank Dr. R. Greg Thorn for his guidance on fungi taxonomy, Matheus Sanita Lima for guidance on protist taxonomy, Joseph Butler for proofreading the manuscript, and Pablo Millán Arias for his assistance with experiments with iDeLUCS.

Supplementary information

* Supplementary Material 1: Frequency Chaos Game Representation (<https://github.com/fatemehalipour/CGRclust/blob/main/supplementary/S1.pdf>)
* Supplementary Material 2: CGRclust Methodological Optimization (<https://github.com/fatemehalipour/CGRclust/blob/main/supplementary/S2.pdf>)
* Supplementary Material 3: Twin Contrastive Learning (<https://github.com/fatemehalipour/CGRclust/blob/main/supplementary/S3.pdf>)
* Supplementary Material 4: Optimal Threshold Values Across Datasets for MeShClust v3.0 (<https://github.com/fatemehalipour/CGRclust/blob/main/supplementary/S4.pdf>)
* Supplementary Material 5: Confidence Interval for CGRclust Clustering Accuracies (<https://github.com/fatemehalipour/CGRclust/blob/main/supplementary/S5.pdf>)
* Supplementary Material 6: CGRclust Training Times Across Different Tests (<https://github.com/fatemehalipour/CGRclust/blob/main/supplementary/S6.pdf>)
http://arxiv.org/abs/2407.03307v1
20240703174931
HoloHisto: End-to-end Gigapixel WSI Segmentation with 4K Resolution Sequential Tokenization
[ "Yucheng Tang", "Yufan He", "Vishwesh Nath", "Pengfeig Guo", "Ruining Deng", "Tianyuan Yao", "Quan Liu", "Can Cui", "Mengmeng Yin", "Ziyue Xu", "Holger Roth", "Daguang Xu", "Haichun Yang", "Yuankai Huo" ]
eess.IV
[ "eess.IV", "cs.CV" ]
Anonymous Nvidia Vanderbilt University Vanderbilt University Medical Center HoloHisto: End-to-end Gigapixel WSI Segmentation with 4K Resolution Sequential Tokenization Yucheng Tang1* Yufan He1 Vishwesh Nath1 Pengfeig Guo1 Ruining Deng2 Tianyuan Yao2 Quan Liu2 Can Cui2 Mengmeng Yin3 Ziyue Xu1 Holger Roth1 Daguang Xu1 Haichun Yang3 Yuankai Huo2,3 July 8, 2024 ======================================================================================================================================================================================== § ABSTRACT In digital pathology, the traditional method for deep learning-based image segmentation typically involves a two-stage process: initially segmenting high-resolution whole slide images (WSI) into smaller patches (e.g., 256×256, 512×512, 1024×1024) and subsequently reconstructing them to their original scale. This method often struggles to capture the complex details and vast scope of WSIs. In this paper, we propose the holistic histopathology (HoloHisto) segmentation method to achieve end-to-end segmentation on gigapixel WSIs, whose maximum resolution is above 80,000×70,000 pixels. HoloHisto fundamentally shifts the paradigm of WSI segmentation to an end-to-end learning fashion with 1) a large (4K) resolution base patch for elevated visual information inclusion and efficient processing, and 2) a novel sequential tokenization mechanism to properly model the contextual relationships and efficiently model the rich information from the 4K input. To our best knowledge, HoloHisto presents the first holistic approach for gigapixel resolution WSI segmentation, supporting direct I/O of complete WSI and their corresponding gigapixel masks. Under the HoloHisto platform, we unveil a random 4K sampler that transcends ultra-high resolution, delivering 31 and 10 times more pixels than standard 2D and 3D patches, respectively, for advancing computational capabilities. To facilitate efficient 4K resolution dense prediction, we leverage sequential tokenization, utilizing a pre-trained image tokenizer to group image features into a discrete token grid. To assess the performance, our team curated a new kidney pathology image segmentation (KPIs) dataset with WSI-level glomeruli segmentation from whole mouse kidneys. From the results, HoloHisto-4K delivers remarkable performance gains over previous state-of-the-art models. § INTRODUCTION Digital pathology, a rapidly evolving field of medical vision research, has seen a transformative advancement with large vision models (LVMs) <cit.>. This significant advent created new demands for high-quality perception, which is crucial for microscopic (e.g., whole slide) image computing. However, current models are limited to the capability of dissecting and interpreting small pre-defined patches within images <cit.>. Typically, pre-processed tiles are confined to dimensions of 512×512 pixels or resampled to smaller dimensions of 224×224 defined by some predominating frameworks <cit.>, which restrict the scopes of tissue details that can be captured. The absence of rich information hinders the model's performance, particularly impacting tasks of detecting small objects and dense prediction <cit.>. For instance, the detection and segmentation of complete medullas under kidney WSI will degrade by more than 10% in DSC by using a height and width of 512 or completely fail without patching pre-defined ROI <cit.>. 
This scalability issue, especially when dealing with gigapixel whole slide images (WSIs), remains a bottleneck in comprehensive and efficient computational analysis. To date, there are no established gold-standard datasets for segmenting gigapixel WSIs, resulting in a lack of comprehensive end-to-end methods in histopathology research. To include more information, a higher resolution is necessary, as shown in Fig. <ref>. Nevertheless, modeling ultra-high definition (UHD) images (e.g., beyond 4K resolution) is extremely challenging <cit.>. High-resolution dense prediction requires a balance of strong contextual information extraction and model efficiency <cit.>. Despite their significant benefits, convolutional and transformer models incur computational costs that grow quadratically with input size <cit.>. This presents a critical scaling challenge for processing whole slide images. Therefore, high-quality dense prediction requires models capable of understanding both global composition and local interactions at a suitable compression rate. In this work, we propose the HoloHisto framework, a holistic approach that redesigns histopathology image segmentation with three key features:

The Holistic Approach: We developed an end-to-end workflow for training and inference on gigascale WSIs, introducing a novel learning paradigm to the field of WSI analysis. HoloHisto is designed to handle inputs and outputs of any size, regardless of whether they are WSIs or smaller patches. By leveraging cuCIM, our dataloader facilitates real-time reading of WSIs at various magnification levels and supports random foreground patching, tiling, or augmentation, enhancing the flexibility and efficiency of our approach. Our approach is capable of dynamically creating datasets online from one or multiple WSIs, potentially comprising an unlimited number of images during training. It does not depend on pre-defined cropping strategies, offering a more flexible and scalable solution for training models on large-scale datasets. In the inference stage, it can generate the corresponding gigapixel output.

Architecture: We design an efficient backbone tailored for segmenting UHD images. First, we employ a sequential tokenizer that learns discrete visual tokens from perceptually rich constituents, streamlining 4K-resolution dense prediction. Second, to model the long discrete token sequences from these UHD images, we propose a two-stage ViT architecture that incorporates multi-scale attention <cit.>, using ReLU linear attention instead of the inefficient Softmax attention.

Data: As a significant effort to improve gigapixel WSI computing, our pathologists addressed the critical gap in the availability of imaging data. We present the Kidney Pathology Image Segmentation (KPIS) dataset, which facilitates research on the diagnosis and treatment of chronic kidney disease (CKD). Annotations are performed at the WSI level, serving as a foundational benchmark for developing cutting-edge image segmentation technologies.

In summary, this paper explores a new learning paradigm for WSI segmentation: 1) the HoloHisto framework, capable of parallel tile processing with direct WSI I/O; 2) a scalable segmentation backbone with a sequential tokenizer for ultra-high-resolution images; and 3) a gigapixel WSI annotation dataset as a foundational benchmark.

§ RELATED WORKS

Pathology Segmentation.
Recent advances in deep learning with CNN and transformers <cit.> achieve significant improvement in the field of pathology segmentation. Several works were proposed to address the challenges of microscopic imaging data, including H&E stained pathology images, fluoresce data, or other cell imaging modalities <cit.>. Numerous datasets for cell and tissue segmentation, including MoNuSeg <cit.> and NEPTUNE <cit.>, are available for identifying a variety of glomerular structures. In addition, instance segmentation is developed in the general cell imaging domain <cit.>. However, most current approaches focus on analyzing local tiles at a uniform magnification level, including nuclei, glomeruli, or tubules. This results in a notable gap in the segmentation of disease-related regions across entire whole slide images. Despite limited exploration or established efficacy in the field, we introduce a segmentation dataset and methodology designed for comprehensive WSI segmentation. Foundation Vision Models. Inspired by the achievement of large language models (LLMs) <cit.>, many endeavors <cit.> have been made to develop foundation vision models. With the development of transformer or state space models <cit.>, sequence modeling became the de facto way for modeling visual sentences <cit.>, which enabled the uniform modeling of various vision tasks. In this work, we explore large vision models (LVM) for digital pathology with two key features: (1) a pre-trained vector quantized generative adversarial networks (VQGAN) <cit.> that enables scalable tokenization for the ultra-high-resolution image at a compression rate; (2) an efficient multi-scale attention module for long sequence representation learning. § APPROACH In this work, we propose a holistic framework for segmenting gigapixel WSI. In addition, to model ultra-high resolution representation for dense prediction, we propose a model architecture for high-quality perception learning: 1) use the sequence tokenization for learning 4K visual parts at compression scale; 2) train ViT blocks with linear multi-scale attention. We summarize our approach in Fig <ref>. §.§ Sequence Tokenization To enable scalable modeling of ultra-high-resolution images while circumventing the quadratic increase in complexity associated with the scan-line order of patches, a discrete approach is essential. This method should efficiently encode visual elements and enable the sampling of high-quality perceptual representations from a probabilistic distribution. Inspired by neural discrete representation learning <cit.> and Vector Quantised (VQGAN) <cit.>, we employ an image tokenizer. This tokenizer maps input images to a semantic, discrete token codebook through a quantization layer. This technique captures the rich constituents of images, effectively reducing the expression length of original 4K resolution images. Consequently, it enables efficient modeling of global interrelations. Let the given UHD input be denoted by x, which exists in the space ℝ^H' × W' × 3. This image can be decomposed into a grid of codebook vectors z_enc, within the domain ℝ^h' × w' × d_z. Here, d_z represents the number of dimensions for each code. We approximate a given image x by x̂ = G(z_q). To obtain z_q, we start with the encoding ẑ = E(x), which resides in the space ℝ^h' × w' × d_z. Following this, we apply an element-wise quantization q(·) to each spatial code ẑ_ij within ℝ^n_z, aligning it with its nearest entry z_k. 
The process is formulated as z_q = q(ẑ) := argmin_z_k ∈ Z‖ẑ_ij - z_k‖, where z_q ∈ℝ^h' × w' × d_z.

§.§ Linear Multi-Scale Attention

High-resolution dense prediction models require strong representation learning capability with good efficiency. Instead of the widely used Softmax attention <cit.>, ReLU attention <cit.> provides linear complexity, which offers the flexibility of multi-scale modules for high-resolution dense prediction. Following the EfficientViT <cit.> design, we build transformer blocks consisting of two-stage multi-scale ReLU attention and FFN layers. The hierarchical multi-scale ReLU attention over three scales can be expressed as:

A_i = ReLU(Q_i) (∑_j=1^N ReLU(K_j)^T V_j) / (ReLU(Q_i) ∑_j=1^N ReLU(K_j)^T)

The terms (∑_j=1^N ReLU(K_j)^T V_j) and (∑_j=1^N ReLU(K_j)^T) do not depend on the query index i, so they need to be computed only once and are shared across all queries, which yields the linear complexity.

§.§ HoloHisto: End-to-end framework

The complete pipeline of training and inference is demonstrated in Fig. <ref>.

Training Paradigm. In prior studies <cit.>, pathology image training has been performed using pre-cropped patches of a fixed size over selected regions of interest (ROIs). This offline preprocessing, applied before the training and inference phases for gigapixel images, results in the model repeatedly learning from the same patches in each epoch. In this work, we introduce a random sampling paradigm for the digital pathology image loader based on cuCIM[<https://developer.nvidia.com/cucim-for-image-io-processing>], a library for accelerated multidimensional image I/O and processing. During the training phase, a foreground mask is created using a thresholding technique. Subsequently, we randomly extract ROIs at 4K resolution from the identified foreground areas. The dataloader then compiles a dataset from one or several whole slide images (WSIs). As the number of training epochs increases, the framework is capable of sampling a virtually “unlimited” number of patches from the WSIs.

Inference with WSI. During the inference stage, HoloHisto is capable of processing the entire WSI. The dataloader seamlessly reads the designated magnification level and isolates the foreground regions through thresholding. Subsequently, HoloHisto tiles the foreground with or without overlap, loads individual tiles into one-dimensional GPU buffers, and positions them within a pre-allocated GPU array until predictions have been made for all tiles. Finally, the predicted masks for each 4K tile are mapped back onto the WSI space.
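Before turning to the experiments, the following sketch illustrates the two computational ingredients above: the nearest-codebook quantization of the sequence tokenizer and a single-scale ReLU linear attention. Tensor shapes, the omission of the straight-through gradient estimator, and the reduction to a single scale are simplifying assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def quantize(z_e: torch.Tensor, codebook: torch.Tensor) -> torch.Tensor:
    """Map each encoder feature to its nearest codebook entry (z_q above).

    z_e:      (B, h, w, d_z) continuous encoder output
    codebook: (K, d_z) learned discrete token embeddings
    returns:  (B, h, w, d_z) quantized features z_q
    """
    flat = z_e.reshape(-1, z_e.shape[-1])                   # (B*h*w, d_z)
    # squared Euclidean distance to every codebook entry
    d = (flat.pow(2).sum(1, keepdim=True)
         - 2 * flat @ codebook.t()
         + codebook.pow(2).sum(1))                          # (B*h*w, K)
    idx = d.argmin(dim=1)                                   # nearest token id
    return codebook[idx].reshape(z_e.shape)

def relu_linear_attention(q, k, v, eps=1e-6):
    """Single-scale ReLU linear attention, O(N) in the number of tokens.

    q, k: (B, N, d); v: (B, N, d_v)
    """
    q, k = F.relu(q), F.relu(k)
    kv = torch.einsum("bnd,bne->bde", k, v)   # sum_j ReLU(K_j)^T V_j, computed once
    z = k.sum(dim=1)                          # sum_j ReLU(K_j), computed once
    num = torch.einsum("bnd,bde->bne", q, kv)
    den = torch.einsum("bnd,bd->bn", q, z).unsqueeze(-1) + eps
    return num / den
```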
Each image is at 3000×3000 resolution. §.§ Experiments We conduct a comparative study of the proposed HoloHisto on two datasets: KPIs and the publicly available tissue segmentation dataset NEPTUNE <cit.>. For the evaluation on the KPIs dataset, we present comparisons with conventional tile-based segmentation frameworks and compute metrics on each 4K patch. In addition, to show the effectiveness of WSI-level segmentation, we compute the Dice score on the entire WSI foreground. We re-trained baseline models including U-Nets <cit.>, UNETR <cit.>, SwinUNETR-V2 <cit.>, SegFormer <cit.>, and SAM variants <cit.>. We choose these methods to cover CNN, transformer, and foundation-model backbones. §.§ Ultra-high Resolution Analysis KPIs. Table <ref> shows the quantitative results for the binary segmentation task on high-resolution images in the KPIs dataset. We compared HoloHisto to various baselines, including CNN- and transformer-based methods. We evaluate the metrics in two formats: 1) Dice scores computed on 4K-resolution patches, where HoloHisto is trained and run at 4K while baseline methods operate at 1024×1024, the best scale for SAM and the other baselines; and 2) Dice scores computed on the entire WSI foreground. HoloHisto consistently outperforms state-of-the-art pathology segmentation backbones. In the ablation study on resolution, we observe that the higher the patch resolution, the larger the margin obtained by HoloHisto, indicating the effectiveness of the high-quality perception modeling brought by the tokenizer and EfficientViT. In Table <ref>, we show ablative results on the component design: the sequence tokenizer and ReLU linear attention are compared against the linear projection and multi-head self-attention (MHSA) of the vanilla ViT. NEPTUNE. We conducted additional experiments on the existing public dataset NEPTUNE. Across six different scales of tissue, HoloHisto consistently surpasses the baseline models. HoloHisto experiments are performed at 3000×3000, the largest resolution available from the data source, while baselines use 1024×1024 sliding-window inference following their best training strategy. The Dice scores are reported in Table <ref>. §.§ End-to-end Prediction Comparison with pre-tiling. In Table <ref>, the WSI handling section shows the results of using a 4K random sampler with the cuCIM and MONAI dataloader versus the pre-tiling strategy; we observe a larger margin of improvement using the end-to-end framework. A visualization of a complete WSI is shown in the right panel of Fig. <ref>. § DISCUSSION AND CONCLUSION This work tackles the fundamental task of segmenting histopathology images, a task that formerly relied on complex pipelines and was restricted to the analysis of small patches. We propose a holistic approach to segment gigapixel images with direct WSI I/O. To model the ultra-high resolution images within the loaded WSI, we propose a sequential tokenizer, which encodes images as a composition of perception parts and thereby avoids the quadratic increase in complexity. In addition, we evaluate linear ReLU multi-scale attention instead of Softmax attention for 4K UHD image tokens. In experiments, we demonstrate the first WSI-level segmentation via a 4K image patch sampler and show the effectiveness and capability of HoloHisto-4K by outperforming state-of-the-art approaches. Towards the development of cutting-edge computational research, we also provide a gold-standard, pathologist-annotated dataset as a WSI segmentation benchmark. Limitation.
It is important to note that we employed a sequence tokenizer pre-trained on natural images, whose learned codebook is not tailored to histopathology. Building a true pathology LVM therefore remains challenging, which limits the model's performance and its application to WSI analysis. We will continue to explore generalist models for pathology vision tasks.
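To make the linear multi-scale ReLU attention used in the Approach section more concrete, the following is a minimal single-head sketch in Python/NumPy. It only illustrates the associativity trick behind the linear complexity; the multi-scale aggregation, the learned projections, and the small epsilon added to the denominator are simplifying assumptions rather than the authors' implementation.

import numpy as np

def relu_linear_attention(Q, K, V, eps=1e-6):
    """ReLU linear attention: O(N * d^2) instead of O(N^2 * d).

    Q, K: (N, d) query/key matrices, V: (N, d_v) value matrix.
    The key observation is associativity: ReLU(Q) @ (ReLU(K).T @ V)
    lets the (d, d_v) summary be computed once and reused for every query.
    """
    Qr = np.maximum(Q, 0.0)                 # ReLU(Q), shape (N, d)
    Kr = np.maximum(K, 0.0)                 # ReLU(K), shape (N, d)
    kv = Kr.T @ V                           # sum_j ReLU(K_j)^T V_j, shape (d, d_v)
    k_sum = Kr.sum(axis=0)                  # sum_j ReLU(K_j)^T, shape (d,)
    numer = Qr @ kv                         # shape (N, d_v)
    denom = Qr @ k_sum + eps                # shape (N,)
    return numer / denom[:, None]

# toy usage: a token sequence of length N with dimension d
rng = np.random.default_rng(0)
N, d = 1024, 64
Q, K, V = rng.normal(size=(3, N, d))
out = relu_linear_attention(Q, K, V)
print(out.shape)  # (1024, 64)

Because the key-value summary is shared by all queries, the cost grows linearly with the number of 4K-tile tokens instead of quadratically.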
http://arxiv.org/abs/2407.02103v1
20240702094253
Rossby wave instability in weakly ionized protoplanetary disks. I. azimuthal or vertical B-fields
[ "Can Cui", "Ashutosh Tripathi", "Cong Yu", "Min-Kai Lin", "Andrew Youdin" ]
astro-ph.EP
[ "astro-ph.EP" ]
]Rossby wave instability in weakly ionized protoplanetary disks. I. azimuthal or vertical B-fields ]Can Cui^1,2mailto:can.cui@astro.utoronto.ca can.cui@astro.utoronto.ca , Ashutosh Tripathi^2, Cong Yu^3,4, Min-Kai Lin^5,6 and Andrew Youdin^7,8 ^1Department of Astronomy and Astrophysics, University of Toronto, Toronto, ON M5S 3H4, Canada ^2DAMTP, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA, UK ^3School of Physics and Astronomy, Sun Yat-Sen University, Zhuhai 519082, China ^4CSST Science Center for the Guangdong-Hong Kong-Macau Greater Bay Area, Zhuhai 519082, China ^5Institute of Astronomy and Astrophysics, Academia Sinica,Taipei 10617, Taiwan, R.O.C. ^6Physics Division, National Center for Theoretical Sciences, Taipei 10617, Taiwan, R.O.C. ^7Department of Astronomy and Steward Observatory, University of Arizona, Tucson, AZ 85721, USA ^8The Lunar and Planetary Laboratory, University of Arizona, Tucson, AZ 85721, USA 2023 firstpage–lastpage [ [ Received 2024; accepted 2024 ================================ § ABSTRACT Rossby wave instability (RWI) is considered the underlying mechanism to crescent-shaped azimuthal asymmetries, discovered in (sub-)millimeter dust continuum of many protoplanetary disks. Previous works on linear theory were conducted in the hydrodynamic limit. Nevertheless, protoplanetary disks are likely magnetized and weakly ionized. We examine the influence of magnetic fields and non-ideal magnetohydrodynamic (MHD) effects - namely, Ohmic resistivity, Hall drift, and ambipolar diffusion - on the RWI unstable modes. We perform radially global linear analyses, employing constant azimuthal (B_ϕ) or vertical (B_z) background magnetic fields. It is found that, in the ideal MHD regime, magnetism can either enhance or diminish RWI growth. Strong non-ideal MHD effects cause RWI growth rates to recover hydrodynamic results. The sign of Hall Elsässer number subtly complicates the results, and vertical wavenumbers generically diminish growth rates. hydrodynamics – MHD – methods: analytical – protoplanetary disks § INTRODUCTION Protoplanetary disks are composed of gas and dust orbiting pre-main-sequence stars <cit.>. They are the birth place of planets, where micron-sized dust grains coalesce and evolve into km-sized planetesimals, eventually giving rise to terrestrial planets or gas giants cores. Nevertheless, the growth of dust grains faces several barriers, including bouncing, fragmentation, and fast radial drift <cit.>. Rossby wave instability (RWI) is perhaps a promising mechanism to circumvent these barriers. Its non-linear state generates large, lopsided crescent-shaped vortices, which concentrate grains towards pressure maximum, fostering streaming instability and subsequent gravitational collapse <cit.>. These vortices manifest as azimuthal asymmetries observed in (sub-)millimeter dust continuum as well as CO rotational transition lines <cit.>. Notable examples include IRS 48 <cit.>, HD142527 <cit.>, and AB Aur <cit.>. The RWI is triggered by local extrema in the radial profile of vortensity, (ΣΩ/κ^2)S^2/Γ <cit.>. It gives exponential growth of non-axisymmetric modes (∝exp[imϕ], where m=1,2,...) on each side of the corotation radius. These unstable Rossby modes are confined between the inner and outer Lindblad resonances, where density waves are launched and propogate away from the resonances <cit.>. 
The eigenfunctions of Rossby waves manifest as anticyclonic vortices, characterized by vorticity (∇×δ𝐮) directed oppositely to the rotation of the disk, yielding a maximum perturbed pressure at the vortex core <cit.>. Extensions of the classic RWI incorporating self-gravity <cit.>, magnetic fields <cit.>, and dust-gas two-fluid drag <cit.> have been explored. In the context of protoplanetary disks, numerical simulations of the RWI have been commonly performed at the gap edges carved by a planet <cit.>, or at the dead zone edges of the magneto-rotational instability <cit.>, for which local vortensity extrema take place. These simulations elucidate that the non-linear saturation of the Rossby vortices is primarily governed by m=1 modes <cit.>. Furthermore, the long-term survival of Rossby vortices are examined under secondary instabilities <cit.>, thermal relaxation <cit.>, and dust-gas two-fluid framework <cit.>. While the RWI is inherently a hydrodynamic phenomenon, it is crucial to acknowledge the magnetized nature of protoplanetary disks, where magnetic fields likely originate from primordial molecular clouds <cit.>. Owing to the weak thermal ionization of passively heated protoplanetary disks and the modest non-thermal ionization by, for instance, the stellar FUV, EUV, X-rays, and cosmic rays <cit.>, the gas and magnetic fields are not perfectly coupled. The disk material may be weakly ionized, and three non-ideal MHD effects come into play: Ohmic resistivity, the Hall drift, and ambipolar diffusion <cit.>. At the same ionization fraction, Ohmic, Hall, and ambipolar diffusivities are respectively proportional to η_O∝const., η_H∝ B/ρ, and η_A∝ B^2/ρ^2. Consequently, Ohmic resistivity dominates in high-density regions near the inner disk and at the midplane, while ambipolar diffusion becomes prominent in the outer disk. Hall drift is most effective between these regions. Both numerical simulations and analytical theory have underscored the significance of non-ideal MHD effects in shaping disk structure and evolution, such as dead zone structures <cit.>, magnetized wind kinematics <cit.>, dust dynamics and distribution <cit.>, heating and disk temperature profiles <cit.>, and the formation of annular substructures <cit.>. Previously, the non-ideal MHD effects were incorporated in the linear studies of several instabilities applicable to protoplanetary disks, for example, the MRI <cit.> and the vertical shear instability <cit.>. These investigations unveiled significant alterations in the unstable modes induced by non-ideal MHD physics. In the context of MRI, both Ohmic resistivity and ambipolar diffusion tend to stabilize the instability, although under specific conditions, ambipolar diffusion shear instability can be excited <cit.>. The impact of Hall drift is twofold, based upon the sign of the Hall Elsässer number <cit.>. Concerning the vertical shear instability, strong ionization can substantially weaken the unstable modes, while subdued modes regain strength under weak ionizations <cit.>. To deepen our comprehension of RWI behavior in magnetized disks, investigations must encompass both ideal and non-ideal MHD physics. Previous studies have primarily focused on the ideal MHD regime. <cit.> utilized Lagrangian perturbation theory to probe the influence of toroidal magnetic fields on the RWI. Their findings revealed a continuous reduction in growth rates with increasing magnetization. Subsequently, <cit.> explored the impact of poloidal fields using a vertically integrated disk model. 
Interestingly, they observed a dichotomy in results: pure vertical fields typically diminish RWI growth rates, whereas the presence of radial fields tends to enhance them. In this work, we explore the RWI unstable modes in the ideal and non-ideal MHD limit, employing three-dimensional linear analysis with Eulerian perturbations. The paper is organized as follows. In Section <ref>, we introduce the governing dynamical equations and the perturbation equations that delineate our theoretical framework. Section <ref> elaborates on the numerical methodologies employed to solve the set of ordinary differential equations (ODEs) governing the magnetized RWI. In Section <ref>, we present numerical solutions obtained and elucidate the observed RWI behaviors. We summarize the main findings in Section <ref> and compare the RWI linear growth to MRI in Section <ref>. § THEORY §.§ Dynamical equations The stability of a three-dimensional, thin, magnetized disk with background radial vortensity extrema is analyzed in cylindrical coordinates (r,ϕ,z). The gravitational potential is given by Φ=-GM_⋆/(r^2+z^2)^1/2, where M_⋆ is the mass of central star. Disk self-gravity is neglected. The governing equations for this compressible, magnetized disk in Gaussian units are the continuity, momentum, and entropy conservation equations, dρ/dt+ρ∇·v=0, dv/dt + 1/ρ∇[P+B^2/8π]+∇Φ - 1/4πρ(B·∇)B=0, dS/dt=0, where the material derivative is defined as d/dt≡/ t+v·∇, and S≡ P/ρ^Γ is the entropy of the disk matter. The induction equation is written as, B/ t-∇×(v×B-cE')=0. The non-ideal MHD terms manifest in the electric field of the rest fluid frame, E' = 4π/c^2[η_OJ+η_HJ×b-η_A(J×b)×b], where the unit vector of magnetic field is denoted by b=B/|B|, and Ohmic, Hall, ambipolar diffusivities are denoted by η_O, η_H, and η_A, respectively. The current density is J=c∇×B/4π. Using the divergence free condition ∇·B=0, the induction equation can be cast into dB/dt -(B·∇)v +(∇·v)B + c∇×E'=0. §.§ Equilibrium of the disk The equilibrium disk model is stationary (/ t=0), axisymmetric (/ϕ=0), and radially global. All background quantities are independent of z. The steady-state physical quantities are denoted by the subscript “0”. The equilibrium velocity field has only the azimuthal component, v_ϕ0=Ω_0 r. The necessary condition for Rossby wave instability is the presence of radial vortensity extrema. This can be achieved by setting up a Gaussian bump centered at r=r_0 in the density profile <cit.>, ρ_0/ρ_00 = 1+ (A-1)exp[-1/2(r-r_0/Δ r)^2], where ρ_00 is the background density profile without the Gaussian bump. Despite of the radially global nature of the disk model, ρ_00 is taken to be a constant for simplicity. Setting it to a power-law distribution will not qualitative alter the results as noted in <cit.>. To compute the background pressure P_0, we consider a barotropic flow, and hene the pressure is only a function of density P_0(ρ_0), expressed by P_0/P_0∗= [ρ_0/ρ_0∗]^Γ, where subscript “0∗” denotes background quantities evaluated at r_0, and Γ is the adiabatic index. The adiabatic sound speed is defined as c_s0≡(Γ P_0/ρ_0)^1/2. By specifying c_s0∗, we can obtain P_0∗ and subsequently P_0. Throughout the paper, we set GM = ρ_0 = r_0=1, Δ r/r_0=0.05, Γ=5/3, A=1.5, and the disk aspect ratio c_s0∗/v_ϕ0∗=0.06 at r_0. The pressure scale height H≈ c_s0∗/Ω_K0∗, where Ω_K0∗ is the Keplerian angular speed at r_0. We construct a simplified equilibrium solution for this study, setting B_0 a constant vector. 
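As a concrete illustration of this equilibrium, the short Python snippet below evaluates the Gaussian-bump density and the barotropic pressure with the fiducial parameters quoted above (A=1.5, Δr/r_0=0.05, Γ=5/3, aspect ratio 0.06). The normalization of P_0 through the aspect ratio at r_0 is an assumption made for illustration and may differ in detail from the code used for the calculations in this paper.

import numpy as np

# fiducial parameters from the text (code units: G*M = rho_00 = r0 = 1)
GM, rho00, r0 = 1.0, 1.0, 1.0
A, dr = 1.5, 0.05 * r0          # bump amplitude and width
Gamma = 5.0 / 3.0               # adiabatic index
aspect = 0.06                   # c_s0* / v_phi0* at r0

r = np.linspace(0.4, 1.6, 1001)

# Gaussian bump on a constant background density
rho0 = rho00 * (1.0 + (A - 1.0) * np.exp(-0.5 * ((r - r0) / dr) ** 2))

# barotropic pressure P_0 proportional to rho_0^Gamma, normalized so that
# the adiabatic sound speed at r0 matches the chosen aspect ratio
vk0 = np.sqrt(GM / r0)                       # Keplerian speed at r0
cs0_star = aspect * vk0                      # sound speed at r0
rho0_star = rho00 * A                        # density at the bump center
P0_star = cs0_star ** 2 * rho0_star / Gamma  # from c_s^2 = Gamma P / rho
P0 = P0_star * (rho0 / rho0_star) ** Gamma

cs0 = np.sqrt(Gamma * P0 / rho0)             # radial sound-speed profile
print(rho0.max(), cs0[r.searchsorted(r0)])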
In equilibrium, the radial momentum equation gives v_ϕ 0^2(r,z)/r=1/ρ_0 P_0(r)/ r + Φ(r,z)/ r. In the ideal MHD limit, the ϕ-component of induction equation is [B_r0/ r+B_z0/ z-B_r0/r] v_ϕ 0=0. It is immediately seen that B_ϕ0 is not involved in these equations, and hence is free to specify. By thin disk approximations, Φ(r,z) ≈Φ(r)=-GM_⋆/r, and v_ϕ 0(r,z)≈ v_ϕ 0(r). Then for B_r0=0, we are completely free to specify B_z0, and the constant equilibrium magnetic field is taken to be B_0=(0,B_ϕ0,B_z0). When considering non-ideal MHD physics to equilibrium solutions, only the induction equation (<ref>) is modified. The current density J vanishes for a pure B_z field, and eq (<ref>) is strictly satisfied. However, the existence of curvature terms from cylindrical coordinates render non-zero ∇× cE' for a pure B_ϕ field. Hence, our equilibrium magnetic field is only valid if we ignore the curvature terms in curl operator. The strengths of B_ϕ0 and B_z0 are parameterized by plasma β, defined as the ratio of gas pressure to magnetic pressure, β=8π P_0/B_0^2, and β >1 is commonly satisfied in protoplanetary disks. Finally, we quantify the strength of non-ideal MHD effects by their respective Elsässer numbers, Λ=v_A^2/η_O, Ha=v_A^2/η_H, Am=v_A^2/η_A, where the Alfvén velocity is v^2_A=B_0^2/4πρ, and is the Keplerian angular speed. §.§ Perturbations of the disk Consider small perturbations to eqs (<ref>)-(<ref>), such that v = v_0+δv(r,z,ϕ,t), B = B_0+δB(r,z,ϕ,t), ..., . We linearize these equations by considering Eulerian perturbations ∝ f(r)exp(ik_zz+imϕ-iω t), where k_z is the vertical wavenumber, m is the azimuthal mode number, and ω=ω_r+iγ is the mode frequency, where γ denotes the growth rate. We further define the Doppler-shifted wave frequency Δω=ω-mΩ, the azimuthal wavenumber k_ϕ=m/r, and the radial epicyclic frequency κ=[r^-3d(r^4Ω^2)/dr]^1/2. We now drop subscript “0” for the background quantities throughout rest of the paper. Our model encompasses eight perturbed quantities, δv,δB,δρ,δΨ, where δΨ = δ P/ρ, and δΨ/ r = 1/ρδ P/ r - 1/ρδρ/ rδΨ. We follow <cit.>'s and <cit.>'s original paper, and define radial length scales of entropy, pressure, and density variations as L_S ≡Γ/d ln S/dr, L_P ≡Γ/d ln P/dr, L_ρ≡1/d lnρ/dr. These length scales are related by 1/L_P = 1/L_S + 1/L_ρ. For a barotropic flow, the length scale of entropy approaches infinity, 1/L_S→ 0. To present the perturbation equations optimally, we separate them into two cases: k_z=0 (<ref>) and k_z≠ 0 (Appendix <ref>). We first show the set of linearized equations in the ideal MHD limit. In the non-ideal MHD limit, we split the equations into pure vertical and toroidal magnetic field regimes. §.§.§ ideal MHD We first derive the linearized equations in the ideal MHD limit. For continuity equation (<ref>) it is δ v_r/ r + [1/r+1/L_ρ]δ v_r + ik_ϕδ v_ϕ - iΔωδΨ/c_s^2 =0. The linearized momentum equations (<ref>) are iΔω δ v_r + 2Ωδ v_ϕ - δΨ/ r +1/4πρ[ik_ϕ B_ϕδ B_r-2B_ϕ/rδ B_ϕ -B_zδ B_z/ r-B_ϕδ B_ϕ/ r+B_ϕ^2/rδρ/ρ] =0, iΔωδ v_ϕ - κ^2/2Ωδ v_r - ik_ϕδΨ + 1/4πρ[B_ϕ/rδ B_r-ik_ϕ B_zδ B_z] = 0, iΔωδ v_z + ik_ϕ B_ϕ/4πρδ B_z =0. The linearized induction equations (<ref>) are iΔωδ B_r + ik_ϕ B_ϕδ v_r =0, iΔωδ B_ϕ + [ v_ϕ/ r-v_ϕ/r] δ B_r - B_ϕδ v_r/ r = 0, iΔωδ B_z - B_z/rδ v_r - ik_ϕ B_zδ v_ϕ + ik_ϕ B_ϕδ v_z - B_zδ v_r/ r =0. Lastly, the entropy conservation (<ref>) yields iΔωδ P/P - iΔωΓδρ/ρ - Γ/L_Sδ v_r =0. The barotropic assumption gives 1/L_S→ 0, and eq (<ref>) simplifies to δΨ= c_s^2δρ/ρ. 
It is the barotropic fluid assumption that transforms the set of perturbation equations into standard eigenvalue problems (see <ref>). According to eq (<ref>), it is straightforward to see that the perturbed pressure is related to density by δ P=c_s^2δρ. §.§.§ non-ideal MHD limit: pure B_ϕ When non-ideal MHD effects apply, the continuity and entropy equations (eqs (<ref>) and (<ref>)) remain unchanged, while the other equations are modified. We now express the linearized equations for pure B_ϕ. The perturbed momentum equations are iΔωδ v_r + 2Ωδ v_ϕ - δΨ/ r + 1/4πρ[ik_ϕ B_ϕδ B_r - 2B_ϕ/rδ B_ϕ-B_ϕδ B_ϕ/ r+B_ϕ^2/rδρ/ρ] =0, iΔωδ v_ϕ - κ^2/2Ωδ v_r - ik_ϕδΨ+1/4πρ[B_ϕ/rδ B_r]=0, iΔωδ v_z + ik_ϕ B_ϕ/4πρδ B_z=0. The perturbed induction equations are iΔωδ B_r + ik_ϕ B_ϕδ v_r +[η_O+η_A][^2δ B_r/ r^2 + 1/rδ B_r/ r-k_ϕ^2δ B_r-δ B_r/r^2-2/rik_ϕδ B_ϕ] +η_H[k_ϕ^2 δ B_z] =0, iΔωδ B_ϕ + [ v_ϕ/ r-v_ϕ/r] δ B_r - B_ϕδ v_r/ r +[η_O+η_A][^2δ B_ϕ/ r^2 + 1/rδ B_ϕ/ r-k_ϕ^2δ B_ϕ-δ B_ϕ/r^2+2/rik_ϕδ B_r ] +η_H[ik_ϕ(δ B_z/ r - δ B_z/r) ] =0, iΔωδ B_z + ik_ϕ B_ϕδ v_z +η_O[^2δ B_z/ r^2 + 1/rδ B_z/ r-k_ϕ^2δ B_z] +η_A[-k_ϕ^2δ B_z] +η_H[ik_ϕ(- δ B_ϕ/ r-δ B_ϕ/r + ik_ϕδ B_r) ] =0. In the limit of pure B_ϕ and k_z=0, all three non-ideal MHD effects are present. Ohmic resistivity and ambipolar diffusion exhibit identical terms in both radial and azimuthal components of the induction equation. We set δ B_z=δ v_z=0 because δ B_z and δ v_z solely appear in eqs (<ref>) and (<ref>). §.§.§ non-ideal MHD limit: pure B_z For a pure B_z field, the vertical component of the momentum equation yields δ v_z =0, while the remaining two linearized momentum equations are iΔωδ v_r + 2Ωδ v_ϕ - δΨ/ r -1/4πρ[B_zδ B_z/ r]=0, iΔωδ v_ϕ - κ^2/2Ωδ v_r - ik_ϕδΨ + 1/4πρ[-ik_ϕ B_zδ B_z]=0. The non-ideal MHD effects manifest in the induction equation (<ref>), iΔωδ B_r +η_O[^2δ B_r/ r^2 + 1/rδ B_r/ r-k_ϕ^2δ B_r-δ B_r/r^2-2/rik_ϕδ B_ϕ] =0, iΔωδ B_ϕ + [ v_ϕ/ r-v_ϕ/r] δ B_r +η_O[^2δ B_ϕ/ r^2 + 1/rδ B_ϕ/ r-k_ϕ^2δ B_ϕ-δ B_ϕ/r^2+2/rik_ϕδ B_r ] =0, iΔωδ B_z +B_z[1/L_ρδ v_r-iΔω/c_s^2δΨ] +[η_O+η_A][^2δ B_z/ r^2 + 1/rδ B_z/ r-k_ϕ^2δ B_z] =0. In the limit of pure B_z and k_z=0, only Ohmic resistivity and ambipolar diffusion contribute to the linearized induction equations, but not Hall drift. Ambipolar diffusion manifests only in the vertical component of the induction equation, exhibiting identical terms to Ohmic resistivity. Furthermore, δ B_r and δ B_ϕ are involved only in eqs (<ref>) and (<ref>), and hence can be set as δ B_r=δ B_ϕ=0. § NUMERICAL METHODS We solve the linearized equations presented in <ref> numerically. Two numerical methods are employed to solve ODE eigenvalue value problems: the pseudospectral method (<ref>) and the finite difference method (<ref>). We find that the spectral method, via the Python package dedalus, outperforms the finite difference method. The finite difference method produced a number of spurious modes with oscillation frequencies w_r that closely resembled those of RWI modes, particularly when incorporating Hall drift. Consequently, we use the spectral method as the primary numerical approach for solving the ODE systems and compare its results to those obtained from the finite difference method for a selected range of parameters. The details of these two methods are provided below. 
§.§ Pseudospectral method Pseudospectral methods, also known as orthogonal collocation methods, approximate the solution of differential equations at selected collocation points, by a weighted sum of orthogonal basis functions, which are often chosen to be orthogonal polynomials up to a certain degree. Chebyshev polynomials of the first kind T_n, where n=0,1,2,...,N-1, are chosen in our problem, for which the eigenfunctions are expanded in. The radial domain, that spans r∈[0.4,1.6], is discretized into N Chebyshev collocation points. To minimize the interpolation error, these nodes are non-uniform and are selected as the roots of Nth degree Chebyshev polynomial T_N, which cluster near the ends of the domain <cit.>. The differential equations described in <ref> construct standard linear eigenvalue problems. Written compactly in a generalized matrix form, it is Ax⃗ = ℒx⃗ + ωℳx⃗ = 0, where ω is the eigenvalue, x⃗ = [δ⃗ ⃗v⃗_r,δ⃗ ⃗B⃗_r,δ⃗ρ⃗,...]^T is a vector of eigenfunctions with M perturbed quantities, and A, ℒ, ℳ are MN× MN sized matrices, with ℒ composed of linear operators. We employ dedalus[<https://dedalus-project.org/>], a general purpose spectral code for differential equations <cit.>, to solve the linear eigenvalue problem described above. We utilize the dense solver method by in dedalus, for which matrix A is converted to dense arrays, and the routine in Python is utilized to directly solve the eigenvalue problem. Generally, a numerical resolution of N=256 is adopted. Spurious solutions can arise due to the truncation of differential equations to finite-dimensional algebraic equations <cit.>. Most spurious modes are eliminated by increasing the numerical resolution by 1.5 times. By comparing solutions between low and high resolutions, we retain only the valid modes within a tolerance of 10^-6. §.§ Finite difference method In finite difference method, we approximate the ODEs by finite difference equations, a technique previously adopted in RWI literature by <cit.>. We discretize the ODEs on a grid with N=1001 points and uniform spacing h, spanning the domain r∈[0.4,1.6]. The system can be compactly expressed using matrices. The MN× MN sized matrix A, corresponding to the set of ODEs, is written as Ax⃗ = A_00 ... A_0,M-1 ... ... ... A_M-1,0 ... A_M-1,M-1x⃗=0. Matrix A consists of M× M submatrices A_ij, for i,j = 0,...,M-1. The value of M is set by the number of perturbed quantities contained in the system of ODEs. Each submatrix A_ij has a size of N× N. The vector of perturbed quantities is expressed as x⃗ = [δ⃗ ⃗v⃗_r,δ⃗ ⃗B⃗_r,δ⃗ρ⃗,...]^T, where each perturbed quantity, for example, δv⃗_r=δv⃗_r(r_k), is evaluated at grid points r_k, for k=0,...,N-1. The entries of A_ij are filled in by constructing the linear operators of the first derivative d/dr and the second derivative d^2/dr^2. We adopt the central differencing scheme to obtain the linear operators of the first derivative <cit.>, A_ij,n,n-1 = -1/2h, A_ij,n,n+1 = 1/2h, and the second derivative, A_ij,n,n-1 = 1/h^2, A_ij,n,n = -2/h^2, A_ij,n,n+1 = 1/h^2, for n=1,...,N-2. The differential equations described in <ref> form standard linear eigenvalue problems. The eigenvalue (ω) and the corresponding eigenfunction (perturbed quantities x⃗) are readily determined by utilizing routine in Python. §.§ Boundary conditions To accomodate the density waves, WKBJ boundary conditions are imposed, and the linear operator d/dr-ik_r=0 is applied to the boundary. 
In the finite difference method, the boundary conditions involve in the zeroth and (N-1) th row of A_ij. Using the forward differencing scheme for the outer boundary conditions and the backward differencing scheme for the inner boundary conditions, the entries of A_ij at boundaries can be written as A_ij,00 = -1/h, A_ij,01 = 1/h - ik_r, A_ij,N-1,N-2 = -1/h - ik_r, A_ij,N-1,N-2 = 1/h. To solve for k_r at the boundaries, away from the RWI region where the background quantities vary slowly over radius, we use WKBJ analysis by assuming δ v, δ B, δρ ... ∝exp(ik_rr). Substituting the derivatives by k_r, we obtain B(k_r,ω)y⃗=0 at the boundaries, where B is a matrix containing k_r and ω, and y⃗ = [δ v_r,δ B_r,δρ,...]^T. Both B and y⃗ are evaluated at r_0 and r_N-1. We take ω to be the value obtained from last β. This approach works if we gradually decrease β values in the domain of interest. Now the only unknown in matrix B is k_r, which is determined using the Newton-Raphson method by setting the determinant of B to zero, det B(k_r)=0. The implementation of this method utilizes the source code of as a reference. The iteration converges within five steps for a tolerance of 10^-10. We choose azimuthal mode number m=2,3,4,5 for the calculations presented in this work. For smaller or larger mode numbers, the growth rates can be relatively small and are sensitive to the grid resolution. We note that when m=1, the inner boundary condition differs from the WKB one described earlier. In the hydrodynamic limit, the inner boundary lies within the evanescent region, and the inner Lindblad resonance is absent. Instead, one can assume a power-law behavior in r and ensure regularity at r=0 <cit.>. § RESULTS The numerical solutions are presented for the ideal MHD limit (<ref>), Ohmic resistivity (<ref>), ambipolar diffusion (<ref>), and the Hall drift (<ref>), respectively. The analysis will be focused on the RWI growth rates, as the eigenfunctions are similar to those in the pure hydrodynamic limit; two Rossby waves are located on each side of the corotation radius, with density waves excited at the inner and outer Lindblad resonances, propagating away from the corotation radius <cit.>. Note that in the Figures shown below, we select and present only the fastest growing mode for a specific parameter, though a handful of slower growing modes also exist. §.§ Ideal MHD To analyze the numerical solutions in the ideal MHD limit, we begin with the pure B_ϕ field. The top panel of Figure <ref> depicts normalized RWI growth rates γ/Ω_K0 as a function of plasma β, for different azimuthal mode numbers m=2,3,4,5 at an infinite vertical wavelength, k_z=0. At β=100, the growth rates nearly recover the hydrodynamic RWI results, with the maximum growth rate observed at m=4 for the chosen set of parameters. As β exceeds 100, the curves begin to plateau. Conversely, as plasma β decreases, the growth rates diminish for all azimuthal mode numbers m investigated. This suggests that increased magnetization suppresses RWI growth. It can be interpreted as that the magnetic fields are perfectly coupled with gas in the ideal MHD limit, potentially hindering the free movement of perturbed gas. The top panel of Figure <ref> shows growth rates for different vertical wavenumbers k_z=0, 1/10H, 1/5H at m=4. It is evident that increasing vertical wavenumbers tend to reduce RWI growth rates across the range of β values explored. Figure <ref> displays the RWI growth rates in the ideal MHD limit for the pure B_z field. 
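Before turning to the results, the following minimal Python sketch illustrates the finite-difference machinery described above on a toy operator with a known spectrum. The Dirichlet boundary treatment and the single scalar equation are simplifying assumptions; the actual calculation assembles the MN × MN block matrix A of the linearized MHD equations and imposes the WKBJ boundary conditions.

import numpy as np

def fd_operators(N, h):
    """Central-difference first- and second-derivative matrices (interior rows),
    following the stencils quoted in the text."""
    D1 = np.zeros((N, N))
    D2 = np.zeros((N, N))
    for n in range(1, N - 1):
        D1[n, n - 1], D1[n, n + 1] = -1.0 / (2 * h), 1.0 / (2 * h)
        D2[n, n - 1], D2[n, n], D2[n, n + 1] = 1.0 / h**2, -2.0 / h**2, 1.0 / h**2
    return D1, D2

# toy eigenproblem: -u'' = lambda * u on [0, 1] with u(0) = u(1) = 0,
# whose exact eigenvalues are (k * pi)^2.  The full RWI system replaces
# this single operator with the block matrix A built from the linearized ODEs.
N, L = 401, 1.0
h = L / (N - 1)
_, D2 = fd_operators(N, h)
Ain = -D2[1:-1, 1:-1]             # drop boundary rows/columns to impose u = 0
evals = np.sort(np.linalg.eigvals(Ain).real)
print(evals[:3])                  # ~ [9.87, 39.5, 88.8] = (pi^2, 4 pi^2, 9 pi^2)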
Unlike in the pure B_ϕ field, where growth rates monotonically decrease towards smaller β, different azimuthal mode numbers m exhibit distinct behaviors across the range of β (top panel). The parameter space of β is now extended down to 0.01, in order to provide ample coverage to observe the trend in growth rates. For relatively large m=5, the RWI growth rates decrease monotonically as β decreases, similar to pure B_ϕ model. Conversely, smaller m=2,3,4 show an increase in growth rates as β decreases, followed by a decline as β drops further. The peak of the growth rates occurs at higher β values for larger m. This is in agreement with <cit.>, who observed a peak in growth rate for m=4 around β≈ 0.1, as shown in their Figure 2. As β approaches 0.01, the growth rates asymptotically approach a stable value for each m investigated. When there is a finite vertical wavelength (k_z≠ 0), the growth rates diminish with higher k_z depicted in the top panel of Figure <ref>. §.§ Ohmic resistivity Next, we examine Ohmic resistive disks. The middle panel of Figure <ref> shows RWI growth rates as a function of the Ohmic Elsässer number Λ at β=2. As Λ→∞, the growth rates tend to approach those of ideal MHD limit. For example, at β≈2.212 the growth rates calculated for m=2,3,4,5 in the ideal MHD limit are ω/Ω_K0≈ 1.974+0.0554i, 2.963+0.0708i, 3.956+0.0726i, 4.949+0.0647i, respectively (top panel; Figure <ref>). Correspondingly, the growth rates calculated for resistive disks at Λ=100 are ω/Ω_K0≈ 1.973+0.0528i, 2.963+0.0666i, 3.956+0.0666i, 4.949+0.0526i. As Ohmic Elsässer number decreases, the growth rates for all m models gradually increase. At Λ=10^-2, they converge towards hydrodynamic results. For instance, at β=100, the growth rates for m=2,3,4,5 in the ideal MHD limit are ω/Ω_K0≈ 1.976+0.0892i, 2.966+0.118i, 3.957+0.131i, 4.950+0.127i, respectively (top panel; Figure <ref>). The corresponding growth rates computed for the resistive disks at Λ=0.01 are ω/Ω_K0≈ 1.971+0.0881i, 2.960+0.103i, 3.948+0.130i, 4.938+0.130i, respectively. The pure B_z disk follows the similar trend as the pure B_ϕ disk. As Λ approaches infinity, the growth rates resemble those in the ideal MHD limit. Specifically, the growth rates at β≈2.212 in the ideal MHD limit are ω/Ω_K0≈ 1.973+0.0979i, 2.963+0.126i, 3.956+0.136i, 4.949+0.128i, respectively (top panel; Figure <ref>). For comparison, at β=2 and Λ=100, the RWI growth for m=2,3,4,5 are, respectively, ω/Ω_K0≈ 1.973+0.0984i, 2.963+0.126i, 3.956+0.136i, 4.948+0.128i. As the Ohmic Elsässer number decreases, growth rates of different m models all converge towards hydrodynamic results. This can be readily seen by that at β=100, the growth rates for m=2,3,4,5 in the ideal MHD limit are ω/Ω_K0≈ 1.976+0.0903i, 2.966+0.120i, 3.958+0.133i, 4.950+0.129i, respectively (top panel; Figure <ref>). The corresponding growth rates in the resistive disks at Λ=0.01 and β=2 are ω/Ω_K0≈ 1.977+0.0878i, 2.960+0.118i, 3.963+0.122i, 4.928+0.139i, respectively. Among the explored azimuthal mode numbers, only the m=2 curve rises with Λ, while the others drop. This is due to the peak pattern observed for m=3, 4, 5 in the ideal MHD limit (top panel; Figure <ref>). The middle panels of Figure <ref> and Figure <ref> show variations in growth rates with Ohmic Elsässer number Λ at m=4 for different vertical wavenumbers k_z. It is evident that the growth rates decrease steadily with the vertical wavenumber k_z. 
§.§ ambipolar diffusion Ambipolar diffusion shares the exact perturbation equations to Ohmic resistivity if k_z=0 (<ref>). Consequently, results obtained for resistivity can directly apply to it. The curves obtained for Am overlap exactly with those obtained for Λ in the middle panels of Figure <ref> and Figure <ref>. Nevertheless, for non-zero k_z the induction equations differ between resistivity and ambipolar diffusion. In the pure B_ϕ model, the ϕ-component of the induction equation shares the same terms for resistivity and ambipolar diffusion, while the other two components differ. Despite these differences, the growth rates in the ambipolar diffusion limit closely approximate those in the resistivity limit when k_z=1/10H, 1/5H (middle panel; Figure <ref>). In the pure B_z model, we suspect the modes obtained are spurious and provide a detailed explanation in Appendix <ref>. §.§ The Hall drift Unlike resistivity or ambipolar diffusion, Hall physics is sensitive to rotation/radial shear <cit.>. This is evident by reversing the sign of B in the induction equation. In the limit of an axial field and wavenumber, the Hall Elsässer number is written by Ha=v_Az^2/v_H^2, where v_H^2≡Ω B_zc/(2π en_e) is the square of the Hall velocity. It follows that v_H^2, though squared, and Ha take negative (positive) values if Ω and B_z are oriented oppositely (parallel). With more general field geometries and wavenumbers, the sign of Ha depends on (Ω·k)(B·k) <cit.>. For a fixed direction of rotation, the toroidal field reverses its sign along with vertical field. We explore positive and negative Ha in pure B_ϕ and B_z disk models. In the case of pure B_ϕ and k_z=0, the Hall drift has a similar influence as resistivity and ambipolar diffusion (bottom panel; Figure <ref>). Moreover, the results obatined for Ha and -Ha overlap. As Ha/-Ha→∞, the growth rate resembles those of ideal MHD, while as Ha/-Ha→ 0, the growth rates revive and approach hydrodynamic results. For example, at β≈ 2 for m=2,3,4,5 the growth rates in the ideal MHD limit are provided in <ref>. In the Hall drift limit at Ha=100, the corresponding growth rates are ω/Ω_K0≈ 1.973+0.0526i, 2.963+0.0675i, 3.956+0.0683i, 4.949+0.0560i, respectively. At β=100 and m=2,3,4,5, the ideal MHD growth rates are also given in <ref>. In the Hall drift limit at Ha=0.01 and β=2, the corresponding growth rates are ω/Ω_K0≈ 1.971+0.0896i, 2.960+0.121i, 3.950+0.135i, 4.949+0.133i, respectively. The bottom panel of Figure <ref> shows growth rates at k_z≠ 0. The variation of vertical wavelengths does not significantly influence the RWI growth. The sign of Ha does not yield distinct values of γ when k_z≠ 0 either. In the case of pure B_z, the Hall terms only appear in the induction equations when the wavelength is finite (k_z≠ 0). As shown in bottom panel of Figure <ref>, when vertical wave number is very small, for example k_z=1/50H, the growth rates do not vary much across Ha/-Ha, as Hall terms vanish at k_z=0. Increasing k_z generically diminish the RWI growth. Furthermore, positive and negative Ha exhibit distinct behaviors; positive Ha results in a trough at Ha≈ 0.1, whereas negative Ha results in a peak at around the same location, slightly shifted towards larger |Ha|. § SUMMARY We studied the magnetized and lightly ionized RWI by Eulerian perturbations. The framework is three-dimensional and radially global. Ideal MHD and non-ideal MHD effects, including Ohmic resistivity, ambipolar diffusion, and the Hall drift are considered. 
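To relate the dimensionless parameters scanned in these results to physical quantities, the helper below converts a plasma β and a set of diffusivities into the Alfvén speed and Elsässer numbers. It assumes the conventional normalization of the Elsässer numbers by the Keplerian angular frequency, Λ = v_A^2/(η_O Ω_K) and similarly for Ha and Am, together with the code units G M_⋆ = ρ = r_0 = 1 adopted in the text.

import numpy as np

def elsasser_numbers(beta, eta_O, eta_H, eta_A, P=1.0, rho=1.0, r=1.0, GM=1.0):
    """Map plasma beta and diffusivities to v_A and Elsasser numbers.

    Assumes beta = 8*pi*P / B^2, v_A^2 = B^2 / (4*pi*rho), and the usual
    normalization Lambda = v_A^2 / (eta_O * Omega_K), etc. (Gaussian units,
    code units GM = rho = r0 = 1 as in the text).
    """
    B2 = 8.0 * np.pi * P / beta          # squared field strength from beta
    vA2 = B2 / (4.0 * np.pi * rho)       # squared Alfven speed
    omega_K = np.sqrt(GM / r**3)         # Keplerian angular frequency
    Lam = vA2 / (eta_O * omega_K)
    Ha = vA2 / (eta_H * omega_K)
    Am = vA2 / (eta_A * omega_K)
    return np.sqrt(vA2), Lam, Ha, Am

# example: beta = 2 disk with equal diffusivities
vA, Lam, Ha, Am = elsasser_numbers(beta=2.0, eta_O=0.1, eta_H=0.1, eta_A=0.1)
print(f"v_A = {vA:.3f}, Lambda = {Lam:.2f}, Ha = {Ha:.2f}, Am = {Am:.2f}")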
Taking advantage of the barotropic fluid assumption, the perturbation equations form standard eigenvalue problems. The spectral method via dedalus excels in resolving the eigenmodes. Our results are summarized as follows. For a pure B_ϕ field: * When gas and magnetic fields are perfectly coupled, RWI growth rates increase with β. Strong magnetization tends to impede RWI. For all three non-ideal MHD effects, as Elsässer numbers approach infinity, the results resemble the ideal MHD limit; as Elsässer numbers approach zero, the results resemble the hydrodynamic limit. In the limits of ideal MHD, Ohmic resistivity, and ambipolar diffusion, non-zero vertical wavenumbers generically diminish RWI growth compared to k_z=0 limit. In Hall-dominated disks, non-zero wavenumbers do not significantly impact the results. The sign of Ha does not yield different results. For a pure B_z field: * In the ideal MHD limit, RWI growth rates can either increase or decrease with β, depending on the azimuthal mode number m. In the limit of resistivity, similar to the B_ϕ model, as Elsässer numbers approach infinity, the results resemble the ideal MHD limit. As Elsässer numbers approach zero, the results resemble the hydrodynamic limit. In the limit of ideal MHD and resistivity, similar to the B_ϕ model, vertical wavenumbers generically diminish RWI growth. Hall drift only appears when k_z≠ 0. The sign of Ha slightly complicates the growth rates. § DISCUSSION We compare the linear growth rates between RWI and MRI. We follow the framework, derivations, and analyses described in <cit.> for MRI. In most cases of it, we consider channel modes with k_x/k_z=0, a pure vertical field B_z and β_z, Keplerian rotation, and no vertical shear. We also consider azimuthal fields in the ambipolar diffusion dominated regime. We start with the simplest ideal MHD limit. Eq (29) in <cit.> shows the bi-quadratic dispersion relation, and the MRI growth rate s is found to be: 2s^2/Ω^2 = - (2k^2_zv^2_Az/Ω^2+1 ) + (16k^2_zv^2_Az/Ω^2+1 )^1/2, where v_Az=B_z/√(4πρ) is the vertical Alfvén velocity. The maximum growth rate s=3/4Ω occurs when k^2_zv^2_Az=15/16Ω^2 as expected. On the other hand, growth rates vanish when k^2_zv^2_Az=3Ω^2. In a realist protoplanetary disk, the vertical wavelength should be shorter than the pressure sclae height, or k_z>1/H. Therefore, the criteria for the existence of MRI modes is β_z>q_R^-1, where q_R=-lnΩ/ln R is the dimensionless orbital shear, and has q_R=3/2∼ O(1) for Keplerian rotation. The maximum growth can occur when β_z>32/15. Both criteria can be readily satisfied in protoplanetary disks, allowing linear MRI modes to easily surpass RWI modes. For resistivity dominated disks, the instability criterion for MRI channel modes is presented in eq (34) of <cit.>. Requiring the vertical wavelengths to be longer than the thickness of the disk yields β_z > q_R^-1(1+Λ^-2). If Λ→∞, the system is almost in the ideal MHD regime, and β_z>q_R^-1 is required for channel modes to emerge. Thus, MRI surpasses RWI as shown above. If Λ≪ 1, the system is strongly resistivity dominated, and β_z≳Λ^-2. In this regime, RWI modes resemble hydrodynamic results, and can surpass MRI. For a zero B_ϕ field, the instability criteria and physical behavior of ambipolar diffusion is very similar to the Ohmic regime. For a non-zero B_ϕ field, the MRI channel modes are diminished by small Am, for which regime ambipolar diffusion shear instability (ADSI) can emerge <cit.>. We start with the rather simple pure MRI channel modes. 
These modes exist when β_z > q_R^-1(1+Am^-2B^2/B_z^2). The presence of azimuthal fields contributes to stabilizing the MRI. In three-dimensional global numerical simulations incorporating ambipolar diffusion, a typical ratio of B_ϕ/B_z∼ O(10) has been observed <cit.>. Using this ratio, channel modes occur under conditions where β_z ≳ 10^4, 10^2, 1 for Am=0.1,1,10, respectively. Thereby, RWI may dominate over MRI for Am≲ 0.1 in a protoplanetary disk. On the other hand, ADSI becomes significant when channel modes are diminished. ADSI modes have |k_x/k_z|>0. Although these modes can always emerge for sufficiently large |k_x/k_z|, their growth rates might be rather small <cit.>. To investigate the dominance between MRI/ADSI and RWI, we compute the growth rates by the ambipolar dispersion relation shown in eq (31) of <cit.>. We utilize B_ϕ/B_z∼ O(10) and set a critical growth rate threshold at 0.01 (see middle panels of Figure <ref> and Figure <ref>). Under these conditions, no modes satisfy the growth rate criterion for Am<1, leading us to conclude that RWI surpasses MRI/ADSI in this regime. Lastly, we examine the Hall dominated regime. Positive Ha exhibits fast growth rates around ∼ O(0.75), stemming from a blend of MRI and Hall shear instability. There are three regimes associate with negative Ha. For Ha<-0.25, we are in the Hall modified and diffusive MRI regimes, for which growth rates are rather fast ∼ O(0.75). For Ha>-0.25, there is no instability possible, rendering it ideal for RWI modes to grow. § ACKNOWLEDGEMENTS CC acknowledges funding from Natural Sciences and Engineering Research Council of Canada and UK STFC grant ST/T00049X/1. AT acknowledges summer studentship from the Centre for Mathematical Sciences, University of Cambridge. CY is supported by the National SKA Program of China (grant 2022SKA0120101) and the National Natural Science Foundation of China (grants 11873103 and 12373071). MKL is supported by the National Science and Technology Council (grants 112-2112-M-001-064-, 113-2124-M-002-003-) and an Academia Sinica Career Development Award (AS-CDA-110-M06). § DATA AVAILABILITY The data underlying this article will be shared on reasonable request to the corresponding author. mnras § LINEARIZED EQUATIONS FOR K_Z≠ 0 §.§ ideal MHD The linearized continuity equation is iΔω/c_s^2δΨ - δ v_r/ r - [1/r+1/L_ρ]δ v_r - ik_ϕδ v_ϕ - ik_zδ v_z =0. The linearized momentum equations are iΔωδ v_r + 2Ωδ v_ϕ - δΨ/ r +1/4πρ[i(k_ϕ B_ϕ + k_z B_z) δ B_r-2B_ϕ/rδ B_ϕ -B_zδ B_z/ r -B_ϕδ B_ϕ/ r+B_ϕ^2/rδρ/ρ] =0, iΔωδ v_ϕ - κ^2/2Ωδ v_r - ik_ϕδΨ + 1/4πρ[B_ϕ/rδ B_r-ik_ϕ B_zδ B_z + ik_zB_zδ B_ϕ] = 0, iΔωδ v_z - ik_zδΨ + 1/4πρ[ ik_ϕ B_ϕδ B_z - ik_zB_ϕδ B_ϕ] =0, The linearized induction equations are iΔωδ B_r + i(k_ϕ B_ϕ+k_zB_z)δ v_r =0, iΔωδ B_ϕ + [ v_ϕ/ r-v_ϕ/r] δ B_r - B_ϕδ v_r/ r + ik_z[B_z δ v_ϕ-B_ϕδ v_z] = 0, iΔωδ B_z - B_z/rδ v_r - ik_ϕ B_zδ v_ϕ + ik_ϕ B_ϕδ v_z - B_zδ v_r/ r =0. 
§.§ non-ideal MHD limit: pure B_ϕ The non-ideal MHD effects manifest in the induction equations, iΔωδ B_r + ik_ϕ B_ϕδ v_r +η_O[^2δ B_r/ r^2 + 1/rδ B_r/ r-(k_ϕ^2+k_z^2)δ B_r-δ B_r/r^2-2/rik_ϕδ B_ϕ] +η_A[ -δ B_ϕ/r -δ B_ϕ/ r + ik_ϕδ B_r ]ik_ϕ +η_H[- ik_ϕδ B_z + ik_zδ B_ϕ]ik_ϕ =0, iΔωδ B_ϕ + [ v_ϕ/ r-v_ϕ/r] δ B_r - B_ϕδ v_r/ r - ik_zB_ϕδ v_z + [η_O+η_A][^2δ B_ϕ/ r^2 + 1/rδ B_ϕ/ r-(k_ϕ^2+k_z^2)δ B_ϕ-δ B_ϕ/r^2+2/rik_ϕδ B_r ] +η_H[(- ik_zδ B_r + δ B_z/ r)ik_ϕ + (- ik_ϕδ B_z + ik_zδ B_ϕ) 1/r] =0, iΔωδ B_z + ik_ϕ B_ϕδ v_z +η_O[^2δ B_z/ r^2 + 1/rδ B_z/ r-(k_ϕ^2+k_z^2)δ B_z] +η_A[-k_ϕ^2δ B_z + k_ϕ k_zδ B_ϕ] +η_H[ -δ B_ϕ/ r - δ B_ϕ/r + ik_ϕδ B_r ]ik_ϕ =0. §.§ non-ideal MHD limit: pure B_z The linearized induction equations are written as iΔω δ B_r + ik_z B_z δ v_r +η_O[^2δ B_r/ r^2 + 1/rδ B_r/ r-(k_ϕ^2+k_z^2)δ B_r-δ B_r/r^2-2/rik_ϕδ B_ϕ] +η_A[-k_z^2δ B_r - ik_zδ B_z/ r] +η_H[-ik_ϕδ B_z + ik_zδ B_ϕ]ik_z =0, iΔωδ B_ϕ + [ v_ϕ/ r-v_ϕ/r] δ B_r + ik_zB_z δ v_ϕ +η_O[^2δ B_ϕ/ r^2 + 1/rδ B_ϕ/ r-(k_ϕ^2+k_z^2)δ B_ϕ-δ B_ϕ/r^2+2/rik_ϕδ B_r ] +η_A[k_zk_ϕδ B_z - k_z^2δ B_ϕ] +η_H[- ik_zδ B_r + δ B_z/ r]ik_z =0, iΔωδ B_z +B_z[1/L_ρδ v_r-iΔω/c_s^2δΨ] +[η_O+η_A][^2δ B_z/ r^2 + 1/rδ B_z/ r-(k_ϕ^2+k_z^2)δ B_z] +η_H[-δ B_ϕ/ r- δ B_ϕ/r + ik_ϕδ B_r ]ik_z =0. § SPURIOUS MODES In a pure B_z field, the ambipolar term in z-component of the induction equation shares the same expression as the resistivity term. Unfortunately, we found that the growth rates of RWI increase indefinitely with k_z for Am>0.2. This trend persists even at a resolution of N=512 for spectral method. We suspect these modes are spurious and cannot be eliminated by high numerical resolutions. Future studies should focus on addressing this issue.
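As a standalone numerical check on the ideal-MHD comparison made in the Discussion, the short script below evaluates the channel-mode growth rate from the bi-quadratic dispersion relation quoted there and recovers the maximum s = (3/4)Ω at k_z^2 v_Az^2 = (15/16)Ω^2 and the cutoff at k_z^2 v_Az^2 = 3Ω^2. It is an illustration only and is not part of the RWI eigenvalue calculation itself.

import numpy as np

def mri_growth_rate(x):
    """Ideal-MHD channel-mode growth rate s/Omega as a function of
    x = (k_z * v_Az / Omega)^2, from 2 s^2/Omega^2 = -(2x+1) + sqrt(16x+1)."""
    s2 = 0.5 * (-(2.0 * x + 1.0) + np.sqrt(16.0 * x + 1.0))
    return np.sqrt(np.clip(s2, 0.0, None))    # keep the unstable branch only

x = np.linspace(0.0, 3.5, 2000)
s = mri_growth_rate(x)
i = s.argmax()
print(f"max s/Omega = {s[i]:.4f} at x = {x[i]:.4f}")      # ~0.75 at ~15/16
print(f"growth vanishes beyond x = {x[s > 0][-1]:.2f}")   # ~3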
http://arxiv.org/abs/2407.03169v1
20240703144249
Investigating Decoder-only Large Language Models for Speech-to-text Translation
[ "Chao-Wei Huang", "Hui Lu", "Hongyu Gong", "Hirofumi Inaguma", "Ilia Kulikov", "Ruslan Mavlyutov", "Sravya Popuri" ]
cs.CL
[ "cs.CL", "cs.SD", "eess.AS" ]
Mu- and tau-neutrino elastic scattering in Borexino Louis E. Strigari July 8, 2024 =================================================== § ABSTRACT Large language models (LLMs), known for their exceptional reasoning capabilities, generalizability, and fluency across diverse domains, present a promising avenue for enhancing speech-related tasks. In this paper, we focus on integrating decoder-only LLMs to the task of speech-to-text translation (S2TT). We propose a decoder-only architecture that enables the LLM to directly consume the encoded speech representation and generate the text translation. Additionally, we investigate the effects of different parameter-efficient fine-tuning techniques and task formulation. Our model achieves state-of-the-art performance on CoVoST 2 and FLEURS among models trained without proprietary data. We also conduct analyses to validate the design choices of our proposed model and bring insights to the integration of LLMs to S2TT.^*Work done during internship at Meta AI § INTRODUCTION The task of speech-to-text translation (S2TT) involves converting audio signals in one language into text in another, which is crucial for enabling cross-lingual communication. Traditionally, S2TT has employed a cascaded architecture with separate automatic speech recognition (ASR) and machine translation (MT) components <cit.>. Recently, the emerging end-to-end (E2E) approach, which integrates audio encoding and text decoding into a single process, has gained popularity for the benefits of error propagation mitigation and latency reduction <cit.>. While it has achieved significant performance improvement, S2TT still suffers from poor out-of-domain generalization and failure to capture nuanced details, e.g., slangs and cultural differences <cit.>. Large language models (LLMs) have emerged as powerful techniques for natural language processing (NLP) due to their excellent reasoning capabilities and generalizability. They excel at generating text for a wide range of tasks based on large-scale pre-training <cit.>, instruction fine-tuning <cit.>, and reinforcement learning from human feedback <cit.>. LLMs are also known for their fluency and diverse domain coverage, which could potentially mitigate the generalization gap for S2TT models. However, it is still under-explored as to how LLMs should be integrated to improve S2TT performance. In this paper, we aim to examine various aspects of adapting decoder-only LLMs to S2TT, including architectural design, parameter-efficient fine-tuning, and taks formulations. We propose a decoder-only architecture that directly consumes continuous speech representation instead of discretized tokens. Our proposed model achieves state-of-the-art S2TT performance without relying on large amount of proprietary data. Furthermore, we analyze design choices of each aspect of our experimental pipeline. Our contribution can be summarized as the following: * We propose a decoder-only architecture for integrating LLMs to S2TT. * Our proposed model outperforms state-of-the-art S2TT models on CoVoST 2 and FLEURS without training on proprietary data. * We conduct analyses to validate our design choices, which we hope could facilitate future research on S2TT with LLMs. § RELATED WORK §.§ Speech-to-text Translation Speech-to-text translation has seen significant progress, especially for end-to-end models. 
To solve the data scarcity issue of training end-to-end models, multiple large-scale datasets have been collected, e.g., MuST-C <cit.>, CoVoST <cit.>, Common Voice <cit.>, and VoxPopuli <cit.>. Recent studies have started to focus on multilingual S2TT, where a single end-to-end model supports multiple translation directions <cit.>. The advent of pretrained models in language <cit.> and speech <cit.> have facilitated new state-of-the-art models that leveraged the pretrain-then-finetune paradigm <cit.>. Our paper studies the integration of decoder-only LLMs to S2TT, which is still under-explored due to their new architecture and emerging capabilities. §.§ Speech and Audio LLMs With the emergence of large language models, studies have explored applying them to different modalities. LTU <cit.> fine-tuned LLMs on diverse audio datasets, thus enabling LLMs to reason given audio inputs. Furthermore, various works have explored extending the instruction-following capability of LLMs to speech and audio inputs <cit.>. While these methods make it possible for LLMs to handle a variety of speech and audio tasks, their performance on individual tasks often falls short of that achieved by specialized models. Another line of research focuses on adapting LLMs to a specific speech or audio task. Recent works have examined the integration of LLMs to automatic speech recognition, demonstrating their potential in understanding the content of speech <cit.>. Similar to our work, AudioPaLM <cit.>, Speech-LLaMA <cit.>, and SALM <cit.> aimed at leveraging LLMs to improve the state-of-the-art S2TT performance. AudioPaLM proposed to adapt LLMs to speech by discretizing speech representations and treat the discrete tokens as additional text tokens. Such method has two drawbacks, as shown in the original paper: 1) its performance is highly dependent on the quality of the speech encoder, and 2) the discretization makes fine-tuning the speech encoder hard, which requires fine-tuning the speech encoder with ASR first <cit.>. Our paper demonstrates that using continuous speech representations mitigates these issues, achieving better performance while being simpler. Speech-LLaMA and SALM both proposed briding LLMs and speech encoders with a modality adaptor and fine-tunes LLMs via LoRA <cit.>. Additionally, Speech-LLaMA introduced CTC compressor to shorten the speech input. Our paper adopts a simpler length adaptor in our architecture, and applies LNA fine-tuning <cit.> and demonstrates that it outperforms LoRA significantly. § OUR METHOD In this section, we introduce the task formulations (<ref>), the architectural designs of our model (<ref>), how the model is trained (<ref>), and parameter-efficient fine-tuning techniques (<ref>). §.§ Task Formulations The task of speech-to-text translation is to translate the source speech input S into the corresponding target translation Y = { y_1, ⋯, y_M } which is in the target language. Following prior work <cit.>, we define two formulations of our S2TT model: 1) the standard formulation where the model generates the target sequence directly f S → Y, and 2) the chained formulation where the model first generates the transcription in the source language then the translation in the target language f_chain S →{ Y_ASR, Y }, where Y_ASR denotes the transcription of the source speech. It is also common to include ASR during training as an auxiliary task, which is formulated as f_ASR S → Y_ASR. 
Therefore, we include f, f_chain, and f_ASR during training for multi-task training, and perform either f or f_chain during inference. §.§ Architecture Our model consists of a speech encoder and a text decoder, both using the Transformer architecture <cit.>. An illustration of the overall architecture is shown in Figure <ref>. Our speech encoder is based on W2v-BERT <cit.>, a self-supervised pre-trained speech encoder. For a given speech input S, we first convert the speech signal to fbank features with 80 mel banks, a context window of 25 ms, and a stride of 10 ms. The speech encoder E_s encodes the fbank features F = {F_1, ⋯, F_n} to their corresponding hidden representations E_s (F), where n denotes the sequence length of the fbank features. Speech frames are typically much more granular than text tokens. Therefore, we employ a length adapter on top of the speech encoder to reduce the length of the speech representations. The length adapter consists of a single 1-dimensional convolutional layer with a filter size and stride of k, which reduces the length of the speech representations by k-fold. The text decoder is based on LLaMA-2 <cit.>, a decoder-only large language model pre-trained on 2 trillion text tokens with a language modeling objective. The speech inputs and text inputs are encoded with their corresponding encoders, i.e., speech encoder for speech inputs and text embedding layer for text inputs. Subsequently, the encoded representations are concatenated and fed to the transformer decoder. In other words, we treat the encoded speech representations S the same as the text embeddings, without discretizing them as done in prior work <cit.>. A triangular mask is appied to the self-attention layers to restrict tokens from atteding to latter positions. More formally, given an interleaving sequence of text and speech sequences X = {X^1, F, X^2 }, where X^i = { x^i_i, ⋯, x^i_|x^i|} denotes a text sequence, X^1 denotes the prefix text, and X^2 denotes the suffix text. After encoding, the input sequence to the transformer decoder will be 𝐗 = {Emb(X^1), E_s(F), Emb(X^2) }, where Emb denotes the text embedding layer. Note that we flatten the sequences in 𝐗 before processing them with the decoder. Finally, we apply a linear transformation to the decoder outputs to obtain the logits for predicting the next token 𝐎 = W^⊤ D(𝐗), where D denotes the transformer decoder and W ∈ℝ^h × |V| is a trainable matrix where |V| denotes the vocabulary size. §.§ Training As described above, we include three formulations, i.e., f, f_chain, and f_ASR, for multi-task training. To let our model distinguish among tasks, we provide different instructions in natural language for each task t. The instructions include a description of the task, the source language, and the target language. We format the instruction I and the source speech S into the input sequence X with a template. The target sequence for training is formatted as: Y' = Translation: Y if t = f Transcription: Y_ASR if t = f_ASR Transcription: Y_ASR Translation: Y if t = f_chain. Given a source speech S, an instruction I, and the formatted target sequence Y', the training objective is to minimize the S2TT loss: ℒ(S, Y') = - 1/M'∑_i=1^M'log P(y'_i | S, I, Y'_<i) where M' denotes the length of Y' and P(y'_i | S, I, Y'_<i) denotes the probability of y'_i predicted by the model given the source speech and the prior tokens Y'_<i in the target sequence. The predicted probability is obtained by applying the softmax function to the logits 𝐎. 
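A minimal PyTorch sketch of this input construction and training objective is given below. The tiny linear stand-in for the speech encoder, the causal transformer encoder used in place of LLaMA-2, and all dimensions are illustrative assumptions; only the overall recipe (length-adapted speech features concatenated with text embeddings, a triangular mask, and a next-token loss on the formatted target) mirrors the description above.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoderOnlyS2TT(nn.Module):
    """Schematic decoder-only S2TT model: text embeddings and length-adapted
    speech features are concatenated and fed to a causal transformer."""

    def __init__(self, vocab=32000, d=512, k=4, layers=2, heads=8):
        super().__init__()
        self.embed = nn.Embedding(vocab, d)
        self.speech_proj = nn.Linear(80, d)              # stand-in speech encoder
        self.length_adapter = nn.Conv1d(d, d, kernel_size=k, stride=k)
        block = nn.TransformerEncoderLayer(d, heads, 4 * d, batch_first=True)
        self.decoder = nn.TransformerEncoder(block, layers)  # causal mask => decoder-only
        self.lm_head = nn.Linear(d, vocab)

    def forward(self, prefix_ids, fbank, target_ids):
        # encode speech (B, T, 80) -> (B, T/k, d) via the length adapter
        s = self.speech_proj(fbank).transpose(1, 2)
        s = self.length_adapter(s).transpose(1, 2)
        x = torch.cat([self.embed(prefix_ids), s, self.embed(target_ids)], dim=1)
        L = x.size(1)
        causal = torch.triu(torch.full((L, L), float("-inf")), diagonal=1)
        h = self.decoder(x, mask=causal)
        logits = self.lm_head(h)
        # next-token loss only on the formatted target span (shifted by one)
        tgt_start = prefix_ids.size(1) + s.size(1)
        pred = logits[:, tgt_start - 1 : -1]             # positions predicting target tokens
        return F.cross_entropy(pred.reshape(-1, pred.size(-1)), target_ids.reshape(-1))

model = DecoderOnlyS2TT()
loss = model(torch.randint(0, 32000, (2, 8)),    # instruction/prefix tokens
             torch.randn(2, 160, 80),            # fbank frames
             torch.randint(0, 32000, (2, 12)))   # formatted target Y'
print(loss.item())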
§.§ Parameter-efficient Fine-tuning Large language models have billions of parameters, making it computationally expensive and inefficient to fine-tune all of the parameters during training. It is common to apply parameter-efficient fine-tuning techniques when fine-tuning LLMs on downstream tasks to improve efficiency and mitigate catastrophic forgetting. To this end, we employ and compare two parameter-efficient fine-tuning techniques in this paper: LNA fine-tuning <cit.> and Low Rank Adaptation (LoRA) <cit.>. §.§.§ LNA Fine-tuning LayerNorm and Attention (LNA) fine-tuning adapts pretrained language and speech models to S2TT by fine-tuning only the layer normalization and the multi-head attention layers <cit.>. This method greatly reduces the number of trainable parameters during fine-tuning and avoids catastrophic forgetting, thus improving the downstream performance for multilingual speech-to-text translation. Since the pretrained language model we use is a decoder-only transformer model, we apply LNA fine-tuning and fine-tune only the layer normalization and the self-attention layers in the transformer decoder. §.§.§ Low Rank Adaptation (LoRA) LoRA injects trainable rank decomposition matrices into the projections layers of a transformer model, which serves as a residual path in addition to a projection layer. During fine-tuning, only the decomposition matrices are updated, while all of the pretrained parameters are frozen. Thus, the number of trainable parameters is significantly reduced. The decomposition matrices can be merged into the original projection matrix after fine-tuning. Therefore, there is no additional computation nor additional parameters compared to the pretrained transformer model during inference, making LoRA a common technique for adapting large language models efficiently. § EXPERIMENTS §.§ Experimental Setup We train and evaluate our models on publicly available datasets. For training, we use CoVoST2 <cit.>, Common Voice 11 <cit.>, and VoxPopuli <cit.> datasets. CoVoST-2 is a speech-to-text translation dataset consisting of 21 languages. The dataset includes human-labeled translation pairs from 21 languages to English (X-En), and from English to 15 languages (En-X). Common Voice is a collection of speech-text pairs where the speech was recorded by annotators given the text transcription. VoxPopuli consists of speech from the European Parliament with the corresponding transcriptions and interpretations in 15 languages. We conduct in-domain evaluation on the test sets of CoVoST 2. Additionally, we perform zero-shot evaluation on FLEURS <cit.>, a dataset that aims to evaluate the out-of-domain generalizability of speech translation models. Note that for all datasets, we only use the directions that are present in CoVoST2. We report BLEU scores from SacreBLEU and additionally the model-based COMET score with the model wmt22-comet-da <cit.>. §.§ Implementation Details We employ a pretrained W2v-BERT <cit.> model that was released in <cit.> with 600M parameters that is pretrained on 4 million hours of speech data with a self-supervised objective as the speech encoder. The text decoder is initialized with LLaMA2-7B-chat <cit.>. We implement our models, training, and evaluation procedures with the Fairseq2 library[https://github.com/facebookresearch/fairseq2]. During training, the effective batch size is set to 800K speech frames, or 8000 seconds of speech inputs. We optimize the model with the AdamW optimizer and set the learning rate to 1e-4. 
The learning rate is warmed up for 5000 steps and linearly decayed until the maximum number of steps is reached, which is set to 60000. We fine-tune all parameters of the speech encoder and apply parameter-efficient fine-tuning methods to the text decoder. All experiments are conducted on 32 NVIDIA A100 GPUs. §.§ Baseline Methods We compare our model with various state-of-the-art baselines that were trained on the same set of public datasets as our method, i.e., CoVoST 2, Common Voice, and VoxPopuli. XLS-R <cit.> is a self-supervised cross-lingual speech representation model. ComSL <cit.> conducts self-training on the Common Voice dataset. Additionally, we implement an encoder-decoder baseline with W2v-BERT as the speech encoder and NLLB <cit.> 1.3B as the text decoder. We also compare our model with models trained with proprietary data. Whisper <cit.> trains a robust speech recognition and translation model with large amounts of weak supervision. USM <cit.> is a universal speech model pretrained with 12 million hours of speech data. Speech-LLaMA <cit.> shares a similar architecture with our model and was trained with in-house data and LoRA <cit.>. AudioPaLM <cit.> is the state-of-the-art method on CoVoST 2 and is trained on proprietary data. For a fair comparison, we also include a variant of AudioPaLM that is trained on public datasets only, as reported in the paper <cit.>. §.§ Results The main results on CoVoST 2 are reported in Table <ref>. Our model achieves an average BLEU score of 37.1, which is the new state-of-the-art performance among models trained with public data only. Notably, our model outperforms the AudioPaLM variant that was trained only on public datasets, demonstrating the superiority of our proposed method. When compared to models trained with proprietary data, our model outperforms all of them and achieves performance comparable to AudioPaLM. These results demonstrate that our method integrates LLMs into S2TT efficiently and effectively. § DISCUSSION In this section, we conduct various experiments to analyze and discuss the details of our proposed method. §.§ Architectural Design With decoder-only LLMs, it is unclear which architecture performs best for S2TT. We compare our decoder-only architecture with encoder-decoder models, with NLLB <cit.> and LLaMA-2 <cit.> as the text decoder. As shown in Table <ref>, our model significantly outperforms the encoder-decoder counterparts on both CoVoST 2 and FLEURS. Furthermore, the encoder-decoder model with LLaMA-2 even underperforms the one with NLLB, demonstrating that encoder-decoder architectures are unsuitable for decoder-only LLMs. We hypothesize that it is the newly introduced encoder-decoder attention layers, which are not pretrained, that degrade the performance of encoder-decoder models. §.§ Parameter-efficient Fine-tuning We compare LNA fine-tuning, LoRA, and the effect of freezing pretrained models. As shown in Table <ref>, LNA fine-tuning significantly outperforms LoRA with various configurations. This result suggests that adopting LoRA, as done in prior work such as Speech-LLaMA <cit.>, is suboptimal for S2TT. Freezing the text decoder during fine-tuning yields even worse performance than LoRA, demonstrating the importance of fine-tuning the text decoder. Finally, freezing the speech encoder results in detrimental performance degradation. This result shows that fine-tuning the speech encoder is crucial for aligning the speech representations with the text inputs.
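As an illustration of how the LNA configuration compared above can be realized in code, the sketch below freezes a decoder and re-enables gradients only for LayerNorm and self-attention parameters. The substring-based name matching assumes LLaMA-style module names, and the dummy block is a placeholder; this is a simplified stand-in, not the exact implementation used in the paper.

```python
import torch.nn as nn

def mark_lna_trainable(decoder: nn.Module) -> int:
    """LNA fine-tuning: train only LayerNorm and self-attention parameters,
    keeping everything else frozen. Returns the number of trainable parameters."""
    for p in decoder.parameters():
        p.requires_grad = False
    trainable = 0
    for name, p in decoder.named_parameters():
        if "layernorm" in name.lower() or "self_attn" in name:
            p.requires_grad = True
            trainable += p.numel()
    return trainable

# Minimal demonstration with a dummy decoder block using similar module names.
class DummyBlock(nn.Module):
    def __init__(self, d: int = 16):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d, 2)
        self.input_layernorm = nn.LayerNorm(d)
        self.mlp = nn.Linear(d, 4 * d)

    def forward(self, x):
        return self.mlp(self.input_layernorm(x))

decoder = nn.Sequential(DummyBlock(), DummyBlock())
print(mark_lna_trainable(decoder), "trainable parameters")
```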
We hypothesize that this importance of fine-tuning the speech encoder also explains the underperformance of AudioPaLM with encoders that are not fine-tuned with ASR <cit.>, since the discretization of speech representations makes fine-tuning the speech encoder non-trivial. §.§ Ablation of Formulations Table <ref> shows the results of various combinations of the formulations. Removing either f_ASR or f_chain degrades the S2TT performance. Notably, training with f and f_ASR slightly underperforms training with f alone, showing that multi-task training with ASR does not always improve performance. § CONCLUSION In this paper, we propose a decoder-only architecture that adapts a decoder-only LLM to the speech-to-text translation task. Our proposed method is simple and effective, achieving state-of-the-art performance and performing comparably to the best-performing proprietary model. We conduct additional analyses to examine the effect of different design choices regarding architectural design, parameter-efficient fine-tuning, and task formulations. We hope that our findings will facilitate future work on leveraging LLMs for the S2TT task.
http://arxiv.org/abs/2407.02968v1
20240703100448
Unified Anomaly Detection methods on Edge Device using Knowledge Distillation and Quantization
[ "Sushovan Jena", "Arya Pulkit", "Kajal Singh", "Anoushka Banerjee", "Sharad Joshi", "Ananth Ganesh", "Dinesh Singh", "Arnav Bhavsar" ]
cs.CV
[ "cs.CV", "cs.AI", "cs.CC", "cs.ET", "68T07", "I.2.10" ]
Sushovan Jena (sushovanjena@gmail.com, corresponding author)^1, Arya Pulkit (aryapulkit007@gmail.com)^1, Kajal Singh (kajalsinghbainsla@gmail.com)^1, Anoushka Banerjee (anoushka.banerjee@hitachi.co.in)^2, Sharad Joshi (sharad.joshi@hitachi.co.in)^2, Ananth Ganesh (ananth.ganesh@hitachi.co.in)^2, Dinesh Singh (dineshsingh@iitmandi.ac.in)^1, Arnav Bhavsar (arnav@iitmandi.ac.in)^1
^1 School of Computing and Electrical Engineering, Indian Institute of Technology Mandi, Himachal Pradesh 175005, India
^2 R&D Center, Hitachi India Pvt. Ltd., Bengaluru, Karnataka 560055, India
§ ABSTRACT With the rapid advances in deep learning and smart manufacturing in Industry 4.0, there is an imperative for high-throughput, high-performance, and fully integrated visual inspection systems. Most anomaly detection approaches using defect detection datasets, such as MVTec AD, employ one-class models that require fitting separate models for each class. On the contrary, unified models eliminate the need for fitting separate models for each class and significantly reduce cost and memory requirements. Thus, in this work, we experiment with a unified multi-class setup. Our experimental study shows that multi-class models perform at par with one-class models for the standard MVTec AD dataset. Hence, this indicates that there may not be a need to learn separate object/class-wise models when the object classes are significantly different from each other, as is the case for the dataset considered. Furthermore, we have deployed three different unified lightweight architectures on the CPU and an edge device (NVIDIA Jetson Xavier NX). We analyze the quantized multi-class anomaly detection models in terms of latency and memory requirements for deployment on the edge device, while comparing quantization-aware training (QAT) and post-training quantization (PTQ) for performance at different precision widths. In addition, we explore two different methods of calibration required in post-training scenarios and show that one of them performs notably better, highlighting its importance for unsupervised tasks. The performance drop due to quantization in PTQ is further compensated by QAT, which yields performance on par with the original 32-bit floating point in two of the models considered. Anomaly detection, multi-class models, post-training quantization (PTQ), quantization-aware training (QAT), precision width, latency. § ACKNOWLEDGEMENTS This work is supported by Hitachi India Pvt. Ltd. § INTRODUCTION Anomaly detection (AD), also known as outlier detection, focuses on identifying data instances that deviate significantly from the established patterns of normal behaviour. In this context, these unusual instances are referred to as anomalies, while the data points adhering to the expected patterns are considered normal <cit.>. In computer vision applications, anomaly detection plays a critical role in identifying and flagging anomalous images, and one of the most promising use cases is automating the visual inspection of manufactured goods. While supervised techniques approach the anomaly detection/segmentation problem as imbalanced binary classification or segmentation tasks, they necessitate a meticulously labelled dataset encompassing both normal and anomalous images to facilitate training.
In the manufacturing industry, optical inspection tasks often lack sufficient defective samples to facilitate supervised training due to high precision standards maintained for manufacturing. Moreover, the variations in the morphology of defects are relatively ambiguous, leading to an indeterminate distribution. As a result, unsupervised or weakly supervised methods rely solely on learning from defect-free images. On the other hand, unsupervised precise segmentation of pixels, targeting regions that exhibit abnormal or novel characteristics, presents a crucial and formidable challenge in numerous computer vision domains. There have been various works reported on the popular MVTec AD dataset <cit.> for unsupervised anomaly detection tasks. However, most of the existing state-of-the-art models on anomaly segmentation on MVTec AD are one-class (OC) models, where the model is trained on a particular class of object or texture and tested on the same class. This approach is way behind the current trend of multi-modal models and also incur significant cost of deployment where the model count increases with class-count. The OC models are also vulnerable to small variations inside a class as the features are highly biased towards a small domain. So, we focus on unified multi-class models which can work across large variety of objects with constraints of memory and latency. Based on performance and model size, we selected three SOTA methods, namely uninformed students (US) <cit.>, reverse distillation (RD) <cit.>, and STFPM <cit.> for 15-Class generalized training and tested the models class-wise. As our primary goal is to deploy the models on an edge device, we explore various quantization techniques from popular frameworks such as PyTorch (Torch) and TensorRT (TRT). We have compared the performance of Torch and TRT's post-training quantization (PTQ) in 8-bit Integer (INT-8) precision in terms of performance drop and latency. In PTQ, the weights and activations are statically quantized during inference time, due to which the local minima of the converged weights with respect to the error is no more the same. This introduces a quantization error, which is responsible for a drop in performance, although with a considerable reduction in model size and latency. Here, as data calibration is a recommended part of PTQ in almost every framework, we explore two distinct ways of performing the same (training data and random normal data calibration) with marked improvement in the latter. To compensate for the performance drop in PTQ, we also employ quantization-aware training (QAT) for fine-tuning the models, which simulated a quantization error during training, resulting in improved performance compared to post-training. Our major contributions in this work are as follows: * We experiment with generalized multi-class training of some considerably light weight methods, compare them with their one-class model performance, and suggest the generalizability of such models, which falls under a different bracket of Anomaly Segmentation methods, i.e. Unified multi-class models. * The selection of the methods (Knowledge-Distillation) is strictly done from the perspective of deployment on either a CPU or an edge device and achieving real-time inference. We believe that this study would be of good significance to the community working on Unsupervised generalisation and Anomaly Detection on low-resource devices. 
* More specifically, we provide experimental results for both memory footprints and latency as we are targeting resource-constrained environments, and we discuss that these are related not just to the network complexity but also on the Anomaly Scoring mechanism followed in respective methods. * We compare the performance of the multi-class models leveraging Quantization schemes on Intel Xeon CPU and Nvidia Jetson Xavier NX in terms of AUROC and inference time, which is important for practical consideration. * We analyze the PTQ performance in Torch with two different calibration strategies required for quantization, i.e., calibration using training data and random normal data, which result in a substantial gain in performance. From the deployment perspective, we demonstrate the PTQ performance with a normally distributed data calibration at different quantization precisions (16-bit Floating point (FP16), INT-8) using TRT on NVIDIA Jetson Xavier NX. * We leverage QAT in Torch and compare its results with PTQ (with two calibrations) and show that the performance of QAT (INT-8) is close to that of 32-bit Floating point (FP-32). * Finally, we are able to demonstrate that in some cases, even heavily quantized models do not result in a significant reduction in anomaly detection performance, which is an important practically useful revelation for this application. § RELATED WORK In this section, we discuss the major deep learning-based research works for anomaly detection and related approaches concerning deployment on edge devices. §.§ Deep learning frameworks for anomaly detection Some works on one-class detection include generative models like autoencoders [<cit.>, <cit.>] and GANs <cit.>. It is pertinent to highlight that these methods may sometimes yield unsatisfactory outcomes in terms of anomaly detection efficacy, largely attributed to simple per-pixel comparisons or imperfect reconstruction processes. Seminal research endeavors involving memory modules include MemSeg <cit.>, which uses simulated abnormal samples and memory information in the training phase. Some efforts effectively manage data with a high-dimensional attribute space, such as DeepSVDD <cit.> and PatchSVDD <cit.>. Although some recent models show promising results on multi-class anomaly detection, they either perform less in terms of AUROC or the network architecture is far more complex and computationally expensive, which does not make them suitable for edge device deployment and achieve considerable latency [<cit.>,<cit.>,<cit.>]. UniAD, <cit.> achieves AUROC of 96.5 on multi-class paradigm compared with multi-class experiments on existing OC models. The network consists of a neighbor masked encoder consisting of masked attention and fully connected layers and a layer-wise query decoder with a feature jittering strategy. Even if its performance is better than our best performing 15-class models by 1.5-3 %, it is more computationally expensive due to attention <cit.> layers. Another method, One-for-all <cit.> shows performance of 0.95 , which is very close to the 15-class model of RD. However, its architecture has vision transformers (ViT) as encoder and decoder, with proposal masking and coreset subsampling. Presence of transformer and coreset would have significantly high inference time than the models that we have experimented, they explored generative-based approach and used latent diffusion model <cit.> with feature editing for reconstruction. 
They have also shown results on multi-class and achieved mean-auroc of 98.5. Here, the use of U-net <cit.> with a diffusion model, which makes the approach costly on latency. §.§ Approaches involving edge device deployment To the best of our knowledge, we did not find existing works related to deployment of unsupervised anomaly detection models (trained on MVTec AD dataset) on edge devices. However, if we consider some other edge-device deployment cases <cit.>, which shows results of fabric defect detection on very efficient architectures like SSD and EfficientNet on Jetson TX2, where most of the models perform lower or close to our results. However, an important distinction in this case is that the datasets and models used are for supervised setting, unlike those considered in this work. Another work shows performance analysis of YOLOv3 on Jetson Xavier NX using Torch, TRT, and TensorFlow frameworks <cit.>. Nonetheless, there was no comparison of quantization performance between frameworks and the results are only on one precision of quantization, i.e. FP-16 on only one model. Pioneering investigations like a study conducted by Krishnamoorthi <cit.> present an overview of techniques for quantizing different CNN architectures like MobileNets and ResNets (across versions) with integer weights and activations, including post-training and quantization-aware training approaches in TensorFlow. They benchmarked latencies of quantized networks on CPUs and Qualcomm DSPs, contrary to our examination focused on unsupervised methods (especially knowledge distillation) on anomaly detection. § METHODOLOGY We shortlist three unsupervised anomaly detection approaches based on their performance, model sizes, and deployability. The goal is to analyze the generalization behaviour of the models and their deployment using two quantization techniques, i.e., PTQ and QAT. The discussion is brief and the reader is encouraged to refer the original papers and the implementation references for more details. §.§ Uninformed Students Bergmann et al. <cit.> proposes a student-teacher framework, for pixel-precise anomaly segmentation. The Knowledge Distillation first happens from a larger network like ResNet to a smaller network, a 5-layer convolutional neural network (CNN), which is the teacher. The student networks are then trained to regress upon the teacher's output as a target on the MVTec-AD dataset and so the knowledge gets distilled from teacher to students. In this process, the teacher and students' embeddings gets very close in the embedding space for normal (or non-anomalous) pixels. The anomaly score is the error between the mean predictions of the students' ensemble and the teacher's prediction. The intuition behind the anomaly score is that within anomalous regions during inference, the students' networks are expected to significantly differ from the teacher's output due to the absence of corresponding descriptors during training. This indicates the failure of student networks to generalize outside the non-anomalous data distribution. The score also considers the predictive variance of the Gaussian mixture of students' from their mean. §.§ Anomaly detection via reverse distillation Reverse distillation (RD) involves passing input through the teacher (encoder) network, a bottleneck network, and then through the student (decoder) network. The teacher (encoder) is responsible for extracting highlevel features from the input image. 
The bottleneck network plays a role in connecting the encoded features from the teacher network to the student network's decoder. The decoder processes the encoded features and aims to reconstruct the input image. So, the Knowledge Distillation here, happens from the Encoder to Decoder by matching the intermediate feature maps of both networks. But as the distillation happens from an encoder to decoder in the process of reconstruction of the inputs, so its termed as reverse distillation. Anomalies are detected based on the deviations of the reconstructed output from the student and the input image. Cosine similarity is used as the knowledge distillation (KD) loss for transferring knowledge between the teacher and student networks across multiple scales and layers. §.§ Student-Teacher Feature Pyramid Matching (STFPM) Following US <cit.> method, this method is an improvised framework where the multi-scale feature matching strategy is integrated to enhance anomaly detection performance. Here, the Knowledge Distillation happens from a pretrained ResNet-18 Teacher to a student ResNet-18 as we train the student to match the feature maps of Teacher network on MVTec-AD. The enhancement involves introducing hierarchical feature matching, which enables the student network to receive knowledge from multiple levels of the feature pyramid. Unlike the method of US, instead of distilling knowledge at multiple levels, the distillation happens only once, and the T-S networks are larger, i.e., ResNet-18. The strategy is to integrate both low-level and highlevel features in a complementary way to enhance anomaly detection at various sizes of anomalies. §.§ Quantization We now discuss the two quantization paradigms that we incorporated in this work, which contribute towards the practical deployment of the models on the edge device and towards model compression. §.§.§ Post-Training Quantization (PTQ) and Calibration In PTQ, weights, and activations are quantized to INT8 from FP-32. It follows a calibration process requiring representative input data to collect statistics for each activation tensor. It records the running histogram of tensor values and min / max values. Then, it searches the distribution in the histogram for optimal min/max values and scale factor, which would be used to perform quantization. The search for the min / max values and scale factor ensures the minimization of the quantization error with respect to the floating-point model. The data used for calibration should represent the range of values that the model would encounter during training or test phase. In an unsupervised setting, the test data contains very different images than the data used to train, and so it is difficult for the model to get a good scale during calibration. Hence, a random normal distribution is an optimal way to capture a generalized variance and, hence, the scale. The quantization itself is a process that maps a floating-point value x ∈[α, β] to a b-bit integer x_q∈[α_q, β_q], as x_q= round ((1 / s) ·x+z), where s is the scale-factor and z is the zero-point. More details about quantization can be found at <cit.>, with specifics for TRT and Torch at <cit.> and <cit.> respectively. §.§.§ Quantization-Aware Training (QAT) QAT enables the model to finetune and achieve better quantization-aware weights, which when quantized, should try to preserve original performance. 
The framework introduces fake-quantization modules (i.e., quantize and dequantize operations) into the model architecture at the places where quantization happens during the conversion from the floating-point model to the quantized integer model, in order to simulate the effects of clamping and rounding brought by integer quantization. The fake-quantization modules also monitor the scales and zero points of the weights and activations. Once QAT is finished, the floating-point model can be converted to a quantized integer model immediately using the information stored in the fake-quantization modules. During training, the rounding error keeps accumulating across samples, and as the overall loss is minimized, the rounding error also gets minimized. As a result, we obtain weights corresponding to a minimum which, when quantized, typically preserve the performance of the model. Thus, as the weight updating process simulates the quantization error, the weights converge to a minimum close to that of the floating-point case. § EXPERIMENTAL RESULTS AND ANALYSIS Here, we discuss the various experiments and results. First, considering our requirement of a unified multi-class model for all classes, we trained the three shortlisted methods with combined data of all classes to assess their generalization capabilities. We refer to models trained on a particular class and then tested only on that class (as is done in existing works) as one-class (OC) models. Hence, we have two different models, i.e., multi-class (15-class) and OC, for each of the three methods, i.e., US <cit.>, RD <cit.>, and STFPM <cit.> (Section <ref>). Secondly, for the case of deployment on Nvidia Jetson Xavier NX (Jetson), we assessed the performance and latency of the non-quantized (FP-32) models on the CPU and on Jetson, which gives us a practical understanding of the speed-up on the Jetson device (Section <ref>). Third, to achieve better latency and smaller model size using PTQ, and for deployment on the Jetson device, we considered two well-established frameworks, i.e., PyTorch (Torch) and TensorRT (TRT). As part of PTQ, we explored two modes of post-training calibration (Section <ref>). Adhering to the best calibration method, we worked with FP-16 and INT-8 quantization on TRT (Section <ref>). Finally, we note that the performance of INT-8 quantization drops especially for PTQ. To overcome this, we further use quantization-aware training (QAT) and demonstrate the significant improvements of QAT over PTQ (Section <ref>). §.§ Experimental Settings §.§.§ Nvidia Jetson Xavier NX and Intel Xeon CPU Jetson Xavier NX is an edge-computing platform from NVIDIA designed for autonomous machines and intelligent edge devices. It is built around the Xavier SoC (system-on-chip), which combines a high-performance CPU, GPU, and dedicated AI acceleration engines into a single chip. The device is built on a 6-core NVIDIA Carmel Arm 64-bit CPU and a 384-core NVIDIA Volta GPU microarchitecture. It is an advanced model of the Jetson family; the Xavier NX delivers a peak performance of 21 TOPS. Our experiments utilized the 16 GB RAM and 30 W power mode variant. The CPU results are obtained on an Intel Xeon W-2265 with a 3.56 GHz base frequency, 12 cores, and 2 threads per core. It is equipped with 64 GB of DDR4 2933 RAM. §.§.§ Multi-class (or 15-class) Training We followed the official implementation for RD at <cit.> and for STFPM at <cit.>. For US, we consider the implementation at <cit.>.
For the 15-class training of the mentioned models, we pass data of all classes in batches after shuffling to avoid bias or catastrophic forgetting. For US, we train the teacher on 15 classes. The batch size and hyper-parameter settings for each method is mentioned in Table <ref>. All the implementations are in Torch. §.§.§ Quantization Implementation For PTQ and QAT of all the models, we are only quantizing the student network using default settings of Torch quantization on FBGEMM (Facebook GEneral Matrix Multiplication) backend while the teacher part of the network remains in FP-32. It is because for RD and STFPM, the teacher uses pretrained weights and only student gets trained, so quantizing only the trainable part allows us to implement QAT on that and it is also evident that this design resulted in 37 % to 61 % reduction in model size (across all models) and hence latency. For US, we quantized all the three student networks. Similarly, for STFPM, only the student network was quantized. In case of RD, we quantize the bottleneck and decoder (student) networks for the same, but during implementation we found that "torch.nn.ConvTranspose2D" module used in the decoder part of RD, is not supported for quantization in FBGEMM (more details are mentioned in <cit.>). So, we kept that part of decoder in FP-32 and the rest parameters are quantized to INT-8. §.§ Comparison of One-Class model and Multi-Class model (only on FP-32) Table <ref> shows the performance comparison of OC and 15-class models for all the methods, averaged over all classes. Fig.<ref> shows the class-wise performance. Fig.<ref> also depicts some qualitative results on images, where the anomaly detection heat maps are shown. Based on this, we can note the following: * Table <ref> shows the generalization capability of different methods. It can also be inferred from Fig.<ref> that the classwise performance of OC and 15-Class models are nearly equal (and high) for most of the classes for RD and STFPM, with US being an exception, where the performance fluctuates among some classes. Overall, the average AUROC is very similar between the OC and the 15-class case. * It is evident that RD and STFPM, which yield high results in the OC case, are also able to generalize very well under the multi-class setup. This can be due to the presence of a larger architecture like WideResNet-50 in RD and ResNet-18 in STFPM as compared to a 5-layer architecture in US <cit.>. Interestingly, in the case of US, the generalized results are in fact somewhat better than the OC case, but the absolute AUROC values are not as high as the other two methods, and it is also not consistent across classes. Hence, the RD and STFPM results may be considered more stable and reliable for generalization. * Also, the matching of intermediate feature maps during training of STFPM and a similar approach of multi-scale feature-based distillation followed in RD, are actually able to capture the different scales of anomalies across different classes of objects/textures better. STFPM and RD approaches have leveraged combining information from different intermediate layers of the network. It is observed from Fig.<ref> that RD and STFPM show less class-wise variation in accuracy (measured in AUROC) in comparison to US, thus generalizes better across classes. * RD and STFPM perform very similarly, both for the OC as well as for the 15-class cases. 
However, STFPM also shows a high AUROC, with a significant improvement in latency (less inference time) than the former (Table <ref>). The low latency of STFPM can be attributed to its 18-layer ResNet than a 50-layer WideResNet in RD. Also, the presence of a Bottleneck in RD, used to project the teacher model's high-dimensional representation into a low-dimensional space, to be passed to the student decoder, should also be adding more to the inference time. From Fig.<ref> it can be noticed that 15-class models focus on the defects with higher activation values. * From the qualitative perspective, it is observed in Fig.<ref> that the small differences in the AUROC are due to the local variation of the detected anomaly regions and not due to significant changes (e.g., false positives elsewhere). This is encouraging, as in real-world defect detection, the performance of generalized models, which are marginally lower than OC models, would not be of significant concern. This is because the lower performance is due to pixellevel errors at a local level, which are negligible, as the overall defect localization is still correct. Thus, the generalized models are able to localize the defective part as well as the OC models. Note that in this dataset, the object appearance is quite distinct across different classes. Hence, the feature distributions of one object class are likely to be different from others. In such a case, in hindsight, it is not surprising that the anomalies, which are deviations of features from normality, will not overlap with features of other object classes, which are altogether different. This shows that in such cases, generalized models can be considered quite reliable, and there is no need for having separate models for each class, which is also validated via the experiments. Hence, in the next subsections, we only show the results for 15-class models. §.§ Comparative analysis of 15-class/multi-class FP-32 models on CPU and Jetson As we proceed toward the device deployment of these methods, we now show the comparison of the Torch FP32 model between the CPU and the Jetson device in Table <ref>. Thus, the framework is the same (Torch) and the devices are different (CPU vs Jetson). We observe and infer the following from this: * While the drop in latency is expected on the Jetson device, the order of decrease is a significant 5 to 13 times across different models. Even if we only consider the best performing models (RD and STFPM), the reduction is 5 to 7 times without any loss in AUROC. It is because of the presence of a 256-core GPU in Jetson. This comparison is intended to show real-time deployment use cases in a commonly used CPU and low-powered edge GPU. * If we observe the model size and inference time across the models, an interesting observation is that even if US model is the lightest of all, it takes the highest time. This is due to the presence of a local feature extraction approach (fast dense feature extraction) <cit.>, where a patch is extracted for every pixel of the whole image at once using pooling and striding layers. * STFPM performs best in latency and AUROC while having the lowest model size. It has both the teacher and student as ResNet-18, where the anomaly scoring is done by taking a squared difference of the intermediate feature maps, specifically 4th, 5th and 6th layers, which have 64, 128 and 256 channels respectively. Before the squared difference, each layer is normalized across the channel dimension. 
This process makes the scoring process more efficient than others. As the other two methods considered, generate pixel level dimensions without going for patches, their inference is significantly accelerated. The slower performance also sheds light on the mechanism of anomaly scoring of a model having a contribution in the latency as that is different in all the three methods. Another feature adding to the time is the presence of an ensemble of three student networks along with a teacher. In US method, the anomaly scores are calculated by taking the regression error between teacher's embedding and the ensemble-mean of three students' embedding. In total, four networks (one teacher + three students) are involved during inference. It also involves a predictive variance computation where the variance of the 3 students is considered from their mean, which adds to the time. §.§ Performance of Post-Training Quantization (PTQ) on PyTorch with different calibration strategies To reduce the latency and memory footprint, we implemented PTQ in Torch. Typically, post-training requires a calibration process to capture the dynamic range of activations when calibrated on training data. Hence, random data calibration almost results in similar statistics. During calibration, the scale-factor and zero-point is calculated while mapping from 32-bit to 8-bit (which is expected to reduce some performance over the FP-32 case). We have experimented with the recommended way of calibration on training data and explored another way of calibrating on a random normal distribution. Some discussions regarding this are stated below: * Although training data calibration is most common but in the case of unsupervised datasets like MVTec-AD, where the training data only consists of normal (or nonanomalous) images and test data contains both normal and anomalous images, only training data-based calibration may not consider the range of activations for anomalous images. So, we have devised another approach of calibrating on a randomly generated normal distribution, which is expected to simulate a more general subset so that the dynamic range of activations can better approximate for normal and anomalous pixels. * It can be concluded in Table <ref>, that random normal data calibration has resulted in a significant boost in performance of 8 % and 15 % for STFPM and RD over calibration with training data, which is due to the above stated reason. For US, there is no improvement, where the range of activations might already have been good on training data only, which may be because of the ensemble of students already introducing some variance. §.§ Performance comparison of different Quantization precisions using TensorRT on Nvidia Jetson NX We next show the results on the Jetson device but with different precisions of quantization (Table <ref>). Culminating from the experimentation of two calibration strategies on Torch (in Section <ref>), we opted for the same random normal data calibration for post-training quantization on TRT. The revelation also equips us with the computational benefit of not having to calibrate on the entire training data, which is not suitable for an edge device considering its memory and speed constraints. TRT is the recommended SDK for high performance deep learning inference on Jetson NX. We have leveraged its capabilities on the same. 
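For reference, the random-normal calibration flow for PyTorch PTQ discussed in the previous subsection can be sketched as follows using the eager-mode quantization API. The placeholder student module, input resolution, and number of calibration batches are illustrative assumptions rather than the exact configuration used in our experiments.

```python
import torch
import torch.nn as nn
from torch.ao.quantization import QuantStub, DeQuantStub, get_default_qconfig, prepare, convert

class QuantizableStudent(nn.Module):
    """Placeholder for the student networks described above."""
    def __init__(self):
        super().__init__()
        self.quant, self.dequant = QuantStub(), DeQuantStub()
        self.body = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())

    def forward(self, x):
        return self.dequant(self.body(self.quant(x)))

def ptq_int8_random_calibration(model: nn.Module, num_batches: int = 32) -> nn.Module:
    """Post-training INT-8 quantization where the observers collect activation
    statistics on randomly generated normal data instead of the training set."""
    model.eval()
    model.qconfig = get_default_qconfig("fbgemm")
    prepared = prepare(model)
    with torch.no_grad():
        for _ in range(num_batches):
            prepared(torch.randn(8, 3, 256, 256))  # N(0, 1) calibration batch
    return convert(prepared)

student_int8 = ptq_int8_random_calibration(QuantizableStudent())
```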
The discussions on Table <ref> and figures are as follows: * We note that there is a reasonably good reduction of model size for the FP-16, which further reduces for the INT8 case over the FP-32 case. As FP-16 uses half the bits compared to 32-bits for single precision, it lowers the memory usage and leads to faster inference and data-transfers. FP-16 precision is only experimented on TRT on Jetson and not on Torch as the inference time for TRT FP-32 was already 510 times lower on edge device than CPU. * On the same lines, the inference time reduces significantly over the FP-32 case, especially when the FP-32 time is large (26 times and 73 times in US and RD cases), while for STFPM the FP-32 inference is itself fast, which is further increased on Jetson. However, the time difference is small between INT-8 and FP-16 versions. * Despite the reduction in memory size and inference time, it is interesting to note that the mean AUROC for FP16 is not too low as compared to FP-32 model. Moreover, for the RD and especially for STFPM, even for INT-8, a high performance is maintained. * As STFPM proves to be the optimal model, we consider analysing its visualizations on Jetson. Scrutinizing its anomaly maps in Fig.<ref>, it is indicative that the localisation of anomalous pixels in INT-8 is almost identical to that of FP-16, which consequently signifies that the slight decrease in AUROC does not affect the comprehensive anomaly detection efficacy. * For the purpose of comparison of PTQ INT8 between frameworks (Torch and TRT) between Table <ref> and <ref>, Mean AUROC serves as the primary parameter and so the distinction in device (CPU or Jetson) does not affect the AUROC. It can be clearly observed that performance (AUROC) of RD and STFPM (the two superior models) are better in the TRT case with 0.07 to 0.09 relative difference than the Torch counterparts. §.§ Difference in PTQ of PyTorch and TensorRT The significant difference in AUROC performance between PTQ of Torch and TRT (both Random Normal Data calibrated) throws light on the effectiveness of the methodology followed in the two frameworks. Below, we summarize the key differences in PTQ methodology followed in Torch vs TRT frameworks: * During the process of calibration, where we capture the dynamic range of values for weights and activations of the network on a subset of training data. The values are observed in a Histogram where we get a minimum and maximum boundary. We also calculate the scale factor which is required for conversion from FP32 to INT-8. In this process, we select the optimal threshold (min. and max.) on FP32 range to map them to INT8 range. In case of TensorRT, this is done by generating many quantized distributions with different thresholds and selecting that threshold (or corresponding distribution) which minimizes the Kullback-Leibler (KL) divergence between two distributions (FP32 and INT8). As the conversion is just a reencoding of information between two models, KL-divergence (or relative entropy) measures the loss in information between the distributions. After calculation of optimal threshold and hence scale-factor, the values are quantized. * Similar process is followed in PyTorch to calculate the min. and max. values by generating a number of quantized distributions for different min / max values but the error is calculated using L2 (Euclidean) Error between the FP32 distribution and quantized INT8 distribution. 
It involves determining the distances of each bin's content in the Histogram from the corresponding position in the two distributions. The search terminates when the optimal min/max values are found within a specified tolerance or after a maximum number of iterations. As the AUROC of PTQ with TensorRT is better in our experiments, this gives us an insight that minimizing the KL-divergence loss for calibration has worked better in the category of models and data considered in this study. §.§ Performance analysis of QAT and PTQ As opposed to PTQ, which does not involve training, there is another quantization paradigm termed as quantization aware training (QAT). As QAT involves training during the quantization process, this may imply that the performance of QAT is likely to be better than PTQ. Hence, we also experiment with QAT which reveals some interesting results given in Table <ref>, and discussed below: * It is clearly observed that the performance of QAT is significantly better than PTQ for two models. AUROC of non-quantized RD model and QAT model remains the same while for STFPM also, there is a drop of only 2 %. * In PTQ, we place observers around the weights and activations and perform a calibration process, where the training data is passed once through the model. In this process, the observers capture the dynamic range of the weights and activations, which is required to calculate the scale-factor and zero-point. Despite the calibration process, as the weights are quantized after the training, a quantization error is introduced in the model's prediction, resulting in loss of performance. * As discussed in Section <ref>, in QAT, we load the already trained model weights and introduce fake-quantize modules, where float values are rounded to mimic INT-8 but all computations are still done in floating-point. We then trained it for a few epochs, where the usual way of minimizing the training loss is implemented. As there is a simulated quantization error in the overall loss of the model, the same gets minimized during fine-tuning for a few epochs and we have quantize-aware weights. RD has at least four times higher latency than STFPM post QAT quantization, and only 0.04 higher AUROC point performance. Thus, STFPM can also be used where latency is critical. * Here, we observe that QAT clearly exhibits enhanced performance than PTQ for two methods, although the random normal data calibration method performs quite better than training data calibration. However, QAT, even for the INT-8 quantization demonstrates superior performance, which is in fact, close to the original FP-32 performance in the case of RD and STFPM. * We note that for PTQ case, although the random calibration AUROC is good for RD and STFPM, there is still gap between FP-32 and quantized models, which is interestingly overcome in TRT for Jeston use case. Contrastingly, for QAT even for CPU deployment, such a gap does not exist as the top performing models (STFPM and RD) after quantization, yield results close to FP-32, obviating the need for edge device demonstration. §.§ Overall comparative analysis of FP32, PTQ and QAT Finally, for a comprehensive assessment of different frameworks, precisions, we include most of the important findings from the above tables into a single one (Table <ref>). Presently, PyTorch officially does not support Quantized model inference on CUDA (NVIDIA drivers). Hence, it is not possible to deploy PTQ and QAT models on NVIDIA Jetson. The same reason is behind showing performance on Intel CPU. 
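For completeness, the QAT procedure analyzed above, inserting fake-quantization modules and briefly fine-tuning before conversion, can be sketched as follows. The tiny student, the synthetic loader, and the squared-error loss are simplified placeholders for the method-specific distillation objectives, not the configuration used in our experiments.

```python
import torch
import torch.nn as nn
from torch.ao.quantization import QuantStub, DeQuantStub, get_default_qat_qconfig, prepare_qat, convert

class TinyStudent(nn.Module):
    """Placeholder for the (partially) quantized student networks."""
    def __init__(self):
        super().__init__()
        self.quant, self.dequant = QuantStub(), DeQuantStub()
        self.body = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(8, 8, 3, padding=1))

    def forward(self, x):
        return self.dequant(self.body(self.quant(x)))

def qat_int8(model: nn.Module, loader, epochs: int = 2, lr: float = 1e-4) -> nn.Module:
    """Insert fake-quantization, fine-tune for a few epochs, then convert to INT-8."""
    model.train()
    model.qconfig = get_default_qat_qconfig("fbgemm")
    prepared = prepare_qat(model)
    opt = torch.optim.Adam(prepared.parameters(), lr=lr)
    for _ in range(epochs):
        for images, teacher_feats in loader:
            loss = (prepared(images) - teacher_feats).pow(2).mean()  # stand-in loss
            opt.zero_grad()
            loss.backward()
            opt.step()
    prepared.eval()
    return convert(prepared)

# Synthetic stand-in for batches of (student input, target teacher features).
loader = [(torch.randn(4, 3, 64, 64), torch.randn(4, 8, 64, 64)) for _ in range(3)]
student_int8 = qat_int8(TinyStudent(), loader)
```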
Finally, the overall insights from Table <ref> are discussed below: * Referring to the FP-32 column, it is a clear conclusion that an edge device such as NVIDIA Jetson is able to boost the inference speed by more than 5 times than that in CPU. This comparison is helpful in context of budget constraints in deployment of mentioned models. * The Avg. Inference Time and Model Size of PyTorch INT8 model is significantly lesser than that of FP-32 model on CPU with 0.11 points reduction in AUROC. This is due to the reduction in precision and hence efficient matrix computations. * The drop in Mean AUROC for TensorRT INT8 model on Jetson is just 0.02 as compared to FP-32 model, whereas the drop is 0.11 in case of PyTorch INT8. Such a significant difference indicates the efficacy of PTQ methodology followed in TensorRT (discussed in Section <ref>) over that of Pytorch. While comparing the AUROC in Table <ref>, One very important consideration required is, we are not discriminating between devices such as CPU and Jetson as that does not affect the AUROC and only to be considered for inference time. We are also not considering the distinction in frameworks (Torch or TRT) as the same Torch model is converted to TRT using 'torch2trt' library and is the only possible way to deploy in Jetson as other libraries quantization is not supported (Points A and B in important issues of this section). * It's clearly concluded from Table <ref> that QAT (INT-8) performance is very close to FP32 models due to quantizeaware weights and activations resulted from finetuning, having inference time same as PTQ (INT-8) models. § CONCLUSION In this work, we focused on the task of anomaly detection on materials considering the practically important perspectives of a) generalization across object classes, b) using lightweight knowledge-distillation based models, c) further quantizing them with two schemes and analysing their performance aspects such as AUROC, latency, and model-size, and d) their deployment on an edge device. The models that we consider here also differ in their architectural designs, thus providing a variety of operational schemes, one with a patch-based knowledge distillation approach (US), other with an improved version without patching, and a multi-scale strategy (STFPM), and the last one following an encoder-decoder (RD) combined with multi-scale distillation. First, with the experimentation on multi-class training, we establish the invariance of these to the multiclass setting for this dataset where the object appearance is quite distinct, thus obviating the need for the model-per-class paradigm. Secondly, for industrial deployment, we also assess their latency on CPU and an edge device (Nvidia Jetson NX ) and implement different quantization strategies to reduce the model size as well as inference time. Further, for quantization it is shown that an unconventional calibration based on the random data works much better than the standard calibration using training data, which reduces our dependence of training data. For the purpose of deployment on Jetson, we leveraged the TRT library for PTQ across two precisions, showing TRT's effectiveness over Torch for majority of models. Finally, with an intention of further bringing the performance of the quantized model close to the un-quantized FP-32 model, both PTQ and QAT are considered, comparing their performance in CPU using Torch. 
This yields a very encouraging result: the QAT-quantized model, even in the case of 8-bit quantization, performs as well as the original FP-32 model for the two high-performing methods. Thus, overall, we have established that the performance of generalized, quantized models on an edge device can be as good as that of the original models, while their model size and inference time can be made suitable for operational viability in industrial settings.
http://arxiv.org/abs/2407.02447v1
20240702172404
PLeaS -- Merging Models with Permutations and Least Squares
[ "Anshul Nasery", "Jonathan Hayase", "Pang Wei Koh", "Sewoong Oh" ]
cs.LG
[ "cs.LG" ]
§ ABSTRACT The democratization of machine learning systems has made the process of fine-tuning accessible to a large number of practitioners, leading to a wide range of open-source models fine-tuned on specialized tasks and datasets. Recent work has proposed to merge such models to combine their functionalities. However, prior approaches are restricted to models that are fine-tuned from the same base model. Furthermore, the final merged model is typically restricted to be of the same size as the original models. In this work, we propose a new two-step algorithm to merge models—termed PLeaS—which relaxes these constraints. First, leveraging the permutation symmetries inherent in the two models, PLeaS partially matches nodes in each layer by maximizing alignment. Next, PLeaS computes the weights of the merged model as a layer-wise least squares solution to minimize the approximation error between the features of the merged model and the permuted features of the original models. This merges the two models into a single model of a desired size, even when the two original models are fine-tuned from different base models. We also present a variant of our method which can merge models without using data from the fine-tuning domains. We demonstrate our method by merging ResNet models trained with shared and different label spaces, and show that we can perform better than state-of-the-art merging methods by 8 to 15 percentage points for the same target compute while merging models trained on DomainNet and fine-grained classification tasks. § INTRODUCTION With the widespread democratization of machine learning, there has been a rapid increase in the availability of open-source models trained by the community on specific tasks and datasets. Such specialized models exhibit unique strengths and weaknesses. For example, Code Llama <cit.> (fine-tuned from Llama-2) is specialized for coding tasks, while Vicuña 1.3 <cit.> (fine-tuned from Llama-1) is specialized for chat. They have the same architecture but are fine-tuned starting from different pre-trained models: Llama-1 and Llama-2. Such diversity in the combination of pre-training data and fine-tuning tasks will increase as decentralized marketplaces for models become increasingly common, e.g., <cit.>, providing practitioners with more choices. This presents an opportunity to combine such specialized models in order to create a single general-purpose model that can handle multiple tasks. Traditional approaches for combining trained models, such as ensembling <cit.> or domain-specific mixture-of-experts (e.g., <cit.>), take a step towards this goal. However, these methods need to store all the component models at inference time, leading to an increased memory footprint. Practitioners with limited memory capacity cannot use such costly approaches with fixed memory footprints, especially when combining large models, deploying to resource-constrained environments, or for applications demanding a memory-performance trade-off. To this end, recent works <cit.> have proposed new algorithms tackling this problem of model merging. However, their scope is limited to merging models fine-tuned from the same pretrained model.
Further, some recent works <cit.> also need access to the training data used to fine-tune the component models, which limits their applicability in situations where such data is not available due to, for example, privacy or legal reasons <cit.>. In this paper, we address the problem of merging models (sharing the same architecture) trained on different datasets starting from different initializations. This is motivated by prior work (e.g., <cit.>), which we compare with in Section <ref> for merging ResNet models. Since transformer models exhibit more complex symmetries, we leave the task of merging such models for future work. To address the above-mentioned limitations of prior work in this space, we present PLeaS—an algorithm which adaptively merges models for different inference compute budgets, and can work without requiring data that the component models were fine-tuned on. PLeaS (short for Permutations and Least Squares) is a two-stage algorithm which works with models having the same architecture. The first step consists of matching features across the models. We harness the idea of permutation invariance in neural networks to find an appropriate pairing of features. Inspired by the Git Re-Basin <cit.> algorithm, which is designed for merging two models that are trained on the same data, we introduce a matching algorithm that finds permutations between similar features across models, while separating dissimilar features in the final merged model. This is critical when merging models trained on widely different tasks, since it prevents interference between features while still merging overlapping features, improving upon prior work such as ZipIt <cit.>, which merges all the neurons of some layers of the two models. This also gives fine-grained control over the width of each layer of the merged model. It can hence flexibly trade off memory/compute and performance according to the deployment requirements. It has been observed that permutation matching alone suffers from significant performance loss when merging vastly different models, e.g., those trained on disparate data <cit.>. We hypothesize that while permuted features are powerful when ensembled, simply averaging the permuted weights degrades the features of the merged model. This results in the observed decline in performance. Hence, in the second step of PLeaS, we solve a layer-wise least squares problem, so that each layer of the merged model mimics the permuted ensemble of features from the corresponding layer of the original models. This leads to better representations and superior downstream performance. Apart from the target compute budget, PLeaS is hyperparameter-free, making it easy for practitioners to use. A schematic of PLeaS is depicted in <ref>. We empirically demonstrate that PLeaS can outperform prior work in the challenging setting of merging differently initialized models which have been trained on different datasets. We merge ResNet models fine-tuned on different datasets in <ref>, and find that PLeaS improves upon the state-of-the-art by 8 to 15% with the same merged model size. Our empirical results are on subsets of DomainNet and on fine-grained classification tasks. Further, PLeaS, with significantly lower FLOPs, can approach the performance of ensemble methods in some cases (<ref>). The proposed approach can be seamlessly extended to the scenario where data from the fine-tuning domains is unavailable; this variant instead uses data from publicly available datasets (like ImageNet) to merge models.
We demonstrate in <ref> that this variant can perform competitively with PLeaS applied with the actual data from the training domains of the component models. This is highly encouraging, as it demonstrates the applicability of our approach in scenarios where data from the training domains is unavailable due to privacy or commercial reasons. In summary, our contributions are the following: * We generalize Git Re-Basin <cit.> to support partial merging of corresponding layers of two models (<ref>) and propose a strategy that automatically selects the target width of each layer of the merged model under a fixed FLOP budget (<ref>). This gives the practitioner the freedom to choose the size of the final merged model as per the resources available at inference. Investigating this trade-off is one of the goals of this work, e.g., <ref>. * Motivated by the success of ensemble methods, we propose to assign weights to the merged model by solving a least squares problem that attempts to mimic ensemble methods at each layer (<ref>). An ablation study for this step is in <ref>. * On a test-bed of multiple datasets (with both shared and different label spaces), we showcase that PLeaS outperforms recent merging methods by 8-15 percentage points at the same model size (<ref>). Further, PLeaS approaches the ensemble accuracy while using 40% fewer parameters. When no training data is available, our variant remains competitive with PLeaS (<ref>). § RELATED WORKS There has been growing interest recently in merging models with minimal data and compute overhead. Here, we focus on methods which merge models with the same architecture. Merging models fine-tuned from the same initialization. Several methods aim to merge models in the weight space. <cit.> simply add up task vectors, the weight differences of fine-tuned models from the pretrained model, and demonstrate a strong baseline for merging fine-tuned models. Other approaches edit the task vectors based on the magnitude of the weights <cit.> to resolve interference while merging. Some methods aim to find layer-wise <cit.> or parameter-wise <cit.> coefficients for merging different task vectors. However, such methods work with task vectors, assuming that the base pretrained model is shared across the fine-tuned models, and hence cannot be easily extended to settings where models are fine-tuned from different starting points. A different line of work <cit.> proposes layer-wise distillation, aiming to minimize the sum of the ℓ_2 distances between the activations of the merged model and the original models. However, naively applying this to models which are vastly different leads to performance degradation, as we show in <ref>. Further, these methods do not provide a way to control the size of the merged model. Although not designed for this scenario of merging fine-tunes of a common pre-trained model, PLeaS still allows us to achieve significant performance gains when the merged network is slightly larger than the original model (e.g., by 20%), as demonstrated in <ref>.
<cit.> investigate the usage of permutations to merge models trained on different datasets; however, their study is limited to wide ResNet models on MNIST and CIFAR datasets. These permutation symmetries have also been studied in <cit.>. Another recent work, ZipIt! <cit.>, tackles a similar problem of merging models finetuned on different datasets from different initializations. ZipIt! proposes an alternative formulation for this task, allowing for feature matching across and within models, and puts forth a greedy algorithm to optimize for this. ZipIt! also allows for merging up to some layers of the component models. While this can provide a knob for controlling the size-performance trade-off of the merged model, the empirical performance of their proposed scheme can be improved upon, as we show in <ref>. On the other hand, our work describes a merging formulation which is more expressive and allows for partial merges even within layers to minimize feature interference. Finally, <cit.> also propose a method to merge networks layer-wise in a progressive manner, which involves light-weight retraining. However, their method requires domain-labeled data at both training and inference time, while we require unlabeled data only and also propose a method using no data at all. Other merging paradigms. Other model merging approaches include mixture of experts <cit.>, selecting experts using test data <cit.>, and sparse expert ensembles <cit.>. These come with larger compute or memory overheads, both at inference and training time. § PRELIMINARIES Notations. For simplicity, we describe our method for two L-layered MLPs. However, it can be readily extended to convolutional and residual networks, as we demonstrate in our experiments. Let Θ^A = {W^A_1,W^A_2,⋯,W^A_L}, Θ^B={W^B_1,W^B_2,⋯,W^B_L} be the parameters of two MLPs A,B having the same architecture. We omit the layer-wise bias here for simplicity. Let z_i^A, z_i^B denote the input activations to the ith layer of each network respectively, and d_i denote the dimension of z_i^A,z_i^B. We also define Z_i^A, Z_i^B ∈ℝ^d_i × n to be the activations of a batch of n inputs. Note that z_1^A=z_1^B=x, and z_L+1^A = y^A, z_L+1^B = y^B. Finally, let {W_i^M:i ∈{1,2,⋯,L}} be the weights of the merged model. We allow the merged model to have varying widths (which can be different from the widths of the base models), depending on the memory and compute resources available. Background on Git Re-Basin. Our method is inspired by Git Re-Basin <cit.>, which aims to find permutation matrices π = {P_1, P_2, ⋯, P_L} to permute the weights of model B. The merged model is formed by permuting and averaging the weights, i.e., W_i^M = (1/2) (W_i^A + P_i W_i^B P_i-1^T). <cit.> propose a method for computing the permutation matrices by directly optimizing the average similarity between the permuted weights of model B and the original weights of model A. This weight matching greedily finds a solution to the following sum of bilinear assignment problems: argmax_π={P_i}_i=1^L ∑_i=1^L ⟨ W_i^A, P_i W_i^B P_i-1^T ⟩ , where P_0 is defined to be the identity matrix. This has the advantage of not requiring any data to solve the optimization, but an optimal solution is computationally intractable. Instead, when some samples are available to the optimizer, <cit.> propose a computationally efficient alternative called activation matching, which solves the following optimization problem: P_i ∈ argmin_P ∈ S_d_i ||Z_i^A - PZ_i^B||_F^2 . Here, S_d_i refers to the set of permutation matrices of size d_i × d_i.
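Concretely, the per-layer activation-matching problem above is a linear assignment problem. The sketch below is an illustration (not the implementation used in this paper) that solves it with the Hungarian algorithm and shows the corresponding permute-and-average merge; array shapes follow the notation above and variable names are purely illustrative.

```python
# Minimal sketch of per-layer activation matching, assuming Z_a, Z_b are
# (d_i, n) activation matrices collected from models A and B.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_layer(Z_a: np.ndarray, Z_b: np.ndarray) -> np.ndarray:
    """Return a permutation (index array) minimizing ||Z_a - P Z_b||_F^2."""
    # Minimizing the Frobenius distance over permutations is equivalent to
    # maximizing the linear term <Z_a, P Z_b> = sum_i Z_a[i] . Z_b[perm[i]].
    cost = -Z_a @ Z_b.T                       # (d_i, d_i) negative similarity
    rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm
    return cols                               # cols[i] = row of Z_b paired with row i of Z_a

def permute_and_average(W_a, W_b, perm_out, perm_in):
    """Git Re-Basin style merge of one layer: 0.5 * (W_a + P_i W_b P_{i-1}^T)."""
    W_b_perm = W_b[perm_out][:, perm_in]      # permute output rows and input columns
    return 0.5 * (W_a + W_b_perm)
```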
Computing the activations Z's requires samples from the data. However, this optimization can be efficiently solved separately for each layer. § METHOD We call our approach to model merging . We harness permutation symmetries to match features between two models, inspired by Git Re-Basin <cit.>. We extend this method to allow for partial merging of models, where each layer can have a different number of merged neurons. We then compute the weights of the final merged model by solving layer-wise least-squares problems to ensure that the activations of the merged model resemble the permuted activations of the original models. §.§ Extending Git Re-Basin to partial merging Note that in Git Re-Basin, two models are averaged (after permuting one model) and hence the dimension of the merged model is the same as that of the base models. However, when the networks A,B are trained on different datasets, not all features might be compatible across models, and they might interfere with each other if merged, leading to degraded performance. Further, those incompatible features might need to be retained in the merged model in order to make accurate predictions on both tasks. Merging all nodes in every layer discounts this possibility, leading to performance degradation, as we show in <ref>. To this end, we aim to merge features which are similar across the two models, while keeping those which are very different as separate features in the merged model. We hence propose a framework for partially merging model features by leveraging permutations. Given a permutation matrix P_i, we select k_i indices from [d_i] for which the distance between the features of model A and the permuted features of model B for layer i is the smallest. These k_i features are merged, while other features are retained separately in the final model. [Figure: Partial merging with permutations. We show the construction of the 7 × 6 weight matrix W_i^M from two weights of size 5 × 4. The merged inputs are copied and unpermuted to approximate the original inputs. Then we apply both weight matrices separately. Finally, we pair up the merged outputs and average the pairs. Since all operations used are linear, we can fuse them to construct W_i^M using a single linear layer.] In particular, we find a subset J_i satisfying J_i ∈ argmin_{J : J⊆[d_i], |J|=k_i} ||Z_J,i^A - (P_iZ_:,i^B)_J ||_F^2. This is simple to implement: retain the indices with the smallest k_i distances between the (permuted) activations. For weight matching, we can retain the indices with the largest similarity between the (permuted) weights for each layer. The size of W_i^M is then increased to (2d_i-k_i) × (2d_{i+1}-k_{i+1}) in exchange for improved performance. This partial merging scheme is illustrated in <ref>. Investigating this trade-off between the size and the performance of the merged model is one focus of this paper. Note that the ratio k_i/d_i can be chosen to be different for each layer. In <ref>, we introduce a scheme to find such a configuration of these ratios for a given target compute/memory budget B; this optimizes a proxy of the downstream performance without using any validation data from the target domain, and is used in all our experiments. The permutation matrices are computed using the weight matching strategy from Git Re-Basin. In <ref>, we compare this with using the activation matching strategy, which we call . §.§ Permuted least squares Suppose, for example, that the target merged model has the same architecture and size as each of the base models.
Once the permutations, P_i's, have been computed, we propose optimizing the weight matrices of the merged model by solving the following least-squares problem: W_i^M ∈ argmin_W ∈ℝ^d_i × d_i+1 ||(Z_i^A+P_iZ_i^B) W - (Z_i+1^A+P_i+1Z_i+1^B)||_F^2 , independently for each layer i∈[L]. This is motivated by the impressive performance of the ensemble method (e.g., <cit.> and <ref>), which retains two separate models and only averages the (permuted) activations at the last layer (before softmax): z̃_L+1 = z^A_L+1+z^B_L+1. We aim to have our merged model approximate such activations. We inductively assume that the first i-1 layers are properly merged. Hence, the ensemble of the permuted features (of the i^ th layer) of the component models can be well approximated by the activations at the input of the i^ th layer of the merged model. We denote the ensembled features by Z̃_i=(Z_i^A+P_iZ_i^B) ∈ℝ^d_i× n. The goal of the above optimization is to match the ensembled activation of the next layer, Z̃_i+1=(Z_i+1^A+P_i+1Z_i+1^B), with a linear transform of the input ensemble: Z̃_i W. We empirically validate this choice of using a permuted ensemble of features to optimize the weights of the merged model in <ref> in <ref>, where we compare with alternative choices to Eq. (<ref>). This second step of is similar to feature distillation. However, the key novelty arises from averaging the permuted features for transferring knowledge from multiple models. This is critical for accurate prediction. In <ref>, we compare against RegMean <cit.>, which optimizes an objective similar to <ref> without the permutations and averaging. This method merges models by minimizing ||Z_i^A W - Z_i+1^A||_F^2 + ||Z_i^B W - Z_i+1^B||_F^2. As we show in <ref>, RegMean performs poorly compared to . Apart from the inference computation budget for the final model, is completely hyperparameter free. Note that the second step of is fully compatible with the partial merging of <ref> as well: we can directly set the values of W_i^M corresponding to the unmerged features to be the respective values of W_i^A and W_i^B. While the objective in <ref> can be minimized in closed form using Ordinary Least Squares (OLS), we practically implement it using gradient descent for ease of use with convolution layers. Given that the objective is convex when computed layer-wise, the weight matrices W_i^M converge in relatively few steps (less than 100) of gradient descent. Further, we solve this optimization independently for each layer, which can be efficiently parallelized. §.§ Data requirements of has two steps. The first step finds permutations to match features using weight or activation matching. The second step computes weight matrices to mimic the ensemble of the merged features more closely. In order to compute these features, one could use the data from the training domains; however, this may not be feasible for privacy or commercial reasons. Hence, we propose an alternative scheme—dubbed —which uses a general vision dataset, like ImageNet, to compute the activations of the component models. These activations are then used to merge domain-specific models without requiring any data from their training domains. In <ref>, we demonstrate that suffers minimal degradation compared to , suggesting a wider applicability. § EXPERIMENTS §.§ Settings We show the effectiveness of our method in merging ResNet-50 models fine-tuned on different datasets starting with different initializations.
We consider the following experimental settings: * <ref> and <ref> deal with the problem of merging models fine-tuned on different datasets with shared or different label spaces respectively. * <ref> considers the setting where we do not have access to data from the training domains. * <ref> deals with merging models which were fine-tuned from the same base model. * Finally, <ref> considers a setting where we want to merge differently initialized models trained on the same dataset, and <ref> provides an ablation study over the choice of loss functions. Unless specified otherwise, we use the weight matching strategy from <cit.> to compute the permutations for . For each task, we also report results for Permutations, which is the model obtained by weight averaging the component models after applying the permutations obtained from the first step of . Following the recommendation from REPAIR <cit.>, we recompute the batch-norm parameters of the model after merging for all methods. We run each merging experiment for three different seeds, and across two different initial models. We find that inter-run deviation in performance is low, with the standard deviation usually being less than 1%. We report disaggregated results along with these standard deviations in <ref>. Baselines We compare our method against prior works including Git Re-Basin<cit.>, Simple Averaging <cit.>, RegMean <cit.> and ZipIt! <cit.>. We also consider two practical upper bounds – training a router model based Mixture of Experts model (MoE), and ensembling the predictions (or activations) of the original models. The former requires storing both models and running one of them based on the router, hence having 1.1× FLOPs and 2× memory requirements, while the latter requires running both the models in parallel, and hence has 2× FLOPs and memory requirements. We find that the performance of the ensemble and MoE models is close to the best performance of a single model on the dataset that it was trained on. §.§ Models fine-tuned with a shared label space We fine-tune ImageNet pre-trained ResNet-50 models on four different domains of the DomainNet<cit.> dataset: Clipart, Infograph, Painting and Real. We merge models trained (from different initializations) on different domains in a pair-wise fashion, and compute the accuracy of the merged model on both the domains. For each domain, we average the performance across all domain pairs. We report this in <ref> for merged models at 1× size of the original model for different methods. We also compute the average performance of each algorithm across all datasets at different size/FLOPs budgets, and plot this out in <ref>. We find that consistently outperforms ZipIt! at various FLOPs budgets. The gains are particularly striking for lower FLOP budgets, where outperforms ZipIt! by up to 10%. The power of partial merging is also observed from these results, as one can see that increasing the flops by just 20% leads to massive improvements in the accuracies. Finally, one can see the importance of our layer-wise least squares, since it improves the performance of over Permutation by over 20% on average at 1× model size. The gains decrease as we add more capacity to the merged model, which is expected. §.§ Models with different label spaces We fine-tune models on CUB<cit.>, NABirds<cit.>, Oxford-IIIT Pets<cit.> and Stanford Dogs<cit.> datasets, and merge them upto the penultimate layer. 
Since the label spaces are different, we aim to evaluate the representations of the penultimate layer of these merged models by training a linear probe on top of the representations. We average the results in the same manner as for DomainNet, and report the performance of different methods for 1× model sizes in Tab <ref>. We also depict the compute-performance trade-off in Fig <ref>. In App <ref>, we follow the setting of <cit.>, and use task specific heads from the original models to compute the accuracy of the merged model, which requires knowing the domain of the test data point. Similar to the results on DomainNet, we observe that has non-trivial gains over ZipIt! and Git Re-Basin, outperforming ZipIt! by 7% at 1× model size. These results also provide evidence of the effectiveness of our partial permutation scheme — Permutations can outperform ZipIt! at intermediate model budgets by up to 3%. A reason for this could be that the features of the models being merged are sufficiently different, leading to performance degradation if all of them are forced to be merged (a la ZipIt!). Our scheme retains some of these features in the intermediate layers, which could explain the better performance. §.§ Does need data from the training domains? To investigate the data requirements of our method, we compare the performance of and when merging models trained on different datasets. We also compare the effect of using activation matching to find the permutations for , and we term this variant as . The performance-model size tradeoff is reported in <ref> for the two settings of shared and different label spaces. We find that retains a similar performance when using ImageNet instead of the actual domain data for merging models on DomainNet, achieving a drop of less than 1% in accuracy at 1× model size. There is almost no drop at higher sizes of the merged model. We also find that slightly outperforms , with the difference being around 1% at 1× model size. Notably, even on the more difficult task of merging models with different label spaces, using ImageNet data for computing activations can perform competitively to using the actual data: linear probing on the representations from performs within 2% of the at 1.2× model size, and the gap is less than 4% at 1× model size. This result is particularly encouraging, since it extends the practical applicability of . Note that while we use data from the actual domains for linear probing, i.e. to assess the quality of the representations, we do not use it for actually merging the models. We also find that performs similarly to , with the latter performing better in the shared label space setting. We hypothesise that this is because we use only few batches of data to compute the activations for matching the two network, which leads to deteriorated permutations. However, the difference in performance decreases with increasing model size. §.§ Merging models with the same initialization In Tab <ref>, we evaluate the performance of our method for merging models fine-tuned from the same starting model. We compare against simple average (Task Vectors), ZipIt, and RegMean and find that the performance is similar across methods, with being slightly better than the baselines. In fact, task vectors is an effective baseline here. However, we note that 20% extra parameters in the merged model can lead to closing the gap between the ensemble and the merged model produced by , demonstrating the need for flexible merging methods. 
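For reference, the task-vector baseline discussed above admits a very short implementation. The sketch below is a generic illustration under the assumption that all fine-tuned models share a common pretrained checkpoint; the scaling coefficient lam is an assumed parameter and is not a value taken from this paper.

```python
# Hedged sketch of the task-vector ("task arithmetic") baseline for models
# fine-tuned from a shared pretrained checkpoint. Inputs are state_dicts
# with identical keys; `lam` is an assumed scaling coefficient.
import torch

def task_vector_merge(pretrained, finetuned_models, lam=0.5):
    merged = {}
    for name, w0 in pretrained.items():
        if not torch.is_floating_point(w0):
            merged[name] = w0.clone()        # e.g. integer buffers are copied as-is
            continue
        delta = sum(ft[name] - w0 for ft in finetuned_models)
        merged[name] = w0 + lam * delta      # add the summed task vectors
    return merged
```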
§ LIMITATIONS AND FUTURE WORK The scope of this study is limited to merging models with the same architecture, and applying to merge different architectures could be an interesting future direction. Since is a two-stage algorithm, its running time is greater than that of some existing works <cit.>. However, since the second step can be computed in parallel for all layers, the runtime overhead is small. We further discuss this aspect in <ref>. Finally, our study of model merging is limited to architectures with convolutions and residual connections, in line with prior work (e.g. <cit.>). Extending this framework to other architectures such as transformers is another exciting future direction. § CONCLUSION In this work, we present , an algorithm to merge models trained on different datasets starting from different initializations. We demonstrate that can effectively produce merged models at different points on the compute-performance trade-off curve. We also propose , a variant which can merge models without needing any data from the training domains of the component models, and empirically validate that its performance is comparable to running with data, which widens its applicability to data-scarce regimes. § ACKNOWLEDGEMENT This work is supported by the Microsoft Grant for Customer Experience Innovation and the National Science Foundation under grant no. 2019844, 2112471, and 2229876. JH is supported by the NSF Graduate Research Fellowship Program. § EXPERIMENTAL AND IMPLEMENTATION DETAILS In this section, we provide more details about our experiments. We conduct all experiments using PyTorch <cit.>. We use two ImageNet-pretrained base models for fine-tuning. One of these is the default from PyTorch, while we pre-train the other starting from random initialization following the same pipeline. For fine-tuning the models on each domain, we use the Adam <cit.> optimizer, and sweep the learning rates logarithmically between [1e-4,1e-1], testing out 4 values for LR. We validate on the validation subset wherever available, and on 10% of the training dataset where an explicit val set is not provided. We use standard image augmentation techniques. Our MoE model has a light-weight router, which is a 3-layer CNN trained to predict which model to use for classifying an image. For finding permutation symmetries, we use the official implementation of Git Re-Basin at https://github.com/samuela/git-re-basin. We also rely on the implementation of ZipIt! for the comparisons in <ref>. For solving the least squares objective for , we use SGD with a batch size of 32 and a learning rate of 10^-3. We sample equally from both datasets in each batch for experiments involving data. We run our algorithm for 100 steps, and find that it converges quickly. For , we similarly compute the activations on 100 batches of data for matching and finding the optimal permutations. We also reset batch norm parameters using 100 batches of data from the actual domains for all methods. For evaluations concerning the same label space setting, we ensure that the final model produces a distribution over the output classes. For ZipIt!, we achieve this by ensembling the predictions across multiple task-specific heads. on the other hand already produces models with the same output dimensions as the original models. For evaluations on different label spaces, we train a linear probe on the final layer representations for each merged model.
We use training data from the target domains to train this linear probe, run Adam with a learning rate of 10^-3, with a batch size of 64 for 50 epochs. §.§ Compute time and cost All our experiments (apart from the pretraining and fine-tuning runs to get the original models) are run on a single RTX 2080 Ti GPU. The first step of our method runs in 2 minutes, with the majority of time devoted to computing the activations. This is commensurate with ZipIt! <cit.> and Git Re-Basin <cit.> The second step takes around 4 minutes, which is similar to RegMean <cit.>. We believe that this can be significantly reduced with better dataloading strategies and more efficient implementation, but that is beyond the scope of this paper. §.§ Computing the layerwise merging ratio Note that k_i can be different for each layer. Given a configuration K={k_i/d_i:i ∈ [L]}, we can model the FLOPs/memory of the merged model as a quadratic function of k_i, which we denote as Footprint(K). For a given relative memory/FLOPs budget B, we want to find K s.t. Footprint(K) ≤ B to maximize the accuracy of a model merged with the configuration K. We scale everything so that B=1 corresponds to the footprint of a single model. This problem is NP-Hard. We propose a relaxation of the problem in order to get an approximate solution. First, we measure the performance of a set of models merged with “leave one out" configurations of K, where for each layer i, we construct K_i^0 = {k_j : k_j = d_i if j=i, 0 otherwise} and K_i^1 = {k_j : k_j = 0 if j=i, d_i otherwise}. K_i^0 corresponds to merging only layer i, keeping all other layers unmerged, and K_i^1 corresponds to merging every other layer while keeping i unmerged. We also compute the accuracies of the fully merged model (denoted by K^0) and the ensemble (denoted by K^1). Then, we approximate the accuracy of any given K with a linear function as Acc(K) = ∑_i=1^L k_i/d_i ((2-B)(Acc(K^1)-Acc(K_i^0)) - (1-B)(Acc(K_i^0) - Acc(K^0))) This approximates the effect of k_i on model performance at budget B by linearly interpolating between the performance with fully merging layer i and keeping it separate. We then propose to solve a quadratically constrained linear program to maximize Acc(K) subject to Footprint(K) ≤ B. This program is non-convex however Gurobi <cit.> is able to solve the program to global optimality in a few seconds. To faithfully compute the performance of the merged model, one would require validation samples from the target domain. However, we empirically observe that using the accuracy of a configuration K on ImageNet is a good proxy for its performance on other merging tasks as well, and we hence use it to compute the layer-wise merging ratio for all our experiments. § ADDITIONAL RESULTS §.§ What to optimize for Least Squares? In <ref>, we propose to solve a least squares problem involving the permuted average activations from each layer of the component models. In <ref>, we demonstrate that this choice is not only natural, but also performs better than other alternatives. It is also interesting to note that the second row in the table corresponds to a permuted version of RegMean<cit.>. This formulation performs better than RegMean, indicating that using permutations is necessary to align features for networks which were differently initialized. Further, row 3 is similar to the objective proposed by <cit.>, but we show that outperforms this objective as well. 
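As an illustration of the layer-wise objective ablated above, the following sketch solves the permuted least-squares problem for a single fully connected layer in closed form. The ensembled, permutation-aligned activations are assumed to be precomputed, and the closed-form solve is shown only for clarity; the appendix above describes an equivalent short gradient-descent loop, which extends more easily to convolutional layers.

```python
# Sketch of the second (least-squares) stage of the merging method for one
# linear layer. For convenience the activations are stored with samples as
# rows, i.e. (n, d_i) and (n, d_{i+1}), the transpose of the paper's notation.
import torch

def merge_linear_layer(Z_in_a, Z_in_b, Z_out_a, Z_out_b):
    Z_in = Z_in_a + Z_in_b        # permuted ensemble of layer inputs
    Z_out = Z_out_a + Z_out_b     # permuted ensemble of layer outputs
    # Solve min_W || Z_in @ W - Z_out ||_F^2 independently of all other layers.
    W = torch.linalg.lstsq(Z_in, Z_out).solution   # shape (d_i, d_{i+1})
    return W
```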
§.§ Merging models trained on CIFAR and CINIC In this section, we present results on merging ResNet-18 models trained on CIFAR-10 and CINIC-Im datasets. CINIC-Im is a subset of the CINIC-10 <cit.> dataset which does not contain any images from CIFAR-10. We follow the same experimental protocol outlined in <ref> to train two independent models on these datasets from scratch. We then merge the models and report the results for merging models with 1× size of ResNet-18 in <ref>. We make a few surprising observations in this case. We find that randomly permuting the weights of one model and averaging these weights achieves a non-trivial performance, and this is improved by using Git Re-Basin to find the optimal permutation. Using further improves this performance. §.§ Merging ResNet-18 In <ref>, we plot the performance of against baselines for merging pairs of ResNet-18 under the same settings as <ref>. We find that outperforms the baselines at lower FLOPs, and the difference reduces for larger FLOP budgets. §.§ Reducing the accuracy barrier on ImageNet In this section, we show the performance of while merging ResNet-50 models trained independently on ImageNet. The accuracy of a single model on this task is 77.5%. As seen from Fig <ref>, current methods including ZipIt!<cit.> and Git Re-Basin<cit.> struggle on merging models for this task, with the accuracy of the merged model being significantly lower than the accuracy of a single model. This has been referred to as the accuracy barrier on ImageNet in prior work. makes some progress towards lowering this barrier, and improves over Git Re-Basin by over 9% at 1.0× FLOPs budget. For context, this accuracy is at par with that obtained by merging WideResNet-50 models with a width multiplier of 2 using Git Re-Basin. More promisingly, the flexibility afforded by partially permuting and merging models gives another avenue to lower the accuracy barrier, with a model of size 1.4× having an accuracy barrier of 2% with . However, further work is needed to reduce this accuracy barrier. In <ref>, we compare using synthetic data from <cit.> for all purposes of activation computation while merging ImageNet trained models. We find that using with synthetic data can come close to using actual data, being within 1% in terms of accuracy at 1.2× model size. §.§ Detailed Results Each of our evaluation was run across three random restarts. These random restarts shuffle the data used for computing activations and merging the models. They also affect the initialization of the merged model. Each pair evaluation was also run twice, swapping the order of pre-trained models used for either of the datasets of the pair. We hence have 6 runs for each dataset pair. In <ref>, we provide the results for each dataset pair, reporting the average and standard deviation across the 6 runs. §.§ Using task specific heads In <ref>, we report the results computed using the protocol mentioned in <cit.>. We find that outperforms ZipIt! in this evaluation across model budgets. § BROADER IMPACT Advances in model merging, especially through methods which do not require training data, can help further democratize machine learning by helping practitioners improve the capabilities of open source models. However, the risk of merged models inheriting biases of the component models still remains.
http://arxiv.org/abs/2407.03104v1
20240703134144
KeyVideoLLM: Towards Large-scale Video Keyframe Selection
[ "Hao Liang", "Jiapeng Li", "Tianyi Bai", "Chong Chen", "Conghui He", "Bin Cui", "Wentao Zhang" ]
cs.CV
[ "cs.CV", "cs.CL", "cs.MM" ]
     ^Peking University     ^The Open University of China     ^♢Huawei Cloud BU     ^*Shanghai AI Laboratory ^hao.liang@stu.pku.edu.cn, ^jasper_li@alumni.pku.edu.cn, ^{bin.cui, wentao.zhang}@pku.edu.cn, § ABSTRACT Recently, with the rise of web videos, managing and understanding large-scale video datasets has become increasingly important. Video Large Language Models (VideoLLMs) have emerged in recent years due to their strong video understanding capabilities. However, training and inference processes for VideoLLMs demand vast amounts of data, presenting significant challenges to data management, particularly regarding efficiency, robustness, and effectiveness. In this work, we present KeyVideoLLM, a text-video frame similarity-based keyframe selection method designed to manage VideoLLM data efficiently, robustly, and effectively. Specifically, KeyVideoLLM achieves a remarkable data compression rate of up to 60.9 times, substantially lowering disk space requirements, which proves its high efficiency. Additionally, it maintains a 100% selection success rate across all video formats and scales, enhances processing speed by up to 200 times compared to existing keyframe selection methods, and does not require hyperparameter tuning. Beyond its outstanding efficiency and robustness, KeyVideoLLM further improves model performance in video question-answering tasks during both training and inference stages. Notably, it consistently achieved the state-of-the-art (SoTA) experimental results on diverse datasets. KeyVideoLLM: Towards Large-scale Video Keyframe Selection Hao Liang^†, Jiapeng Li^†, Tianyi Bai^*, Chong Chen^♢, Conghui He^*, Bin Cui^, Wentao Zhang^ July 8, 2024 ================================================================================================ [† The first two authors have equal contributions. ] footnote-1 § INTRODUCTION In recent years, with the rapid advancements in large language models (LLMs) <cit.> and multimodal large language models (MLLMs) <cit.>, data management has become a crucial aspect of these technologies <cit.>. At the same time, <cit.> also demonstrates that data processing, selection, and management can significantly influence the performance of MLLMs. Among MLLMs, VideoLLMs achieve competitive performance in traditional multimodal tasks such as visual recognition <cit.>, video understanding <cit.>, and action recognition <cit.>. Moreover, their excellent language understanding capabilities enable strong performance in text-rich tasks, such as video question-answering <cit.> and video-centric dialogues <cit.>. Most existing VideoLLMs focus on modifying model architecture to utilize information from multiple modalities <cit.>. While model effectiveness is crucial, data also significantly impacts the success of VideoLLMs. For instance, <cit.> demonstrate that higher-quality training data can enhance the performance of VideoLLMs. Additionally, <cit.> indicates that LLMs can disrupt data management due to their massive data requirements. However, current video data selection methods primarily emphasize video quality, captions, and video-caption alignment, often resulting in redundant datasets. These methods neglect the importance of efficient and robust data management and face the following three key challenges: C1. Low Efficiency. Due to the large storage requirements of video data, massive training datasets often occupy substantial storage space, ranging from several hundred gigabytes to tens of terabytes <cit.>. 
Additionally, the common practice of using random or uniform frame selection during training leads to considerable data waste. This inefficiency not only increases storage needs but also hinders the model's ability to learn from the most relevant and informative content within the videos. C2. Low Robustness. Existing keyframe selection methods are sensitive to hyperparameters. For instance, Katna <cit.> and DSNet <cit.> are two previous SoTA methods that require extensive hyperparameter tuning. Moreover, the experimental results in Table <ref> demonstrate their very low success rates on short videos. Additionally, Table <ref> reveals that their keyframe selection speeds are relatively slow. C3. Poor Effectiveness. Typically, VideoLLMs employ uniform or random frame selection methods during the training stage and uniform frame selection methods during the inference stage <cit.>. These uniform or random selection methods do not consider the relevance of frames to the questions and answers. As illustrated on the left of Figure <ref>, the uniform selection method fails to select frames relevant to the question, resulting in incorrect answers. To address these issues, we propose KeyVideoLLM. KeyVideoLLM leverages the power of deep learning models to perform precise keyframe selection, ensuring that the selected frames are highly relevant to the given query and response based on text-video frames similarity scores. Specifically, KeyVideoLLM performs precise keyframe selection which is extremely efficient in both data usage and disk storage. Additionally, it leverages the strong parallel computing capabilities of GPUs and employs a coarse-to-fine keyframe selection process, resulting in very fast selection speeds and high success rates with almost no hyperparameters required. We then use KeyVideoLLM for VideoLLMs training and inference to improve the model's effectiveness. In the training phase, we use KeyVideoLLM based on answer and question-answer similarities to select keyframes more relevant to the answer or the question-answer pair. As shown in Figure <ref>, selecting more relevant frames helps improve model performance, resulting in correct answers. In the inference phase, we employ KeyVideoLLM based on the question to select frames related to the question. As shown in Figure <ref>, more relevant keyframes result in more effective VideoLLMs. The core contributions of this paper are summarized as follows: * New Perspective. Low efficiency and low robustness are significant impediments to the practical adoption of keyframe selection methods. To the best of our knowledge, this study represents the first attempt to address these challenges from a data management perspective. * New Method. We propose KeyVideoLLM, the first text-video frame similarity-based keyframe selection method. Based on the proposed text-video frames similarity scores, KeyVideoLLM can manage VideoLLM data efficiently, robustly, and effectively. * SoTA Performance. (1)High Efficiency. KeyVideoLLM is highly efficient, achieving a data compression rate of up to 60.9 times, significantly reducing disk usage. As shown on the right side of Figure <ref>, it effectively selects frames relevant to the question, mitigating the waste of video data. (2)High Robustness. KeyVideoLLM can achieve selection speeds up to 200 times faster per video. It also achieves the highest keyframe selection success rate compared to previous keyframe selection methods. 
Unlike existing methods, KeyVideoLLM does not require additional hyperparameter tuning, demonstrating its robustness. (3)Effectiveness in Training and Inference Stage. Our answer and question-answer-based KeyframeLLM improve the performance of VideoLLMs during the training stage compared to uniform frame selection, such as Katna <cit.> and DSNet <cit.>. Besides, our question-based selection method further enhances the performance of VideoLLMs during the inference stage compared to uniform selection, achieving SoTA performance. § RELATED WORK Video Multimodal Models. Recently, inspired by the remarkable understanding capabilities of LLMs and pre-trained models, researchers have started using LLMs to understand videos, achieving SoTA results <cit.>. VideoLLaMA <cit.> is one of the pioneering studies in VideoLLMs, utilizing a visual encoder and a video Q-Former projector to understand videos. However, due to its Q-Former structure, the computational cost is very high. To address this, subsequent works <cit.> adopted the LLaVA <cit.> MLP structure, significantly reducing computational costs while still achieving SoTA performance. Similarly, MiniGPT4Video <cit.> uses an MLP adapter for efficient training. Another notable series of models includes VideoChat, VideoChat2, InternVideo, and InternVideo2 <cit.>. These models utilize an enormous amount of data to train a transformer-structured adapter, achieving SoTA performance. By leveraging large-scale datasets and advanced transformer architectures, these models excel in comprehending and processing multimodal video content, further pushing the boundaries of video understanding capabilities. Keyframe Selection for Video Multimodal Models. VideoLLMs often integrate frame encoding techniques to mitigate resource overhead and streamline training durations. Most VideoLLMs <cit.> employ a uniform sampling methodology to select a fixed number of frames. This approach is also used during the testing phase of InternVideo2 <cit.> and VideoChat2 <cit.>. However, during the training phase, these models opt for random frame selection within each time interval. Some models <cit.> leverage pre-existing compressed video methodologies, such as those facilitated by ffmpeg, to select frames for training purposes. Katna <cit.>, a frame selection method incorporating machine learning techniques, is employed by VideoChatGPT for frame selection. Additionally, certain architectures <cit.> incorporate supplementary modules aimed at reducing the token count encoded per keyframe, thereby improving computational efficiency and avoiding input token constraints. Video-LaVIT <cit.> employs a fusion of keyframes and motion vectors to tokenize video data. These diverse strategies for keyframe management not only impact the computational dynamics of model training and inference but also significantly influence the resultant quality metrics of video-centric LLMs. Data-Centric LLMs and Data Selection Methods The advent of LLMs has led to a substantial increase in the volume of training data <cit.>. VideoLLMs face even higher storage and computational costs due to the vast amount of data and substantial storage space required for video content <cit.>. This increase in data volume also brings new challenges in data management and selection <cit.>. LLM-based methods are commonly used in data selection <cit.>. For instance, <cit.> leverage DeBERTa <cit.> for scoring, retaining high-quality data, and combining it with the k-center greedy algorithm to select diverse data. 
<cit.> score the accuracy of data using ChatGPT to identify high-quality data. <cit.> use GPT-4 to rewrite data to increase its complexity and then streamline it by reducing its variety and improving its quality. <cit.> train two models using ChatGPT-labeled data to score the quality and complexity of the data. <cit.> rely on ChatGPT to tag each instance, defining its complexity and diversity based on these tags. <cit.> first cluster the data, and then use GPT-4 to select high-quality data for each cluster. § METHOD §.§ Keyframe Selection To the best of our knowledge, this study is the first to select video frames using text-video frames matching for training VideoLLMs. We categorize frame selection methods into three distinct categories, as illustrated in Fig.<ref>. Here, we first summarize Cluster and Video Summarization-based methods. §.§.§ Cluster These methods select the best images from each cluster by first preparing the clusters based on histograms. Katna <cit.> is one of the representatives of these methods. It calculates the histograms for each image and adds them to the histogram list. Then, Katna uses K-means clustering on the histograms to identify the label for each image in the cluster and tag images. The K-means method assigns each frame to the cluster where the nearest center point is located, then updates the center point by recalculating the center point of each cluster. Katna repeats these steps until the cluster center converges or reaches the maximum number of iterations. Afterward, Katna selects the best images from every cluster by choosing the image with the lowest blur (high Laplacian) score. However, the effectiveness of such algorithms largely depends on feature selection and parameter settings. Different settings of the hyperparameters have a relatively large impact on the effectiveness of frame selection. §.§.§ Video Summarization Video summarization technologies aim to create a concise and complete synopsis by selecting the most informative parts of the video content <cit.>. Existing video summarization methods suffer from dynamic visual context and overfitting problems, which can easily lead to incorrect and incomplete video summaries. DSNet <cit.> is one of the representatives of these methods. It consists of feature selection, interest proposal generation, and key shot selection steps. For the feature selection stage, the model selects frame-level features and applies a temporal modeling layer to capture long-range representations. Then, DSNet applies a shared classification and regression module to predict the importance score, center-ness score, and segment boundaries at each temporal location. For testing, segments are refined using the predicted locations and further filtered with non-maximum suppression. Finally, the video summary is generated using a dynamic programming algorithm. §.§ Text-Video Frame Similarity Based Keyframe Selection We propose a frame selection method based on text-video frames matching. The method follows a coarse-to-fine framework, as shown in Fig. <ref>. Given a (video, text) pair, we aim to select frames related to the text content from the video as keyframes. To match semantically similar texts and images, we require a multi-modal embedding space that maximizes the cosine similarity between the keyframe and text embeddings. Inspired by <cit.>, we use a pre-trained CLIP <cit.> model as a backbone. CLIP <cit.> is a model developed by OpenAI that aligns images with textual descriptions in a shared embedding space. 
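As background for how such a text-image similarity can be computed in practice, the following sketch uses the Hugging Face implementation of CLIP. It is illustrative only and is not taken from the KeyVideoLLM codebase; the checkpoint identifier is the standard Hugging Face name for a ViT-B/32 CLIP model, and frames is assumed to be a list of PIL images.

```python
# Illustrative sketch: score video frames against a text query with CLIP.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def text_frame_similarity(text, frames):
    inputs = processor(text=[text], images=frames, return_tensors="pt", padding=True)
    txt = model.get_text_features(input_ids=inputs["input_ids"],
                                  attention_mask=inputs["attention_mask"])
    img = model.get_image_features(pixel_values=inputs["pixel_values"])
    txt = txt / txt.norm(dim=-1, keepdim=True)   # cosine similarity equals the dot
    img = img / img.norm(dim=-1, keepdim=True)   # product of L2-normalized embeddings
    return (img @ txt.T).squeeze(-1)             # one score per frame
```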
We use CLIP as the image and text encoder due to its robust ability to learn and represent both visual and textual data in a shared embedding space. The text encoder must first select appropriate text information to describe a video. In the training stage, we compared two methods for selecting text information. The first method uses only the answers in the conversation, denoted as CLIP-A. The second method uses both the questions and the corresponding answers, denoted as CLIP-QA. In the inference stage, only questions are provided, and we use them to select relevant frames, denoted as CLIP-Q. For CLIP-A, CLIP-QA, and CLIP-Q, we use the pre-trained CLIP model's text encoder to map the caption into an embedding space. §.§.§ Coarse Level Keyframe Selection In the coarse frame selection stage, to avoid selecting frames with too small a frame spacing and to ensure sample diversity, a uniform sampling method is used to select a number of coarse frames (cn). Specifically, cn is set to 32 frames in this work. §.§.§ Fine Level Keyframe Selection In the fine frame selection stage, we first consider the coarse-level selected video frames as a set ℱ. We take the set ℱ as input and feed it to the image encoder, and each frame in ℱ gets a corresponding visual vector 𝐯. After that, we compute the similarity of these visual vectors 𝐯 with the word embedding to get the similarity score. The similarity score is calculated as follows: score(𝐯_i, 𝐰) = (𝐯_i ·𝐰)/(||𝐯_i|| ||𝐰||), (1) where 𝐯_i is the i-th visual embedding and 𝐰 is the word embedding. Next, we sort these similarity scores and select the top k with the highest scores. After that, we identify the corresponding video frames, select these video frames, and form a set of keyframes at the fine level. Finally, we recombine the collection of frames into a video in the original video's temporal order, so that for each dialog text, there is a unique counterpart consisting of keyframes. The advantages of our approach are: * Compared to clustering methods, our method does not require additional parameter settings. Different settings of the hyperparameters have a relatively large impact on the effect of frame selection, which suggests that our approach is more robust. * Compared to deep learning-based video summarization methods, our method does not require costly video pre-training, leading to higher efficiency. * Compared to the other frame selection methods, our method selects keyframes that are more relevant to the content of the question-answer. Therefore, when the large model uses the keyframes and question-answer pairs selected by our method in the training phase, it receives more accurate supervised information, which improves its understanding of the video. §.§ CLIP-based Keyframe Selection for VideoLLM Training The CLIP-based Keyframe Selection method for training is illustrated in Fig. <ref>. VideoLLMs leverage encoders like LanguageBind <cit.> and CLIP <cit.> to extract both spatial and temporal video features. This is accomplished by averaging frame-level features across the temporal and spatial dimensions. Then the features are projected and concatenated with word embeddings for LLMs to understand. The entire training framework is divided into two stages: pre-training and instruction tuning. In the pre-training stage, we use video-text pairs to align vision and text. Similar to most other methods <cit.>, we freeze the parameters of the large language model (LLM) and the visual encoder, training only the projector.
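A minimal sketch of this frozen-backbone setup is given below; module names, the learning rate, and the optimizer choice are assumptions for illustration and do not correspond to the actual Video-LLaVA code.

```python
# Sketch of projector-only pre-training: the visual encoder and the LLM are
# frozen, and only the projector receives gradient updates.
import torch

def configure_pretraining(video_encoder, projector, llm, lr=1e-3):
    for p in video_encoder.parameters():
        p.requires_grad_(False)      # keep visual features fixed
    for p in llm.parameters():
        p.requires_grad_(False)      # keep language ability intact
    for p in projector.parameters():
        p.requires_grad_(True)       # only the projector is trained
    return torch.optim.AdamW(projector.parameters(), lr=lr)
```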
This approach allows the projector to endow the LLM with video understanding capabilities without compromising its language abilities. In the supervised instruction tuning stage, the model is tuned using video question-answering datasets. Previous VideoLLMs commonly use random or uniform frame selection <cit.>. In contrast, we introduce a keyframe extraction module. As mentioned in Section <ref>, we use CLIP-A and CLIP-QA to select more relevant frames. The pre-trained model is further fine-tuned using keyframes selected by CLIP-A and CLIP-QA to create high-quality text-video frame pairs. §.§ CLIP-based Keyframe Selection for VideoLLM Inference During the testing (inference) phase, existing benchmarks typically provide questions about the video, which are then answered by VideoLLMs. Due to the presence of a large number of frames in a video, many are redundant or even interfere with video understanding. VideoLLMs struggle to process such massive amounts of frames effectively. To address this, previous VideoLLMs <cit.> commonly use uniform frame selection. However, this method does not focus on the frames relevant to the question. In contrast, we select keyframes based on the question. As mentioned in Section <ref>, we use CLIP-Q to leverage question information to select frames relevant to the question for inference. The selected frames are then used for video question-answering. § EXPERIMENTS In this section, we first introduce the experimental setups. We then aim to answer the following questions to verify the effectiveness, efficiency, and robustness of our proposed KeyframeLLM: Q1: Can our CLIP-A and CLIP-QA methods outperform uniform frame selection and other SoTA keyframe selection methods during the training stage? Q2: Can our CLIP-Q method further outperform uniform frame selection during the inference stage? Q3: How efficient and robust is our CLIP-based method compared to previous methods? Q4: Can our method generalize well across other model architectures? Q5: Can we visualize the advantages of our method? §.§ Experimental Settings Datasets. For training videos, we employ the same pre-training video datasets utilized by Video-LLaVA <cit.>. Additionally, we incorporate the Video Instruction Dataset for Video Instruction Tuning from VideoChatGPT <cit.>, which provides a comprehensive resource of video question-answer pairs. This diverse dataset ensures robust training and instruction tuning of our models. For inference and evaluation, we use well-established video datasets including ActivityNet, MSRVTT, MSVD, and TGIF. These datasets are consistent with those used for evaluation in VideoChatGPT <cit.>, providing a reliable basis for performance comparison and validation of our method. Models. Our VideoLLM experiments utilize the SoTA framework, Video-LLaVA <cit.>. For keyframe selection, we chose the pre-trained CLIP model with a patch size of 32 as the encoder due to its superior performance in aligning visual and textual data. Baselines. We compare the performance of KeyVideoLLM with several baseline keyframe selection methods, including uniform frame selection, Katna, and DSNet. These baselines are selected due to their popularity and previous use in similar research, providing a robust comparative analysis for our proposed method. Settings. For Video-LLaVA, we primarily use the hyperparameters from the official repository. For the CLIP model, we choose CLIP-ViT-B/32 to conduct keyframe selection. For the evaluation, we use LLaMA3 8B <cit.> to rate our results. 
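To make the keyframe-selection settings concrete, the sketch below strings together the coarse-to-fine procedure from the Method section. It is an illustration only: decode_frames is a hypothetical helper for decoding a video into PIL images, text_frame_similarity refers to the CLIP scoring sketch shown earlier, and the number of retained keyframes k is an assumed parameter rather than a value specified by the paper.

```python
# Sketch of coarse-to-fine keyframe selection: uniform pre-sampling, CLIP
# scoring against the text (A, QA, or Q depending on the stage), top-k
# selection, and re-ordering in the original temporal order.
import torch

def select_keyframes(video_path, text, cn=32, k=8):
    frames = decode_frames(video_path)                    # hypothetical decoder
    # Coarse stage: uniformly sample cn candidate frames to keep diversity.
    idx = torch.linspace(0, len(frames) - 1, steps=cn).long().tolist()
    candidates = [frames[i] for i in idx]
    # Fine stage: score candidates against the text and keep the top-k frames.
    scores = text_frame_similarity(text, candidates)
    top = torch.topk(scores, k=min(k, len(candidates))).indices.tolist()
    keep = sorted(set(idx[i] for i in top))               # restore temporal order
    return [frames[i] for i in keep]
```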
All experiments are conducted on an 8*A100 NVIDIA GPU machine with a 120-core CPU and 960GB of memory. §.§ Keyframe Selection for Training To address Q1, we compare the performance of KeyVideoLLM (CLIP-A) and KeyVideoLLM (CLIP-QA) with other keyframe selection methods, including uniform selection (Baseline), Katna <cit.>, and DSNet <cit.>, during the supervised instruction tuning stage. These keyframe selection methods are used to select keyframes, which are then utilized to train the model. We employ the VideoLLM framework Video-LLaVA <cit.> for our experiments in this section. For additional experiments, please refer to section <ref>. The results of these comparisons are summarized in Table <ref>. As shown in Table <ref>, KeyVideoLLM (CLIP-QA) consistently outperforms other methods in terms of both score and accuracy across all datasets. This demonstrates that using frames related to both the question and answer can significantly enhance the performance of VideoLLMs during training. KeyVideoLLM (CLIP-A) also achieved strong results, though slightly lower than KeyVideoLLM (CLIP-QA), indicating that the inclusion of answer information is beneficial. Additionally, incorporating more information (question) for keyframe selection yields better outcomes. Katna and DSNet are keyframe selection methods that focus on general key information without specific relevance to the question. These methods do not consistently outperform the baseline, suggesting that non-question-aware keyframe selection methods do not provide a significant advantage over uniform selection. §.§ Keyframe Selection for Inference To address Q2, in this section, we compare the performance of KeyVideoLLM (CLIP-Q) with uniform selection in the inference scenario. Using the models trained in Section <ref>, we fix the model and apply KeyVideoLLM (CLIP-Q) to select frames relevant to the question for VideoLLMs inference. The results of KeyVideoLLM (CLIP-Q) are summarized in Table <ref>. We then compare our approach with uniform frame selection for inference, as shown in Table <ref>. As illustrated in Table <ref>, KeyVideoLLM (CLIP-QA) continues to outperform KeyVideoLLM (CLIP-A), Katna, DSNet, and the baseline. This consistency indicates the effectiveness of our method. Additionally, KeyVideoLLM demonstrates superior performance compared to uniform keyframe selection during inference. By simply changing the keyframes used for inference, our model's performance improves compared to the results shown in Table <ref>. Notably, the performance increase is substantial for the ActivityNet and MSVD datasets, which consist of longer videos. Longer videos present a greater challenge for uniform selection to capture frames relevant to the question, hence the more significant performance boost with our method. Conversely, the improvement is relatively lower for the TGIF and MSRVTT datasets, which contain shorter videos. Furthermore, employing keyframe selection during both the training and inference stages enables the achievement of SoTA results. By focusing on the most relevant frames, KeyVideoLLM reduces data redundancy and enhances effectiveness, leading to superior model performance. §.§ Efficiency and Robustness Analysis To address Q3, we analyze the efficiency and robustness of KeyVideoLLM by comparing it with other keyframe selection methods and a baseline method (without keyframe selection). Our analysis focuses on three key aspects: compression ratio, selection success rate, and selection speed. 1. 
Highest Compression Ratio To quantify the compression ratio achieved by KeyVideoLLM, we use the following formula: Compression Ratio = S_orig/S_comp, (2) where S_orig represents the total size of the video data before applying keyframe selection, and S_comp represents the total size of the video data after applying keyframe selection. A higher compression ratio indicates a more efficient compression method, as it means the model can reduce the data size more significantly while maintaining the necessary information for effective video question-answering. As shown in Figure <ref>, KeyVideoLLM achieves the highest compression ratios compared to Katna and DSNet across five different datasets. The graph illustrates that our model (CLIP-A and CLIP-QA) significantly reduces data size (up to 60 times) while preserving essential information, demonstrating superior computational and storage efficiency. Higher compression ratios indicate more efficient data usage, making our approach highly effective for large-scale video processing tasks. 2. Highest Success Rate Our method also achieves the highest selection success rate across all datasets, as shown in Table <ref>. It consistently outperforms Katna and DSNet, demonstrating its reliability and accuracy in various video scenarios. 3. Fastest Selection Speed Finally, our method boasts the fastest keyframe selection speed, as detailed in Table <ref>. The selection speed (measured in seconds per video) highlights the efficiency of KeyVideoLLM in processing large volumes of video data quickly. This speed advantage further solidifies the practicality of our approach in real-world applications where time efficiency is critical. §.§ Generalizability of KeyframeLLM To address Q4, following the experiment results in <ref>. In this section, we provide additional experimental results to further validate the effectiveness of KeyframeLLM. Specifically, we investigate the impact of using different encoder architectures for our VideoLLM. In the main experiments, we utilized the VideoLLM Video-LLaVA <cit.> as our model. To explore the robustness and generalizability of our keyframe selection methods, we conducted supplementary experiments by replacing the encoder architecture with CLIP <cit.>. We followed the same experimental setup as described in Section <ref> and Section <ref>. We used the CLIP encoder to process the keyframes selected by our methods and trained the VideoLLM accordingly. The datasets used for training and evaluation remain unchanged: VideoChatGPT is used for training, while ActivityNet, MSRVTT, MSVD, and TGIF are used for evaluation. §.§ Qualitative Evaluation To address Q5, in this section, we provide a qualitative evaluation of our method to demonstrate its effectiveness in video selection tasks. As shown in Figure <ref>, we present a comparison between the baseline response, which is generated by uniform frame selection, and the response generated by our KeyVideoLLM (CLIP-Q) model. In Figure <ref>, the question is: "Is the person in the white coat wearing a hat?" The baseline model, due to uniform frame selection, captures only a vague frame, leading to the incorrect response: "Yes, the person in the white coat is wearing a hat." In contrast, our KeyVideoLLM (CLIP-Q) selects a clear and relevant frame, allowing the model to correctly identify that the person in the white coat is not wearing a hat, thus providing the accurate response: "No, the person in the white coat is not wearing a hat." 
We provide an additional qualitative evaluation to further demonstrate our model's effectiveness in video selection tasks. As shown in Figure <ref>, we compare the baseline response, generated by uniform frame selection, with the response generated by our KeyVideoLLM (CLIP-Q) model. In the example depicted, the question is: "What color are the person's clothes in the video?" The baseline model, which selects frames uniformly, fails to provide relevant information, resulting in the response: "The video does not provide information about the color of the person's clothes." In contrast, our KeyVideoLLM (CLIP-Q) model accurately identifies keyframes relevant to the question, providing the correct response: "The person in the video is wearing a black shirt and shorts." These qualitative analyses underscore the superior performance of KeyVideoLLM in understanding and selecting relevant keyframes for accurate video question-answering. The model's ability to leverage Answer and Question-Answer pairs for keyframe selection significantly enhances its accuracy and reliability compared to traditional methods. § CONCLUSION VideoLLMs are emerging as powerful deep learning models designed for video question-answering tasks. Efficient, robust, and effective keyframe selection algorithms are essential for training VideoLLMs, but they remain challenging due to their inherent complexity. This paper presents KeyVideoLLM, a new approach to select keyframes for VideoLLMs by leveraging the text-video frames similarity scores. Experimental results on diverse datasets indicate that KeyVideoLLM significantly improves the performance of VideoLLMs during both the training and inference stages. Furthermore, it consistently outperforms the compared baseline methods in terms of efficiency, effectiveness, and robustness.
http://arxiv.org/abs/2407.03274v1
20240703170047
Using Photoplethysmography to Detect Real-time Blood Pressure Changes with a Calibration-free Deep Learning Model
[ "Jingyuan Hong", "Manasi Nandi", "Weiwei Jin", "Jordi Alastruey" ]
eess.SP
[ "eess.SP" ]
Using Photoplethysmography to Detect Real-time Blood Pressure Changes with a Calibration-free Deep Learning Model Jingyuan Hong, Manasi Nandi, Weiwei Jin, and Jordi Alastruey This work was supported by the Engineering and Physical Sciences Research Council Doctoral Training Partnership Grant [EP/T517963/1] and by the British Heart Foundation [PG/17/50/32903]. Jingyuan Hong is with the Department of Biomedical Engineering, School of Biomedical Engineering and Imaging Sciences, King’s College London, King’s Health Partners, London SE1 7EU, U.K. (e-mail: jingyuan.hong@kcl.ac.uk). Manasi Nandi is with the School of Cancer and Pharmaceutical Science, King’s College London, King’s Health Partners, London SE1 7EU, U.K. (e-mail: manasi.nandi@kcl.ac.uk). Weiwei Jin is with the Department of Biomedical Engineering, School of Biomedical Engineering and Imaging Sciences, King’s College London, King’s Health Partners, London SE1 7EU, U.K. (e-mail: weiwei.jin@kcl.ac.uk). Jordi Alastruey is with the Department of Biomedical Engineering, School of Biomedical Engineering and Imaging Sciences, King’s College London, King’s Health Partners, London SE1 7EU, U.K. (e-mail: jordi.alastruey-arimon@kcl.ac.uk). July 8, 2024 ======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT Blood pressure (BP) changes are linked to individual health status in both clinical and non-clinical settings. This study developed a deep learning model to classify systolic (SBP), diastolic (DBP), and mean (MBP) BP changes using photoplethysmography (PPG) waveforms. Data from the Vital Signs Database (VitalDB) comprising 1,005 ICU patients with synchronized PPG and BP recordings was used. BP changes were categorized into three labels: Spike (increase above a threshold), Stable (change within a ± threshold), and Dip (decrease below a threshold). Four time-series classification models were studied: multi-layer perceptron, convolutional neural network, residual network, and Encoder. A subset of 500 patients was randomly selected for training and validation, ensuring a uniform distribution across BP change labels. Two test datasets were compiled: Test-I (n=500) with a uniform distribution selection process, and Test-II (n=5) without. The study also explored the impact of including second-deviation PPG (sdPPG) waveforms as additional input information. 
The Encoder model with a Softmax weighting process using both PPG and sdPPG waveforms achieved the highest detection accuracy—exceeding 71.3% and 85.4% in Test-I and Test-II, respectively, with thresholds of 30 mmHg for SBP, 15 mmHg for DBP, and 20 mmHg for MBP. Corresponding F1-scores were over 71.8% and 88.5%. These findings confirm that PPG waveforms are effective for real-time monitoring of BP changes in ICU settings and suggest potential for broader applications. Photoplethysmography, Blood pressure monitoring, Deep learning classification model § INTRODUCTION Changes in blood pressure (BP) are critical for understanding physiological conditions in both clinical and non-clinical settings. Real-time changes provide direct insights into the current health status of individuals. In a clinical setting, acute and severe changes in resting BP need for prompt alerts to facilitate timely medical intervention <cit.>. Furthermore, continuous BP monitoring allows for the evaluation of surgical risks <cit.> and the tracking of postoperative complications <cit.>. In daily settings, monitoring BP changes during various activities aids in cardiovascular disease prevention, assessing sleep quality, and supervising exercise across diverse populations <cit.>. Three primary methods are employed to measure BP changes. Clinically, the most accurate involves invasive BP monitoring using catheters inserted into arteries, which allows for real-time BP waveform capture <cit.>. Despite the precision of this method, it can cause considerable discomfort and pose health risks to patients <cit.>. For less invasive alternatives, clinicians use non-invasive continuous monitoring devices or perform multiple measurements with a cuff-based BP monitor <cit.>. While these methods reduce patient discomfort, their reliability and accuracy can be challenged <cit.>. In nonclinical environments, digital BP monitors are commonly used for home monitoring <cit.>. These devices measure BP at specific intervals to detect fluctuations. However, they are unable to provide accurate or continuous beat-to-beat BP readings, limiting their ability to capture sudden BP changes <cit.>. An emerging alternative is the use of Photoplethysmography (PPG) signals. PPG, a technique that detects changes in peripheral blood volume, offers a novel method for predicting cardiovascular events <cit.>. Although the use of PPG signals to estimate absolute BP values has been explored for many years, developing models with high accuracy has been challenging, and most models require a moderate amount of the subject's baseline signal data for personalized calibration together with demographic information such as the subject's age <cit.>. However, rather than estimating absolute BP values, the categorical prediction of BP changes provides a promising pathway towards unobtrusive detection using continuous PPG signals based on the relationship between PPG and change of BP <cit.>. This study aims to detect BP changes from PPG signals using time-series classification deep learning models. Four models were trained and tested on 1,005 ICU patients from the Vital Signs Database (VitalDB), which includes time-aligned systolic (SBP) and diastolic (DBP) BP values together with continuous PPG signals for each patient. BP changes were categorized into significant increases, normal range changes, and significant decreases, with thresholds set from ±5 mmHg to ±45 mmHg in ±5 mmHg increments. 
Additional analyses were conducted using the Encoder model, which outperformed the other three models studied. § METHODOLOGY §.§ Data VitalDB contains continuous PPG and BP signals both with a sampling rate of 125 Hz for 2,938 ICU patients <cit.>. In this dataset, finger PPG and left radial BP waveforms were measured by patient monitors (Tram-Rac 4A, GE healthcare) over periods ranging from 10 seconds to 10 hours <cit.>. For convenience, PPG and BP for each patient are provided split into non-overlap 10-second segments. In this study, only patients with recording times longer than 30 minutes were included, resulting in 2,131 patients, with characteristics shown in Table <ref>. §.§ Calculation of BP changes Changes in three types of BP were calculated. Single SBP and DBP values were calculated as the mean SBP and DBP from each 10-second BP recording. The MBP value was then calculated from the SBP and DBP values using the formula (SBP + 2DBP)/3 <cit.>. For each patient, changes in each of the three BP types between two time points, Δ BP, were calculated as: Δ BP = BP_i+j-BP_i i= [1,2,3,...,N-1], j=[1,2,3,...,N-i] where i is the index of an initial BP reading, j is the number of subsequent readings after i at which the BP was measured, and N is the total number of BP readings. This formula captures BP changes over varying time intervals for each patient. BP changes were categorized into three labels: "Spike", "Stable", and "Dip". These labels were defined as a significant increase, maintenance within a normal range, and a significant decrease, respectively. Initial thresholds for these changes were set at 30 mmHg for SBP, 15 mmHg for DBP, and 20 mmHg for MBP based on the 75% confidence interval of calculated changes in SBP, DBP, and MBP distributions. Increases greater than 30 mmHg for SBP, 15 mmHg for DBP, or 20 mmHg for MBP were classified as Spike. Changes within ±30 mmHg for SBP, ±15 mmHg for DBP, or ±20 mmHg for MBP were classified as Stable. Decreases greater than 30 mmHg for SBP, 15 mmHg for DBP, or 20 mmHg for MBP were classified as Dip. These thresholds were adjusted for further analysis to a range from 5 to 45 mmHg for SBP, 5 to 35 mmHg for DBP, and 5 to 40 mmHg for MBP, in 5 mmHg increments each. The maximum thresholds for the three BP types differ due to insufficient data, with the frequency of changes exceeding 45 mmHg in DBP and MBP being lower than in SBP. An example of PPG signals with different SBP labels is shown in Fig. <ref>. §.§ Classification models Detecting BP change categories from PPG waveforms is a typical time-series classification task, for which four deep learning models were used in this study <cit.>. Multi-layer perceptron (MLP) is a basic deep learning architecture. In this study, the MLP model comprised four fully connected layers (Fig. <ref>a). The first layer flattened all input channels, while the following layers each consisted of 500 neurons with a parametric rectified linear unit (PReLU) activation function and a dropout layer. The final layer classified the outputs into three categories. Convolutional neural network (CNN) models are effective for extracting deep features from time-series data. In this study, the CNN model included three convolution blocks (Fig. <ref>b), each followed by an instance normalization layer and a PReLU activation function, a global average pooling layer, and a final fully connected layer that classified into three categories. 
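Returning to the BP-change labelling defined above, the following minimal sketch (our illustration; function names and example values are hypothetical) computes all pairwise changes BP_{i+j} − BP_i for one patient and assigns the Spike/Stable/Dip labels using the paper's initial SBP threshold of 30 mmHg.

```python
import numpy as np

def mean_bp(sbp: np.ndarray, dbp: np.ndarray) -> np.ndarray:
    """MBP = (SBP + 2*DBP) / 3, per 10-second segment."""
    return (sbp + 2 * dbp) / 3

def label_bp_changes(bp: np.ndarray, threshold: float) -> dict:
    """All pairwise changes BP_{i+j} - BP_i and their Spike/Stable/Dip labels."""
    n = len(bp)
    changes, labels = [], []
    for i in range(n - 1):            # i = 1..N-1 in the paper (0-based here)
        for j in range(1, n - i):     # j = 1..N-i
            delta = bp[i + j] - bp[i]
            changes.append(delta)
            if delta > threshold:
                labels.append("Spike")
            elif delta < -threshold:
                labels.append("Dip")
            else:
                labels.append("Stable")
    return {"delta": np.array(changes), "label": np.array(labels)}

# Example with the paper's initial SBP threshold of 30 mmHg:
sbp_per_segment = np.array([118.0, 122.5, 156.0, 119.0, 84.0])
out = label_bp_changes(sbp_per_segment, threshold=30.0)
```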
Residual network (ResNet) modifies a traditional CNN model by adding shortcut residual connections between convolutional layers, which helps avoid the vanishing gradient problem and improves the ability of the network to learn from deep architectures. In this study, the ResNet model consisted of three sequential residual blocks (Fig. <ref>c), each containing three CNN blocks (a convolutional layer, an instance normalization layer, and a PReLU activation function), followed by a global average pooling layer and a final fully connected layer. Encoder (from the Transformer architecture) combines a CNN model with an attention mechanism. In this study, the Encoder model contained three CNN blocks (each with a convolutional layer, an instance normalization layer, a PReLU activation function, and a dropout layer), an attention mechanism, and a final fully connected layer (Fig. <ref>d). The attention mechanism used a Softmax weighting process, starting by applying the Softmax function to the output of the last CNN block to generate a set of normalized weights. These weights then scaled the corresponding features, emphasizing informative parts and diminishing less relevant ones. The weighted features were summed to produce an attention-augmented output, helping the model focus on specific and relevant parts of the time-series inputs. The architecture details and optimization hyperparameters of these four models are summarized in Table <ref>. All models in this study were built using the PyTorch framework and trained on an Intel Xeon W-2195 CPU (2.3 GHz) with an NVIDIA Tesla V100 GPU. §.§ Input data types Given the strong correlation between the second-derivative PPG (sdPPG) waveform and BP <cit.>, this study compared three input combinations to evaluate the impact of incorporating the sdPPG waveform in model training. Each combination was applied to detect changes in SBP, DBP, and MBP. The first input type included only the PPG waveform (PPG-waveform). This input type provided PPG waveforms at time steps i and i+j (Fig. <ref>b) and served as the basic input for evaluating the classification performance of the proposed four models. The second input type integrated five features derived from the sdPPG waveform and added them to the PPG-waveform input (Waveform-feature). This input type included the PPG waveform and the extracted sdPPG features at time steps i and i+j (Fig. <ref>b). The definition of these five features is shown in Table <ref> and their selection process is detailed in Appendix <ref>. The third input type included the PPG waveform along with the sdPPG waveform (PPG-sdPPG-waveform). Both the PPG and sdPPG waveforms were provided at time steps i and i+j (Fig. <ref>b). Min-max normalization was applied to the sdPPG waveform to match its amplitude range with the primary PPG input. To standardize all PPG waveforms, the original 10-second PPG recordings were truncated to 7 seconds to ensure each signal began with a complete cardiac cycle. From this 7-second baseline, the impact of input length on detection performance was evaluated by testing the Encoder model with PPG-sdPPG-waveform input at 3, 5, and 7-second lengths. Besides the PPG input, the BP value at time step i was also fed into the model as an extra input (Fig. <ref>c). An ablation study was conducted for the Encoder model with the PPG-sdPPG-waveform input type to assess the necessity of using initial BP values as an extra input for training, and to quantify the effects of removing the initial BP on classification outcomes.
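The Softmax weighting step described for the Encoder can be sketched as follows. This is only an illustration of the idea: the paper does not specify the exact implementation, and the learned scoring layer, tensor shapes, and channel count used here are our assumptions.

```python
import torch
import torch.nn as nn

class SoftmaxAttentionPooling(nn.Module):
    """Weights each time step of a CNN feature map with Softmax-normalized scores
    and sums them, producing an attention-augmented feature vector."""
    def __init__(self, channels: int):
        super().__init__()
        self.score = nn.Linear(channels, 1)   # one relevance score per time step (assumed)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, time, channels), e.g. the output of the last CNN block
        weights = torch.softmax(self.score(feats), dim=1)   # (batch, time, 1)
        return (weights * feats).sum(dim=1)                  # (batch, channels)

# Example: pool features for 8 signals, 875 time steps (7 s at 125 Hz), 64 channels
pooled = SoftmaxAttentionPooling(64)(torch.randn(8, 875, 64))
classifier = nn.Linear(64, 3)          # Spike / Stable / Dip
logits = classifier(pooled)
```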
§.§ Training, validation, and test The training, validation, and test pipeline for all models and input types is illustrated in Fig. <ref>. To manage resource constraints and avoid over-fitting, we randomly selected 500 patients from the total of 2,131 patients (Fig. <ref>a). A sampling process was then implemented to ensure a uniform distribution by selecting an equal number of segments across the three categories (Table <ref>): Spike (900,000 segments), Stable (900,000 segments), and Dip (900,000 segments). This selection aimed to train the model without bias towards any of the three output categories. Two test datasets were created as follows (Fig. <ref>a): Test dataset I (Test-I) involved selecting 500 patients from the total of 2,131 patients, excluding those in the training and validation dataset. From all test segments, 144,500 segments were selected to match the uniform distribution sampled in the training and validation dataset (Table <ref>). Test dataset II (Test-II) involved selecting 5 patients from the total of 2,131 patients, excluding those from the training and validation dataset and the Test-I dataset, and using all their segments (Table <ref>). The characteristics of these datasets are shown in Table <ref>. For the training and validation dataset, 80% was allocated for training and 20% for validation, with a five-fold cross-validation method applied to the training process of all models. The model with the best performance was retained for further analysis. Each model was trained using the PPG-waveform input type as described in Section <ref> (Fig. <ref>b). Furthermore, the Encoder model (best performer) was trained and tested for the three input types described in Section <ref> (Fig. <ref>b). For each input type, the Encoder was trained with or without the initial BP value at time step i. This value was incorporated into the model as supplementary information, using a linear layer to concatenate it with the feature map from the last layer before inputting it into each layer of the model (Fig. <ref>c). The model produced three output categories that were evaluated using accuracy and F1-score metrics (Fig. <ref>d), as described next. §.§ Evaluation metrics Cross-entropy was employed as the loss function, converting the probability that the model assigns to the true label into a loss value. Training aimed to minimize this loss, ensuring that the predicted probability distribution of the model closely approximates the true label distribution, ℒ=H(p,q)=-∑^C_x=1p(x)log q(x) where ℒ is the value of the cross-entropy loss function, H denotes the cross-entropy used to measure the discrepancy between the two probability distributions, x is an index representing different categories, C represents the total number of categories, p(x) is the true label probability for class x, and q(x) is the predicted probability for class x. Two metrics were used for evaluating the classification results of the proposed models. Accuracy measured the proportion of samples that were correctly predicted by the model relative to the total number of samples. The F1-score served as a balanced indicator to assess both the accuracy and robustness of the model, considering both precision and recall.
The mathematical definitions of these metrics are as follows: Accuracy = (TP+TN)/(TP+TN+FP+FN) Precision = TP/(TP+FP) Recall = TP/(TP+FN) F1-score = 2 × Precision × Recall/(Precision + Recall) F1-score_total = (1/C)∑^C_x=1 F1-score_x where TP, TN, FP, and FN represent the counts of true positives, true negatives, false positives, and false negatives, respectively. § RESULTS §.§ Model evaluation The Encoder model achieved the highest accuracy and F1-score values across the three BP types for the two test datasets studied (Table <ref>). Changes in MBP were detected more accurately compared to changes in SBP and DBP in both the Test-I and Test-II datasets. All models and pressure types showed improved performance on the Test-II dataset compared to the Test-I dataset. Overall, the Encoder model considerably improved detection performance. Compared to the MLP model on the Test-I dataset, BP change detection accuracy increased by ≥5.2% and the F1-score by ≥5.1% for all pressure types. On the Test-II dataset, these increases were ≥3.7% in accuracy and ≥2.6% in F1-score. §.§ Input evaluation The classification performance of the Encoder model for both test datasets improved when the sdPPG waveform was used as an additional input (PPG-sdPPG-waveform), compared to using only the basic PPG waveform (PPG-waveform) or the PPG waveform combined with five sdPPG features (Waveform-feature). The PPG-sdPPG-waveform input type yielded the highest accuracy and F1-score for detecting changes in all BP types on Test-I and in SBP and DBP on Test-II. Consequently, the PPG-sdPPG-waveform input was selected for subsequent analysis due to its superior performance. Using the initial thresholds for changes in SBP, DBP and MBP based on the 75% confidence intervals, most variations in the reference measured BP when setting the initial BP at time step zero fell within the ‘Stable’ range (green area). This was the most frequent classification label detected by the Encoder model with the PPG-sdPPG-waveform input on the Test-II dataset (Fig. <ref>). The model accurately detected the Spike and Dip labels when some portion of the reference BP suddenly increased or decreased beyond the thresholds for all BP types. However, it occasionally mislabeled changes, especially when BP fluctuations were within the set thresholds. Fig. <ref> shows the results for one of the five patients in the Test-II dataset, and Supplementary Material Figs. <ref> to <ref> show similar correspondences between reference values and detected BP change labels for the remaining four patients at the same thresholds. All patients showed similar trajectories for the three BP reference values. A decrease in the length of the PPG input for the Encoder model, with a PPG-sdPPG-waveform input type, reduced the detection accuracy and F1-score across all BP types on Test-I by less than 1.1% and 3.2%, respectively. However, these metrics increased up to 1.5% and 0.8%, respectively, on Test-II (Table <ref>). The ablation of the initial BP value for the Encoder model with the PPG-sdPPG-waveform input type decreased the classification accuracy and F1-score across all BP types by less than 0.9% and 1.0%, respectively, for the Test-I dataset, and 1.5% and 1.3%, respectively, for the Test-II dataset (Table <ref>). §.§ Threshold evaluation The accuracy of detection labels for changes in SBP, DBP and MBP produced by the Encoder model with PPG-sdPPG-waveform input decreased with increasing thresholds on the Test-I dataset, while it increased on the Test-II dataset (Fig. <ref>).
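The evaluation metrics defined above can be computed from predicted and reference labels as in the short sketch below (our illustration; the toy labels are placeholders). The total F1-score is the macro average of the per-class F1-scores over the three categories.

```python
import numpy as np

def per_class_counts(y_true, y_pred, cls):
    tp = np.sum((y_pred == cls) & (y_true == cls))
    fp = np.sum((y_pred == cls) & (y_true != cls))
    fn = np.sum((y_pred != cls) & (y_true == cls))
    return tp, fp, fn

def accuracy_and_macro_f1(y_true, y_pred, classes=("Spike", "Stable", "Dip")):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    acc = float(np.mean(y_true == y_pred))        # overall accuracy
    f1s = []
    for cls in classes:
        tp, fp, fn = per_class_counts(y_true, y_pred, cls)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall) if precision + recall else 0.0)
    return acc, float(np.mean(f1s))               # F1-score_total = mean of per-class F1

acc, macro_f1 = accuracy_and_macro_f1(
    ["Spike", "Stable", "Dip", "Stable"], ["Spike", "Stable", "Stable", "Stable"]
)
```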
On the Test-I dataset, accuracy peaked at about 75% with a 5-mmHg threshold, but decreased to around 60% at higher thresholds (45 mmHg for SBP, 35 mmHg for DBP, and 40 mmHg for MBP). Conversely, the model showed poor classification results at a 5-mmHg threshold on Test-II, with an average accuracy of 60% for all BP types, while the detection accuracy increased and approached 100% at the maximum thresholds. It is noteworthy that, since there was insufficient data on changes in DBP and MBP greater than 35 mmHg and 40 mmHg, respectively, the maximum threshold for DBP was tested at 35 mmHg and for MBP at 40 mmHg. The F1-score varied similarly to detection accuracy across thresholds for all BP types (Supplementary Material Fig. <ref>). Supplementary Material Figs. <ref> to <ref> compare classified BP changes for all five patients in the Test-II dataset with reference values, under varying threshold settings. The proportion of classified labels that fell within the ‘Stable’ range increased with increasing threshold for all BP types. For all threshold values, the model accurately detected the Spike and Dip labels when the reference BP suddenly increased or decreased beyond the thresholds for all BP types, detecting changes in blood pressure earlier as the threshold values decreased. Mislabeled changes often occurred when BP fluctuations were within the set thresholds, but the model could recognize the direction of the fluctuation even if it did not exceed the threshold value. § DISCUSSION Our study developed a calibration-free classification model that uses only the PPG signal and an initial BP value to label changes in SBP, DBP, and MBP over hours, thereby achieving the goal of real-time BP monitoring. This project involved a three-category classification task with time-series input data. When initially employing a multi-layer perceptron (MLP), classification accuracies and F1-scores were above 62% for all BP types across the two test datasets (Test-I and Test-II), suggesting a correlation between PPG morphology and changes in BP. More complex models, including a convolutional neural network (CNN), a residual network (ResNet), and an Encoder model based on the Softmax weighting process, improved performance. The Encoder model, in particular, showed significant improvements due to its ability to focus on highly relevant parts of the time-series data. As a result, only this model was further studied to assess the effect of different input types, ablation of initial BP values, and threshold values. Incorporating second-derivative PPG (sdPPG) waveforms further improved the classification results, indicating that sdPPG contains valuable information related to BP changes. This aligns with previous findings that sdPPG is useful for accurately estimating absolute BP <cit.> and assessing vascular aging and arterial stiffness <cit.>. Directly inputting waveform data into the first CNN block of the Encoder model proved more effective than manually extracting features and feeding them into each layer of the model through linear layers. The ablation study showed that incorporating initial BP values helps improve detection accuracy, consistent with previous research on absolute BP estimation <cit.>. Contrasting trends in detection accuracy with increasing threshold values between the Test-I and Test-II datasets were attributed to their different data sampling strategies.
During the sampling process for the training and validation dataset, patient selection was adjusted based on changes in thresholds to ensure the same volume of data for each classification label. The Test-I dataset used uniform distribution sampling to the training and validation dataset, resulting in a narrow Stable region with small thresholds. Consequently, the narrow Stable region exhibited more consistent or similar patterns of BP changes, making it easier to differentiate from Spike or Dip states, improving classification outcomes. In contrast, the Test-II dataset did not involve selective sampling and utilized the complete set of patient data, resulting in a distribution of BP changes that approximated a normal curve, with most data centred in the middle. When the threshold was increased, enlarging the Stable range, most of the data fell within this expanded Stable range. This shift allowed the same model to achieve better classification performance, as most of the data were classified as Stable. Compared to the threshold settings of BP changes used in this study, various real applications employed similar thresholds to identify significant BP changes. In clinical settings, ICU patients with acute severe hypertension (SBP/DBP > 180/110 mmHg) were recommended to have their SBP reduced by no more than 25% within the first hour, aiming for 160/100–110 mmHg over the next 2–6 hours <cit.>. Therefore, significant BP changes are defined as at least 20 mmHg for SBP and 10 mmHg for DBP for effective monitoring. Similarly, for acute ischemic stroke patients, a decrease in SBP of more than 26 mmHg within 4 hours after admission is an important indicator for evaluating long-term outcomes <cit.>. Outside clinical settings, a 20-mmHg increase in nighttime SBP has been linked to increased cardiovascular disease (CVD) event risk <cit.>, highlighting the broader implications of detecting BP changes for preventing CVDs. Overall, the thresholds for BP changes slightly vary across different contexts but generally fall within a 30-mmHg range for SBP, or are dynamically adjusted based on baseline BP to fulfill detection requirements. The Encoder model could accurately detect BP changes for all BP types, despite occasional lag errors and threshold misjudgements compared to the actual measured BP values. When the BP suddenly dropped below the setting threshold and then returned to the Stable range, the model outputted the Dip state after the BP had returned to Stable. This could be due to the hysteresis effect between PPG and BP <cit.> or measurement errors in PPG <cit.>. Threshold misjudgement occurred because the model effectively detected sudden changes in BP but was less accurate in estimating the absolute BP values. As a result, the model outputted Spike or Dip states even when BP values had not actually reached the threshold. The trained Encoder model detected BP changes across patients without a calibration process involving demographic data, such as age and sex, or PPG data from the test datasets to fine-tune the trained model, departing from previous studies <cit.>. This suggests that PPG signals contain BP change information unaffected by demographic factors and, therefore, studies on estimating BP values using PPG can further explore the potential of calibration-free BP estimation models. This study has several limitations that need to be addressed in future research. 
Due to computational limitations, only a portion of patients from the entire VitalDB dataset was used for training, validation, and testing. Although the sampling process was randomized, more accurate results might have been obtained if the entire dataset had been used for the model training. In addition, VitalDB only recorded physiological signals in ICU patients, meaning that most BP changes were induced by medical interventions or the patients' underlying diseases. Therefore, the applicability of the model trained on this dataset to the healthy population requires further investigation. All models proposed in this study were suitable for time-series classification tasks and have relatively simple structures. It is worth noting that the aim of this study was not to develop a novel model specifically for real-time detection of BP changes; instead, the goal was to use existing models to achieve this purpose. The codes for training and testing all models and generating all datasets are freely available on GitHub [link will be added if the paper is accepted]. § CONCLUSION This study developed an Encoder-based model that uses only PPG signals and an initial BP value to continuously and in real-time classify BP changes in ICU patients, without individual calibration. It achieved high accuracy in detecting changes in SBP, DBP, and MBP, demonstrating potential for real-time clinical BP monitoring. The model's simple architecture allows for future investigations of more complex time-series classification models. Testing on broader datasets, including healthy cohorts, is needed to assess wider applicability. § The feature selection for the second-derivative PPG (sdPPG) was conducted using an in-silico dataset comprising 4,374 virtual subjects <cit.>. The PulseAnalyse algorithm was employed to extract 40 features from the in-silico PPG, first-derivative PPG (dPPG), and sdPPG signals <cit.>. These features were analyzed for correlation with systolic blood pressure (SBP) to rank their correlation coefficients. The top five correlated features, all derived from sdPPG, were selected for the input analysis. The details of these selected features are presented in Table <ref>. § REFERENCES ieeetr
http://arxiv.org/abs/2407.02028v1
20240702075230
Why does in-context learning fail sometimes? Evaluating in-context learning on open and closed questions
[ "Xiang Li", "Haoran Tang", "Siyu Chen", "Ziwei Wang", "Ryan Chen", "Marcin Abram" ]
cs.CL
[ "cs.CL", "cs.AI", "cs.IR", "cs.LG" ]
Why does in-context learning fail sometimes? Evaluating in-context learning on open and closed questions Xiang Li, Haoran Tang, Siyu Chen, Ziwei Wang, Ryan Chen, and Marcin Abram ========================================================================================================== § ABSTRACT We measure the performance of in-context learning as a function of task novelty and difficulty for open and closed questions. For that purpose, we created a novel benchmark consisting of hard scientific questions, each paired with a context of various relevancy. We show that, counter-intuitively, a context that is more aligned with the topic does not always help more than a less relevant context. This effect is especially visible for open questions and questions of high difficulty or novelty. This result reveals a fundamental difference between the treatment of closed-form and open-form questions by large language models and shows a need for a more robust evaluation of in-context learning on a variety of different types of questions. It also poses a new question of how to optimally select a context for large language models, especially in the context of Retrieval Augmented Generation (RAG) systems. Our results suggest that the answer to this question can be highly application-dependent and might be contingent on factors including the format of the question, the perceived difficulty level of the questions, and the novelty or popularity of the information we seek. § INTRODUCTION Despite their indisputable successes <cit.>, Large Language Models (LLMs) often struggle to answer challenging questions <cit.>. While they can achieve superhuman accuracy on many benchmarks <cit.>, they also suffer from hallucinations <cit.>, lack of coherence <cit.>, and are prone to cognitive errors <cit.>. To make the difficult situation even worse, it is not always easy to detect mistakes committed by LLMs since their responses are often presented in a way that emulates correct and coherent answers <cit.>. For practical reasons, many existing benchmarks only test the ability to answer either closed <cit.> or easy-to-verify questions, e.g., regarding common knowledge <cit.> or questions that can be algorithmically verified <cit.>. Another challenge concerns domain generalization and domain shift problems, resulting in the need to constantly update machine learning models to account for the evolution of various trends in the data <cit.>. However, improving the performance of pre-trained LLMs for specific tasks by fine-tuning is both expensive <cit.> and technically challenging <cit.>. While some techniques like Low-Rank Adaptation (LoRA) can reduce the cost of training <cit.>, it does not solve the main issue, namely, how to allow LLMs to leverage new pieces of information that were not a part of the initial training corpus <cit.>.
One approach to the issue might be in-context learning <cit.>, where LLMs effectively learn to solve a given problem leveraging a limited number of examples without updating the model parameters. Namely, in-context learning incorporates question-solution pairs in the input prompt, allowing LLMs to detect the logic and patterns of those examples, subsequently improving the LLMs output accuracy. It enables LLMs to acquire new knowledge in the inference time and utilize it in subsequent responses. This technique significantly reduces the complexity of improving the LLMs performance compared to alternative approaches such as fine-tuning <cit.>. It should also be noted that the effectiveness of the popular Retrieval-Augmented Generation (RAG) techniques relies heavily on the strength of in-context learning <cit.>, as discussed later. In this paper, we focused on the question of how various types of context improve the effectiveness of in-context learning when answering challenging questions. We noticed a surprising behavior. Namely, depending on the difficulty and novelty of the question, and depending on the fact whether the question is of the open or closed type, the relation of the measured performance of the model to both the perceived and quantified relevancy of the context varies. Notably, the measured in-context learning performance of GPT-4 was positively correlated to context relevancy in two benchmarks with closed-form questions but negatively correlated in our benchmark with open-form questions, indicating different utilization of context depending on the form of the received questions. In the next sections, we introduce our novel dataset, which comprises 160 unique question-response pairs from the fields of physics and computer science with varying levels of difficulty. For the purpose of evaluation, each question is accompanied by one of four types of context (including no context to serve as a control group) and paired with a generated answer from GPT-4. In the subsequent sections, we detail our grading scheme and present the results aggregated from each of our graders. Next, we compare our findings with the existing work by <cit.>, highlighting a notable discrepancy in the measured effectiveness of the context. To elucidate this difference, we delve deeper into the nature of the problem, discovering that the main impact comes from the open or closed form of the questions, with additional effects related to the difficulty or novelty of those queries. To further strengthen our analyses, we then compare the performance improvement associated with in-context learning across a range of context relevancy using two additional close-ended question datasets, MetaICL <cit.> and NephSAP <cit.> and we contrast the results with our findings harvested with the help of our open-ended question dataset. Following this, in the discussion section, we discuss the impact of our work, especially in the context of the RAG systems, future research directions, and other methods that enhance LLM performance § RELATED WORK Large Language Models. LLMs have shown remarkable capabilities in various tasks, including code generation <cit.>, text summarization <cit.>, and database query optimization <cit.>. They demonstrate a surprising ability to perform in-context learning, where an LLM “learns” to do a task simply by conditioning on a prompt containing input-output examples, achieving state-of-the-art (SOTA) results on various benchmarks. 
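To make this concrete, the sketch below shows one way to pack input-output demonstrations into a prompt and query a chat model. It is our illustration, not code from the cited works; the demonstration strings, helper names, and use of OpenAI's Python client are assumptions.

```python
from openai import OpenAI  # assumes the `openai` Python package is installed

def build_icl_prompt(demonstrations, question):
    """Concatenate input-output demonstrations ahead of the test question."""
    parts = []
    for demo_q, demo_a in demonstrations:
        parts.append(f"Question: {demo_q}\nAnswer: {demo_a}\n")
    parts.append(f"Question: {question}\nAnswer:")
    return "\n".join(parts)

# Hypothetical demonstration pair and test question:
demos = [("What is 2 + 2?", "4")]
prompt = build_icl_prompt(demos, "What is 3 + 5?")

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4-1106-preview",            # model version used later in the paper
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```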
However, there has been little understanding of how the model leverages the context and what makes in-context learning work. In addition, their performance significantly depends on the contextual information provided and, as discussed in this paper, on the form and type of the queries. In-Context Learning. In-context learning has been a focal point in recent research. Unlike traditional fine-tuning methods, in-context learning adapts models to unseen tasks by incorporating examples directly into the input context, as highlighted by <cit.>. <cit.> discussed how in-context learning can be understood as implicit Bayesian inference, where models infer latent concepts to generate coherent responses. Techniques such as chain-of-thought prompting <cit.> have shown significant improvements in reasoning tasks. Recent frameworks like OpenICL <cit.> have further streamlined the implementation of in-context learning by providing unified and flexible tools for integrating various retrieval and inference methods. Much recent research focuses on the example selection strategies of in-context learning. One of the most common strategies is to select examples for demonstration based on similarity in the embedding space <cit.>. In-context learning seems robust to label noise, as indicated by the work of <cit.>, in which the authors show that demonstrations, even ones with randomly shuffled labels, can still significantly improve the LLM's performance on the MetaICL dataset. Evaluation Benchmarks. Benchmarking is essential for understanding LLM performance across different domains. Existing benchmarks like AGIEval <cit.>, ChenLLMBench <cit.>, SCIEval <cit.>, PIXIU <cit.>, and MME <cit.> provide comprehensive datasets for evaluating LLMs. While these benchmarks are useful for understanding the general capabilities of LLMs, they do not capture the complexity of more open-ended and context-sensitive queries. Here lies the added value of our work: we believe the novel open-question validation set we created fills that gap. § ORIGINALITY AND GENERAL IMPACT OF THE WORK ASSESSMENT Originality. In this paper, we argue that closed questions, such as multiple-choice or fill-in-the-blank formats, do not adequately reflect the challenges posed by open questions that require deep understanding and synthesis of information from diverse contexts. While <cit.> have shown that context significantly affects LLM performance, they have not quantified how different levels of context relevancy impact responses to different types of questions. Our research addresses this gap by creating a novel benchmark that focuses on open, challenging questions. These questions are paired with various types of contexts to systematically evaluate how context affects LLM performance. Impact of the paper. Furthermore, our work suggests areas for improving the performance of Retrieval-Augmented Generation (RAG). Current RAG studies focus on providing context during model inference. Given our observation of the inconsistent relationship between the relevance of context and model performance for different question types (open-form and closed-form), we believe that the context retrieved by comparing vector similarity using RAG may not always correlate with the most useful context for enhancing LLM inference performance and does not mitigate issues such as hallucinations and logic errors.
We propose that the type of context selected should be tailored to the attributes of the type of questions with several practical propositions of the retrieval regions outlined in the discussion. § IS MORE RELEVANT CONTEXT ALWAYS BETTER? §.§ Novel question bank and evaluation methodology To investigate the relationship between the relevance of context and the performance of large language models (LLMs), we created an open-form questions dataset comprising physics and computer science questions of varying difficulty levels and originality. Next, we prepared contexts with four different levels of relevancy for each question in our dataset. The selected questions cover the following areas: quantum mechanics, physics for life science, electromagnetism, classical mechanics, and computer science. Solutions usually involve a combination of complex calculations and the application of conceptual knowledge. Each question is categorized under one of the three different difficulty levels: easy, medium, and hard. The difficulty of the question is defined by the grader according to their perceived complexity of the question. Additionally, each question is also categorized under one of three originality categories: known, paraphrased, and original. Known questions can be found online or in textbooks, paraphrased questions are modified versions of known questions, and original questions were handcrafted by the authors of this paper. For each question, we created a ground truth answer for scoring reference and four context types with different levels of relevance. The four context types are: (1) “no context” to serve as a control group, (2) “irrelevant context”, which consists of text on topics that do not match the subject of the question, (3) “vague context”, which incorporates some topics or keywords related to the question, and (4) “relevant context”, which provides reasoning context for the question, or answer to a highly related question. Next, for each unique pair of question-context, we generated a response employing the OpenAI's gpt-4-1106-preview model. After retrieving the responses, we constructed 160 question-response pairs, each accompanied by the corresponding ground truth. Aware that human grading can be subjective, we decided that each question would be evaluated by six independent graders using a pre-defined scoring sheet. This gave us 960 evaluation responses in total. The Supplementary Material includes examples of the questions and context types, as well as the evaluation sheets. §.§ Evaluation Our evaluation system comprised three main categories, Completeness and Relevancy (5 points), Logic and Reasoning (5 points), and Truthfulness (understood as lack of hallucination) (5 points). In addition, graders had the option to identify specific problems in the responses, such as hallucinations, omission, irrelevant, calculation error, and logic error. They could also highlight portions of the responses as incorrect, correct, or irrelevant. An open response section was provided for graders to give comments and feedback about the generated responses. Finally, graders were asked to rate their confidence in their own grading. These options allowed us to gain deeper insights into the grading process and to assess the quality of the generated responses in detail. A screenshot of the scoring interface can be found in the Supplementary Material. Each grader may have different biases and varying levels of expertise. 
To enhance the accuracy and reliability of our evaluation, we ensured that all graders assessed all 160 questions. This approach was essential for obtaining consistent and accurate results. By having multiple graders evaluate each response, we mitigated individual biases and ensured a more comprehensive assessment. This method captured a broader range of perspectives and expertise, leading to a more robust and reliable evaluation of the generated responses. As demonstrated later, this comprehensive grading significantly improved the accuracy and consistency of our findings. §.§ Results §.§.§ Context Relevance To illustrate the correlations between the context types and the quality of the corresponding generated responses, in Fig. <ref> panel A, we show the raw average scores of each context type for each grader. Notably, the results are rather noisy, with each grader having an individual tolerance for different types of errors, resulting in different reference levels for each of them. By design, each question was evaluated by each grader. This additional redundancy allows us to standardize the scores for each grader and then average them, resulting in reduced variance in the final results. This aggregation procedure is depicted in Fig. <ref> panel B. As a result, although the raw scores displayed differences in trends and values across all three grading rubrics, a clear trend appeared after we applied the aggregation procedure, as depicted in Fig. <ref> panel C. Counter-intuitively, a higher standardized average score was associated with no context, and the lowest score with the relevant context. §.§.§ Difficulty Levels and Originality Types To investigate how the difficulty of questions affects the quality of generated responses, we compared the results across three difficulty levels (easy, medium, and hard) for each of the four context types, as shown in Figure <ref>, panel A. We can observe a clear trend of decreasing scores as the difficulty of the questions increased from medium to hard, indicating that GPT-4's performance declines with higher question difficulty. This also indicates that human-perceived difficulty of the question was in fact, correlated with the factual difficulty experienced by GPT-4, a result interesting on its own. For easy and medium-difficulty problems, GPT-4 generated responses with similar scores, indicating that the alignment between the human-perceived and machine-perceived difficulty has its own limits. In Figure <ref>, panel B, we show the comparison between the aggregated standardized average score for the different levels of originality types for each context type. It is evident that GPT-4 scores highest for known questions, likely because these questions were part of its training data, and therefore GPT-4 has a higher chance to answer them correctly. Interestingly, the score for known questions given irrelevant context is twice as high as that for relevant context. This suggests that irrelevant context might be more helpful than relevant context for known questions, at least for the open type of question, as measured here. §.§.§ Result comparison In this section, we combined the standardized scores from all graders and compared them across different context types. Our results indicate that, on average, the responses generated with no additional context or with the help of irrelevant context are of higher quality than the responses generated for queries incorporating highly relevant context. This result is in striking difference to results of <cit.>. 
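A minimal sketch of the per-grader standardization and averaging described above is given below; it is our illustration, and the column names and toy scores are hypothetical.

```python
import pandas as pd

# Hypothetical long-format table: one row per (grader, question, context_type) score.
scores = pd.DataFrame({
    "grader":       ["g1", "g1", "g2", "g2", "g3", "g3"],
    "context_type": ["relevant", "no_context"] * 3,
    "score":        [9.0, 11.0, 13.0, 14.0, 6.0, 8.0],
})

# Z-score each grader's scores against that grader's own mean and spread,
# then average the standardized scores per context type.
scores["z"] = scores.groupby("grader")["score"].transform(
    lambda s: (s - s.mean()) / s.std(ddof=0)
)
aggregated = scores.groupby("context_type")["z"].mean()
print(aggregated)   # reduced per-grader bias before comparing context types
```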
To further understand this discrepancy, in the next section we replicate the key findings of <cit.>, and we discuss what might cause the difference in behavior. § CRITICAL COMPARISON WITH EXISTING STUDY §.§ Intro <cit.> demonstrates that in-context learning allows us to achieve significantly better results compared to the "no context" case. In addition, the authors show that in-context learning is robust to label noise. Namely, the authors show that context with randomly shuffled labels and "golden" context (with correct labels) have similar effects in enhancing the quality of generated responses for closed questions, such as multiple-choice and true/false questions. However, to investigate the striking difference in the observed trends and to eliminate the effect of different versions of ChatGPT playing a potential role here, we decided to replicate the key results from <cit.> using precisely the same framework as above and using the same version of the LLM, namely gpt-4-1106. For the replication, we decided to use two different existing benchmarks, MetaICL <cit.> and a dataset from NephSAP <cit.>. The only significant element differentiating this study from our previous evaluations is that both of these datasets contain closed-form questions. §.§ Data and Methodology Our evaluation of in-context learning on closed-form questions involves two datasets. For the MetaICL dataset, we take a subset of 10 different tasks, each containing multiple-choice questions. For the NephSAP dataset, we take multiple-choice questions within 20 different subjects. Details about tasks, subjects, and sample questions can be found in the Supplementary Materials. We conduct an 80-20 train-test split for both the MetaICL dataset and the NephSAP dataset. For each multiple-choice question in the test set, we generate a response using the gpt-4-1106-preview model. We do it three times: once without any context, once with a randomly sampled demonstration from a different task or subject from the training set of the dataset, and once with a randomly sampled demonstration from the same subject or task from the training set. We also compute the embedding of the questions and the demonstrations. We bin the embedding similarity of each demonstration/response pair into separate bins. Treating the no-context response as a benchmark, we record the general score improvement of the response within each embedding similarity bin compared to the raw benchmark. §.§ Context Relevancy and Performance Improvement In Fig. <ref>, we show the score improvement as a result of different contexts, using the no-context answer as the baseline. Note how context similarity is positively correlated with the mean score improvement in both of the closed-question datasets (MetaICL and NephSAP). This result is consistent with the arguments made by <cit.> and <cit.>. Note also that in both closed-question datasets, the context with the lowest levels of similarity scores has a tendency to have a negative mean improvement (meaning that adding context hurts the results). As contexts with low levels of similarity are more likely to be contexts with a different subject or task, this result is consistent with the findings in <cit.>, where irrelevant demonstrations can hurt the performance of the LLM. This contrasts with the results for the open-form questions, as depicted in Fig. <ref>, panel C. Our open-form question results display a negative correlation between context similarity and mean improvement.
The results suggest that, in this case, context with a lower level of similarity can be more helpful in improving the quality of the response, whereas context with a higher level of similarity can hurt the quality of the response. § DISCUSSION §.§ Impact of our work and the future directions Our results have suggested a significant difference between open-form question evaluation and close-form question evaluation, as the relationship between context-similarity and performance improvement is completely reversed in those two cases. The implications of this result are twofold. First, the difference between open-question evaluation and close-question evaluation invokes a new discussion on their different applicability in the context of in-context learning. Second, those mixed results suggest that similarity score might not be the best indicator for context selection in in-context learning, especially in cases that involve open-form questions. This has profound implications, especially in the context of Retrieval Augmented Generation (RAG) applications. For example, instead of selecting all points that lie in the vicinity of a certain point in the embedding space representing a query (cf. Fig.<ref>, panel A), a better choice could be to either exclude or at least diminish the impact of contexts that are too close to that point (cf. Fig.<ref>, panel B). This would lead to more interesting topologies. Instead of sampling the context from a hypersphere, we could sample from shells of various thicknesses. §.§ How should we evaluate in-context-learning? Open vs Close The different behaviors exhibited in open-form question evaluation and closed-form question evaluation stem from a different treatment of context in those two cases. We provide a hypothetical interpretation of that mechanism. In closed-form multiple-choice questions, the evaluated language model is treated as a classification model. A relevant demonstration provided as a context can improve the LLM's performance by aligning it with the correct choice. In open-form questions, the evaluated language model is treated as a generative model, and the response is open-form. Instead of being either correct or incorrect, an open-form response can be anywhere in between. A relevant context provides alignment with one way of approaching the question, but it can also introduce bias, leading to performance degradation instead of improvement. §.§ How should we select context with respect to RAG The difference between the relationship between context relevancy and performance in open-form and closed-form questions suggests that the RAG is highly application-dependent. For example, the strategy for context retrieval for open-form applications should be different from the strategy used in closed-form applications. It is also important to be mindful when evaluating RAG, as common closed-form benchmarks might not be good indicators of RAG's performance in open-form applications. When designing an RAG, especially in open-form applications, it is important to include some other factors than pure embedding distance or relevancy. Sometimes including a piece of context that is not as close in embedding distance to the question might be helpful as it does not reinforce the hidden bias inside the question. 
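As a purely illustrative sketch of the "shell" retrieval idea, the snippet below restricts retrieval to contexts whose cosine similarity to the query falls inside a band, instead of taking the absolute nearest neighbors. The similarity bounds, dimensions, and function names are arbitrary placeholders rather than values validated by our experiments.

```python
import numpy as np

def cosine_similarity(query: np.ndarray, contexts: np.ndarray) -> np.ndarray:
    q = query / np.linalg.norm(query)
    c = contexts / np.linalg.norm(contexts, axis=1, keepdims=True)
    return c @ q

def retrieve_shell(query_emb, context_embs, low=0.3, high=0.7, k=3):
    """Return indices of the top-k contexts whose similarity to the query
    falls inside the [low, high] shell, rather than the globally closest ones."""
    sims = cosine_similarity(query_emb, context_embs)
    in_shell = np.where((sims >= low) & (sims <= high))[0]
    return in_shell[np.argsort(sims[in_shell])[::-1][:k]]

rng = np.random.default_rng(0)
query = rng.normal(size=128)
contexts = rng.normal(size=(100, 128))
picked = retrieve_shell(query, contexts, low=0.0, high=0.2)  # random vectors cluster near 0 similarity
```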
plainnat § DATA AND CODE AVAILABILITY Data and code can be found in the following GitHub repository: https://github.com/mikelixiang88/context-matters.githttps://github.com/mikelixiang88/context-matters.git § ACKNOWLEDGEMENTS We would like to take this opportunity to thank Professor Stephan Haas for helpful discussions at the early stage of this project and Anurag Maravi for his engagement during the preliminary stage of the work. § AUTHOR CONTRIBUTIONS X.L., H.T., and M.A. contributed to the conceptual design, X.L. and H.T. developed the Python code and conducted the experiments, X.L., H.T., S.C., and M.A. analyzed and interpreted the results. All authors equally contributed to the creation of the novel dataset, M.A. provided supervision and proposed the experiment measuring the impact of the context. All the authors contributed to writing the article. § COMPETING INTERESTS The authors declare no competing interests. § SUPPLEMENTARY MATERIAL § SAMPLE QUESTION §.§ Sample Question for Open Dataset Question: Given the wavelength of an electron is 0.364 · 10^-9 m, calculate the speed of the electron. Ground Truth for Grading: λ = 0.364 × 10^-9 m Mass of electron, m = 9.1 × 10^-31 kg Planck's Constant, h = 6.62607015 × 10^-34 Js The de Broglie wavelength is given by λ = h/mv Velocity of the electron, v = 2 × 10^6 ms^-1 Relevant Context The De Broglie states that λ = h/mv. The mass of an electron is about 9.109 · 10^-31 kg Vague Context Wave-particle duality is the concept in quantum mechanics that quantum entities exhibit particle or wave properties according to the experimental circumstances. Irrelevant Context Quantum physics is the study of matter and energy at the most fundamental level. At very small scale, classical theories may not be applicable any more. That is where quantum theories come into play. §.§ Sample question for MetaICL dataset Test Input: Bird feet can also vary greatly among different birds. Some birds, such as gulls and terns and other waterfowl, have webbed feet used for swimming or floating (Figure below). Other birds, such as herons, gallinules, and rails, have four long spreading toes, which are adapted for walking delicately in the wetlands (Figure below). You can predict how the beaks and feet of birds will look depending on where they live and what type of food they eat. Flightless birds also have long legs that are adapted for running. Flightless birds include the ostrich and kiwi. Some birds, such as gulls and terns and other waterfowl, have what type of feet used for swimming or floating? Test Output: webbed Test Options: * lobed * quad toed * bipedal * webbed For our task selections from the MetaICL dataset, please visit our GitHub repository, where the task category selections and code are presented. §.§ Sample question for NephSAP dataset A 54-year-old man with ESRD is admitted for management of presumed catheter–related bacteremia. He had no pre–ESRD nephrology care and recently started maintenance hemodialysis on an urgent basis for symptomatic uremia. Two days ago, he developed acute onset of fever to 40 C, chills, and rigors during dialysis. After obtaining blood cultures, he received intravenous vancomycin and ceftazidime at the outpatient dialysis center before admission. His tunneled dialysis catheter is removed because of erythema and purulent exudate at the exit site. His fever is now resolved. He does not have back pain, cough, dental complaints, or rash. On physical examination, he is now afebrile. The BP is 136/88 mmHg. 
There is a 1-cm area of erythema at the tunneled dialysis catheter removal site. He has no stigmata of emboli. He has no obvious dental issues on gross inspection of the oropharynx. The cardiac examination shows a grade 2 early peaking systolic crescendo-decrescendo murmur along the right sternal border without radiation. The remainder of the examination is unremarkable. The blood cultures drawn before implementation of antibiotic therapy grew S. aureus within 12 hours. The results of sensitivity testing and repeat blood cultures are pending. Laboratory studies show sodium 138 mEq/L, potassium 3.8 mEq/L, chloride 102, total CO2 25 mmol/L, BUN 45 mg/dl, and creatinine 6.4 mg/dl. In addition to continued antibiotic therapy, which ONE of the following is the next BEST step in management? A. Arrange for placement of a tunneled dialysis catheter today B. Transesophageal echocardiography C. Magnetic resonance imaging of the spine D. Bone scintigraphy (bone scan) E. Examination for an occult dental source of bacteremia § CONTEXT TYPES § GENERATED ANSWER §.§ Sample Response for Question in A.1 Response with No Context: To calculate the speed of the electron, we can utilize de Broglie's equation that relates the momentum of a particle to its wavelength. Here is the formula: λ = h/p Where: * λ is the wavelength * h is Planck's constant * p is the momentum. If we rearrange the terms to express momentum: p = h/λ The momentum of a particle can also be described as its mass times its velocity: p = m · v We know that the mass of an electron (m) is 9.11 × 10^-31 kg. And Planck's constant (h) is 6.626 × 10^-34 Js. So the velocity (v) is given as: v = h/mλ Substituting the known values, we get: v = 6.626 × 10^-34/9.11 × 10^-31· 0.364 · 10^-9 After solving this, the velocity of the electron comes out to be approximately 7.24 × 10^5 m/s. Response with Irrelevant Context: The speed of an electron given its wavelength can be calculated using the de Broglie's equation: λ = h/mv where: * λ is the wavelength, * h is Planck's constant, which is roughly equals to 6.626 × 10^-34 Js, * m is the mass of the electron, which is roughly equals to 9.109 × 10^-31 kg, * v is the velocity of the electron. We can rearrange the equation to solve for the velocity: v = h/mλ Substitute the given values into the equation: v = 6.626 × 10^-34 Js/9.109 × 10^-31 kg× 0.364 × 10^-9 m and calculate the velocity. Response with Vague Context: The wavelength of a matter wave, such as an electron, can be found using De Broglie's equation, which is given by: λ = h/p where: * λ is the wavelength * h is Planck's constant (6.626 × 10^-34 Js) * p is the momentum of the particle, which can be expressed as p = mv Since we want to find the speed of the electron (v), we can rearrange this equation in terms of v: v = h/mλ Given that λ = 0.364 × 10^-9 m and the mass of an electron is m = 9.109 × 10^-31 kg, we can substitute these values into the equation: v = 6.626 × 10^-34/9.109 × 10^-31× 0.364 × 10^-9 Evaluating this will give the speed of the electron in m/s. Response with Relevant Context: The De Broglie equation relates the wavelength of a particle to its momentum. According to De Broglie's principle, every object in motion has a wave character. The De Broglie wavelength is given by the formula: λ = h/mv where: * λ is the wavelength, * h is Planck's constant (6.626 × 10^-34 m^2 ·kg / s), * m is the mass of the particle, * v is the velocity (or speed) of the particle. 
We are given that λ = 0.364 × 10^-9 m, and the mass m of an electron is 9.11 × 10^-31 kg. We can find the speed v by rearranging the above formula: v = h/mλ Substitute the known values into the rearranged equation to find v, the speed of the electron: v = 6.626 × 10^-34 m^2 ·kg / s/(9.11 × 10^-31 kg) (0.364 × 10^-9 m) Solving the above expression will give us the speed of the electron. § GRADING INTERFACE To better evaluate the open response to our physics questions, we modified the potato annotation system <cit.> and applied it as our evaluation system. Our evaluation system not only allows users to select numeric grades for each response but also enables the user to highlight parts of the response, apply labels, and write descriptions to justify their grading. In addition, the system randomly shuffles the order of the responses for each grader to mitigate any potential bias in grading as a result of the ordering of responses. A short video tutorial is provided at the beginning page to provide guidance and alignment in grading. A screenshot of the interface of the evaluation system is shown in Fig. <ref>. The system is also accessible via the link: http://quantumgpt.science:8080/?PROLIFIC_PID=testuser<http://quantumgpt.science:8080/?PROLIFIC_PID=testuser>. § SANITY CHECK To check whether our context relevancy is well defined, we compute the embedding of the questions and their respective contexts for both our open-form question dataset and the two closed-form question datasets we use. We then calculate the cosine distance between the embedding of each question and the different contexts associated with them. We show the results for the open question dataset in Fig. <ref>. We computed the embedding of each question and each context using OpenAI's “text-embedding-3-large” model. For the no-context part, we used a space as a placeholder instead of an empty string. As expected, the results show that more relevant contexts, as perceived by us when designing the dataset, receive a higher mean similarity score with their respective questions. Different question types can result in a large standard deviation in similarity scores in different contexts. We show the details breakdown of those results in Fig. <ref>. All question types except hard paraphrased questions display the same trend, confirming the relationship between context types and embedding similarities. For the closed datasets, the similarity score between context and question is shown in Table <ref>. For both datasets, the same task/subject demonstrations possess a higher mean similarity score than the different task/subject demonstrations. To further verify this relationship, we have also plotted the similarity score of the same task demonstrations and different task demonstrations for each task in the MetaICL dataset in Fig. <ref>. The results confirm that the same task demonstration displays higher mean similarity than the different task demonstration in every task in the dataset.
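For reference, a minimal sketch of this similarity computation is shown below; it assumes the v1 `openai` Python client, uses abridged context texts, and is illustrative rather than the exact script used to produce the figures.

```python
import numpy as np
from openai import OpenAI  # assumes the v1 OpenAI Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(texts):
    """One embedding per input string, using text-embedding-3-large."""
    resp = client.embeddings.create(model="text-embedding-3-large", input=texts)
    return np.array([d.embedding for d in resp.data])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

question = "Given the wavelength of an electron is 0.364e-9 m, calculate its speed."
contexts = {
    "relevant":   "De Broglie states lambda = h / (m v); the electron mass is about 9.109e-31 kg.",
    "vague":      "Wave-particle duality is the concept that quantum entities exhibit particle or wave properties.",
    "irrelevant": "Quantum physics is the study of matter and energy at the most fundamental level.",
    "none":       " ",  # a single space as placeholder, as in the paper
}

q_emb = embed([question])[0]
ctx_embs = embed(list(contexts.values()))
for (name, _), emb in zip(contexts.items(), ctx_embs):
    print(f"{name:<10s} similarity = {cosine(q_emb, emb):.3f}")
```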
http://arxiv.org/abs/2407.02367v1
20240702153512
Rediscovering Bottom-Up: Effective Forecasting in Temporal Hierarchies
[ "Lukas Neubauer", "Peter Filzmoser" ]
stat.ME
[ "stat.ME" ]
Rediscovering Bottom-Up: Effective Forecasting in Temporal Hierarchies Lukas Neubauer TU Wien Peter Filzmoser TU Wien July 8, 2024 ============================================================================= § ABSTRACT Forecast reconciliation has become a prominent topic in recent forecasting literature, with a primary distinction made between cross-sectional and temporal hierarchies. This work focuses on temporal hierarchies, such as aggregating monthly time series data to annual data. We explore the impact of various forecast reconciliation methods on temporally aggregated ARIMA models, thereby bridging the fields of hierarchical forecast reconciliation and temporal aggregation both theoretically and experimentally. Our paper is the first to theoretically examine the effects of temporal hierarchical forecast reconciliation, demonstrating that the optimal method aligns with a bottom-up aggregation approach. To assess the practical implications and performance of the reconciled forecasts, we conduct a series of simulation studies, confirming that the findings extend to more complex models. This result helps explain the strong performance of the bottom-up approach observed in many prior studies. Finally, we apply our methods to real data examples, where we observe similar results. § INTRODUCTION Forecast reconciliation has been a very popular topic in recent forecasting literature. It covers the questions on how to properly forecast time series which have been aggregated in a certain way. This aggregation could come from a cross-sectional aspect where a collection of time series is aggregated across different variables such as location or organizational unit. In contrast, the time series could also be aggregated on a temporal basis, such as monthly, quarterly, and annual time series. Naturally, both types of aggregation might be combined in any way, leading to cross-temporal hierarchies. The field of hierarchical forecast reconciliation investigates how to handle forecasting those hierarchies such that the resulting forecasts match the aggregation properties of the hierarchy. In addition, it is often examined how the performance of the reconciliation methods yielding so-called coherent forecasts is compared to original, possibly non-coherent forecasts. A very recent and extensive review of forecast reconciliation is given in <cit.>. Many extensions are discussed such as adding complex constraints (non-negativity, integer-based time series, ...) or probabilistic forecasting. In this paper we investigate temporal hierarchies as introduced by <cit.>. The authors argue that already existing forecast reconciliation methods can be applied to temporally aggregated time series in a straightforward manner. However, no further assumptions besides the base forecasts being unbiased are investigated, especially since no work is available looking at the theoretical implications of reconciliation methods assuming certain data-generating processes. We fill this gap of research and examine the performance of forecast reconciliation in temporal hierarchies in the theoretical framework of temporally aggregated time series models such as ARIMA models. The effects of temporal aggregation in autoregressive models were first studied by <cit.>. The authors prove that if some data is generated by an autoregressive model of order p, then a non-overlapping aggregate of these data will also follow a similar generating process. 
Namely, the autoregressive order of the aggregate remains at the same order p while there might exist a moving average part of a certain order as well. In fact, the authors give a maximum order for this moving average part of the process. <cit.> give a generalized overview of this theory and extend it to general SARIMA models. In temporal hierarchies, simple reconciliation techniques such as bottom-up approaches are often applied. A bottom-up forecast is generated by aggregating the forecasts of the disaggregated series. <cit.> suggest that forecasts of aggregated time series can be improved by using bottom-up forecasts, as long as the aggregated model includes a significant moving average component. Without this component, the improvements may be minimal or nonexistent. In this work, we extend this analysis by considering more complex models and more intricate temporal hierarchies. We take an additional step to analyze the performance of the bottom-up approach compared to more sophisticated reconciliation methods, thereby linking the fields of temporal forecast reconciliation and temporally aggregated time series models. Although this was experimentally examined in <cit.>, the results have yet to be theoretically justified. In general, the connection between these two fields has not been established from a theoretical perspective. The paper is structured as follows. In Section <ref> we briefly discuss the ideas of hierarchical forecast reconciliation and recent advances, in particular regarding temporal hierarchies (Section <ref>) as well as the basics of temporally aggregated time series models (Section <ref>). This is followed by the linkage of those two topics in Section <ref> where we discuss the theoretical implications of forecast reconciliation on the temporally aggregated time series. In Section <ref>, we investigate the discussed implications in a simulation study, followed by real data applications in Section <ref>. Finally, we give concluding remarks in Section <ref>. § RELATED WORK §.§ Hierarchical Forecast Reconciliation First introduced by <cit.>, optimal forecast reconciliation is formulated as follows. Consider a multivariate time series 𝐲_1,…,𝐲_T∈ℝ^n fulfilling possible linear constraints, namely 𝐲_t = S𝐛_t, where S is a n× n_b summing matrix with n_b < n, and b_t denotes the bottom level series of the hierarchy. The summing matrix is defined by the type of hierarchy of interest. For example, a matrix with n=7 and n_b=4 given by S=[ 1 1 1 1; 1 1 0 0; 0 0 1 1; 1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1 ] could be understood as a 3-level hierarchy of 4 districts and 2 states of one country whereby the first two districts are part of the first state and so on. Such linear constraints are naturally fulfilled on the observed data because it is set up to do so. When forecasting such series, we want the forecasts to also adhere to the same constraints which leads to so-called coherent forecasts, namely 𝐲̂_t+h|t=S𝐛̂_t+h|t, where 𝐲̂_t+h|t and 𝐛̂_t+h|t denote the corresponding h-step forecasts. However, by forecasting each time series of the hierarchy individually we will most likely not obtain such coherent forecasts. This is where forecast reconciliation proves crucial. Historically, simple reconciliation methods such as bottom-up or top-down approaches have been and remain in use. The bottom-up approach starts at the bottom level of the hierarchy, using forecasts from this level to construct forecasts for the entire hierarchy. 
This method avoids information loss due to aggregation but can be challenging because the bottom level time series may be harder to forecast accurately due to noise or other factors. On the other hand, top-down reconciliation uses only top-level forecasts and requires a proportion vector 𝐩 of size n to break down these forecasts into coherent lower-level forecasts, with the main challenge being the identification of an appropriate breakdown vector. In the seminal work by <cit.>, the following regression problem was proposed to achieve least-squares reconciliation. Let 𝐲̂_h=𝐲̂_t+h|t represent a vector containing h-step base forecasts in a stacked manner, and let S be a summation matrix defined by the hierarchy of interest. Base forecasts refer to any appropriate and possibly incoherent forecasts for the corresponding time series, which we assume are available at this stage. Then write 𝐲̂_h = Sβ_h+ϵ_h, where β_h are the regression coefficients indicating the unknown mean of the bottom level, and ϵ_h is the unobservable reconciliation error with zero mean and covariance matrix V_h. Solving this regression problem using generalized least-squares leads to the generalized linear solution of β̂_h=G_h𝐲̂_h and reconciled forecasts 𝐲̃_h=SG_h𝐲̂_h. The n_b× n matrix G_h maps the base forecasts into appropriate bottom level forecasts and is given by G_h = (S'V_h^-1S)^-1S'V_h^-1. The regression problem was inspired by the authors' findings that simple reconciliation methods, such as bottom-up or top-down, can all be expressed as 𝐲̃ = SG𝐲̂ with an appropriate mapping matrix G. For example, setting G = (0_n× (n-n_b) I_n) or G = (𝐩 0_n× (n_b-1)), where 0_r× q denotes a r× q matrix of zeros of size, I_q is the identity matrix of size q and 𝐩 is a proportion vector of size n, yields the bottom-up or top-down methods, respectively. The regression problem (<ref>) was introduced to determine the optimal mapping matrix in a least-squares sense. It is further argued in <cit.> that if the base forecasts are unbiased, that is 𝔼[𝐲̂_h]=𝔼[𝐲_t+h], and G is such that SGS=S, then the reconciled forecasts are also unbiased. The condition of SGS=S is equivalent to SG being a projection matrix <cit.>, ensuring that already coherent forecasts remain unchanged in this transformation. One essential problem is that V_h is not known and not even identifiable as shown in <cit.>. In <cit.>, the authors avoided this by setting V_h=k_h I_n with some consistency constant k_h (which need not be computed since it cancels out in further calculation steps) and hence weighting all series equally, disregarding any level of aggregation or performance of base forecasts. This simplification results in an OLS solution and G=(S'S)^-1S'. The transformation matrix SG=S(S'S)^-1S' is then an orthogonal projection with respect to the Euclidean distance, ensuring minimal change of the forecasts while reducing squared forecast errors of all levels of the hierarchy <cit.>. A scaled reconciliation method is introduced in <cit.> where the authors set V_h=k_hdiag(W_h) with W_h=Cov(𝐲_t+h|h-𝐲̂_h) being the covariance matrix of the base forecasts, leading to a weighted linear solution. In the work of <cit.> the so-called minimum trace estimator is proposed by minimizing the trace of the covariance of the reconciled errors subject to unbiasedness, thus min_Gtr Cov(𝐲_T+h|h-𝐲̃_h) = min_G tr Cov(𝐲_T+h|h-SG𝐲̂_h) = min_G tr SG W_h G'S', subject to SGS=S. The trace of a n× n matrix is tr(A)=∑_i=1^n A_ii. This leads to G_h = (S'W_h^-1S)^-1S'W_h^-1. 
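As an illustration of these mapping matrices, the following sketch (in Python/NumPy rather than the R implementation accompanying the paper) constructs the OLS, weighted minimum trace, and bottom-up mappings for the small country–state–district hierarchy above and checks the unbiasedness condition SGS=S; the diagonal W is an arbitrary stand-in for an estimated error covariance.

```python
import numpy as np

def mint_mapping(S: np.ndarray, W: np.ndarray) -> np.ndarray:
    """General minimum-trace mapping G = (S' W^{-1} S)^{-1} S' W^{-1}."""
    Winv = np.linalg.inv(W)
    return np.linalg.solve(S.T @ Winv @ S, S.T @ Winv)

def ols_mapping(S: np.ndarray) -> np.ndarray:
    """OLS special case (W proportional to the identity)."""
    return np.linalg.solve(S.T @ S, S.T)

def bottom_up_mapping(n: int, n_b: int) -> np.ndarray:
    """Bottom-up: discard the aggregate forecasts, keep the n_b bottom ones."""
    return np.hstack([np.zeros((n_b, n - n_b)), np.eye(n_b)])

# The 3-level example hierarchy from above: one country, 2 states, 4 districts.
S = np.array([[1, 1, 1, 1],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
n, n_b = S.shape

rng = np.random.default_rng(1)
y_hat = rng.normal(size=n)                    # incoherent base forecasts
W = np.diag(rng.uniform(0.5, 2.0, size=n))    # e.g. a variance-scaling covariance

for G in (ols_mapping(S), mint_mapping(S, W), bottom_up_mapping(n, n_b)):
    assert np.allclose(S @ G @ S, S)          # unbiasedness condition SGS = S
    y_tilde = S @ G @ y_hat                   # reconciled, coherent forecasts
    print(np.round(y_tilde, 3))
```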
Thus, instead of estimating V_h, we now need to estimate the covariance of the base forecast errors, W_h, which is more feasible. This method is equivalent to the generalized linear solution, with the regression-based solution being a special case. The transformation matrix SG now represents an oblique projection. By dropping the assumption of an orthogonal projection, we allow for greater forecast improvements on average. However, <cit.> argue that for some realizations, the performance of the reconciled forecasts may be worsened. Estimating W_h presents difficulties, especially for complex hierarchies and forecast horizons beyond h > 1, due to the limited sample size determined by the number of top-level observations. Therefore, it may be practical to revert to simpler estimates as previously described. Additionally, <cit.> propose sample and shrinkage estimators by setting W_h = k_h Ŵ_1 and W_h = k_h (λdiag(Ŵ_1) + (1 - λ) Ŵ_1), λ∈ (0,1), respectively, with appropriate consistency constants. The shrinkage estimator is particularly useful when n > T, which can result in a singular sample covariance matrix. The authors of <cit.> also give a different type of estimator, denoted by structural scaling. It is proposed to set W_h=k_hdiag(S 1_n_b) implying that each forecast is scaled according to the number of series in its level of the hierarchy. Here, 1_n_b is a vector with n_b entries of one. Overall, the minimum trace method addresses three key aspects. Firstly, it produces coherent forecasts, which is the most crucial factor. Secondly, as long as the base forecasts are unbiased, the reconciled forecasts will also be unbiased. Lastly, it enhances forecast performance by minimizing the forecast error variance across all series on average. §.§ Temporal Hierarchical Forecast Reconciliation While forecast reconciliation has not been developed with temporal hierarchies in mind, it can be applied to them naturally as discussed in <cit.>. Temporal hierarchies allow for even more sophisticated methods for estimating the covariance matrix of the base forecast errors. Let y_t with t=1,…,T be a univariate time series of interest of a certain frequency m. A k-aggregate, where k is a factor of m, is defined to be y_j^[k] = ∑_t=t^∗ +(j-1)k^t^∗ + jk -1 y_t, j=1,…,⌊ T/k⌋, where t^∗ is the starting point of the aggregation to ensure non-overlapping aggregates. The resulting frequency is then M_k=m/k. To have a common index across all levels of aggregation, the authors set i=1,…,⌊ T/m⌋ and y_M_k(i-1)+z^[k] = y_j^[k], z=1,…,M_k, such that i controls the top-level steps and z determines the steps within each aggregation period. That way we can write one time step of the hierarchy as the vector given by 𝐲_i = (y_i^[m], …, 𝐲_i^[k_2]', 𝐲_i^[k_1]')', where 𝐲_i^[k] = (y_M_k(i-1)+1^[k], y_M_k(i-1)+2^[k], …, y_M_k i^[k])' denotes the stacked entries of the time series at aggregation level k. This implies that 𝐲_i = S𝐲_i^[1], where S is an appropriate summing matrix as defined in general forecast reconciliation. According to <cit.> we write the levels of aggregation in descending order as {k_p, …, k_2, 1} with k_p=m. For a quarterly-biannual-annual aggregation scheme this yields k∈{4,2,1}. A corresponding visualization is available in Figure <ref>. The fact that 𝐲_i = S𝐲_i^[1] suggests we can set up a very similar regression problem based on the base forecasts as in Eq. (<ref>). 
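The construction of the temporal summing matrix and the coherence property 𝐲_i = S𝐲_i^[1] can be sketched as follows for a quarterly–biannual–annual hierarchy; the helper function and the toy numbers are illustrative.

```python
import numpy as np

def temporal_summing_matrix(m: int, ks) -> np.ndarray:
    """Summing matrix for aggregation levels ks given in descending order
    (ks[0] == m, last element == 1); rows run from the most aggregated level
    down to the bottom level, matching the stacked vector y_i."""
    blocks = []
    for k in ks:
        M_k = m // k
        block = np.zeros((M_k, m))
        for z in range(M_k):
            block[z, z * k:(z + 1) * k] = 1.0   # each k-aggregate sums k bottom steps
        blocks.append(block)
    return np.vstack(blocks)

m, ks = 4, (4, 2, 1)        # quarterly bottom level with biannual and annual aggregates
S = temporal_summing_matrix(m, ks)
print(S.astype(int))

# One top-level period of quarterly data.
y_bottom = np.array([10.0, 12.0, 9.0, 11.0])
y_full = S @ y_bottom        # annual 42, half-years 22 and 20, then the four quarters
print(y_full)
```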
The minimum trace approach then yields ỹ_h = S(S'W_h^-1S)^-1S'W_h^-1ŷ_h, where ŷ_h are the stacked base forecasts across the entire hierarchy, and W_h=Cov(𝐲_h-ŷ_h) denotes the covariance matrix of the stacked base forecast errors. Specifically, this means that on each aggregation level, we require M_k h-step forecasts, which can be already challenging to obtain properly. As in conventional forecast reconciliation, the estimation of W_h can be difficult because the sample size is bounded by the number of observations on the top level of the aggregation hierarchy. Thus, the authors propose several simplified covariance estimators. One of them is similar to the scaled reconciliation of <cit.> by setting W_h=k_hdiag(Ŵ_1), while structural scaling is also proposed as in <cit.>. Temporal aggregation allows for more refined methods to enhance the estimation of the covariance matrix. <cit.> suggest modeling the autocorrelation structure of the forecasts, leading to four different estimators. The autocovariance scaling estimator estimates the full autocovariance matrix at each aggregation level, while the Markov scaling assumes a first-order Markov structure, estimating only lag 1 correlations per aggregation level. Additionally, the authors propose using GLASSO to estimate the inverse cross-correlation matrix and a cross-correlation shrinkage estimator, similar to <cit.>. It is worth noting that all correlation-based estimators can be combined with variance and structural scaling variances. In a subsequent work by <cit.>, the authors explore dimension reduction. They propose an eigendecomposition of the cross-correlation matrix and construct a filtered precision matrix by selecting the first few eigenvectors and applying shrinkage to the eigenvalues. Such an estimator is especially useful when forecasting a very deep and complex hierarchy. §.§ Temporal Aggregation Temporal aggregation of series was first studied in the seminal work of <cit.>. A rather recent review of the most relevant advances in this field can be found in <cit.>. The models discussed in these works are mostly ARIMA-based, and we will briefly explain the essential ideas and results. Consider a univariate time series y_t, t=1,…,T observed at some frequency. A k-aggregate series is defined, equivalent to Eq. (<ref>), by y_t^∗ = ∑_i=0^k w_i y_t-i. To obtain non-overlapping aggregates, a new time scale is introduced by setting T=kt, and thus y^∗_T = y^∗_kt with y^∗_T+1=y^∗_k(t+1). Hence, y^∗ is a series at lower frequency because observations are only available every k time steps. The more general definition of Eq. (<ref>) allows for different types of aggregation. The most common one is the so-called flow aggregation with w_i=1. This type of aggregation is just the sum in each aggregation period. Another type is stock aggregation. One usually sets k=0,w_0=1. Thus, only the last observation in each period is equal to the period's aggregate. As in most literature, we also focus on the flow type of aggregation. Now assume that the higher frequency series y seen as a random process is an ARIMA(p,d,q) model. We are interested in the model specification of y^∗ after aggregation. The theory gives us that y^∗ is again an ARIMA model as discussed in <cit.>. We have that y^∗∼ARIMA(p,d,r), r≤⌊p(k-1)+(d+1)(k-1)+q/k⌋. The autoregressive and integrated orders of the aggregated series remain unchanged, while the moving average order increases. The theory also provides a method to compute the exact parameters of the aggregated series. 
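A small sketch of the MA order bound above, together with an empirical check of the aggregation result for an AR(1), is given below; it assumes statsmodels is available and uses arbitrary parameter choices.

```python
import numpy as np
from statsmodels.tsa.arima_process import arma_generate_sample
from statsmodels.tsa.arima.model import ARIMA

def max_ma_order(p: int, d: int, q: int, k: int) -> int:
    """Upper bound on the MA order of a k-aggregate of an ARIMA(p, d, q) process."""
    return (p * (k - 1) + (d + 1) * (k - 1) + q) // k

print(max_ma_order(1, 0, 0, 4))   # AR(1) aggregated with k=4 -> at most MA(1)
print(max_ma_order(2, 1, 2, 3))   # ARIMA(2,1,2) aggregated with k=3 -> at most MA(3)

# Empirical check: aggregate a simulated AR(1) and fit an ARMA(1,1);
# the fitted AR coefficient should be close to phi**k.
phi, k, n = 0.8, 4, 50_000
y = arma_generate_sample(ar=[1.0, -phi], ma=[1.0], nsample=n * k, scale=1.0)
y_agg = y.reshape(n, k).sum(axis=1)              # non-overlapping flow aggregates
res = ARIMA(y_agg, order=(1, 0, 1), trend="n").fit()
print(res.params)                                # AR coefficient near 0.8**4 = 0.4096
```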
Specifically, the roots of the autoregressive polynomial of the AR component of the aggregated series are equal to the k-th power of the AR roots of the disaggregated model. Thus, assuming stationarity, the AR effect in the aggregate model diminishes as the aggregation period increases. Simultaneously, the MA effect becomes more significant. However, calculating the MA coefficients is more complex. These coefficients can be determined by comparing the autocorrelation functions of the aggregated model and the transformed disaggregated model, leading to several potentially non-linear equations. The unknowns in these equations include the MA coefficients, the innovation variance, and a possible non-zero mean. This theory has also been extended to more complex ARIMA models like ARIMAX or even SARIMA where the results are very much similar. There are even results when looking at volatility models such as GARCH. The reason why the aggregated MA order in Eq. (<ref>) is only bounded above by the right-hand side is due to the possibility of polynomial term cancellation in the disaggregated model, which can result in much simpler models. An extreme example is provided in <cit.>, where the authors show that if the disaggregated model is an AR(9) model with non-zero coefficients at lags 3, 6, 9, then the 3-aggregated series will simplify to an AR(3) model. This simplification is reasonable because the disaggregated series already contains the essential aggregation information. In the same work of <cit.>, the forecast performance of aggregation is also investigated. The authors argue that if the aggregated series exhibits a moving average part, then its forecast error can be reduced when performing an according bottom-up forecast using the disaggregated series. This makes sense since aggregation leads to a loss of information. However, this is only the case if the moving average part is significant. If not, then the improvements are very small or even non-existent. Since it might not be clear how such model aggregation works on paper, we put a thorough calculation of the simple AR(1) model in <ref>. § TEMPORAL HIERARCHICAL FORECAST RECONCILIATION IN TEMPORALLY AGGREGATED MODELS In this section, we will theoretically integrate the fields of temporal forecast reconciliation and temporally aggregated ARIMA models. To the best of our knowledge, this is the first time such an integration has been attempted. While <cit.> utilized the theory of temporally aggregated ARIMA models, their approach was primarily experimental. They examined the performance of temporal forecast reconciliation methods, such as variance scaling, and compared them to a simple bottom-up approach under varying levels of uncertainty. Specifically, they conducted experiments with fixed model orders and parameters, fixed orders alone, or automatically selected models based on model selection criteria. The authors found that temporal forecast reconciliation and bottom-up methods perform equally well in highly certain settings, but the performance of bottom-up methods declines when models are misspecified. In general, the data-generating process has not been of much interest so far in the field of temporal forecast reconciliation because it has been developed as a post-hoc procedure to transform base forecasts coherently. In the theory of temporally aggregated models, the combination of forecasts of different levels to achieve coherent or even better forecasts has not been looked at. 
Our contribution is as follows: Utilizing the theoretical model of aggregation, we will derive the theoretical covariance matrix of the base forecast errors, denoted as W, given in Lemma <ref>. This covariance matrix will then be employed to perform the minimum trace estimation manually. Through matrix algebra, we will demonstrate in Theorem <ref> that the resulting mapping matrix G corresponds to a bottom-up forecast. Consequently, we show that within the framework of aggregated ARIMA models, the optimal forecast reconciliation technique is indeed the bottom-up approach. Building on the insights from Section <ref>, we aim to manually implement the minimum trace reconciliation method. To do this, we need the covariance matrix of the base forecast errors, which we can readily compute. To maintain simplicity, we will initially focus on the straightforward case of an AR(1) model and subsequently discuss more complex models. The first result in Lemma <ref> is about the covariance structure of the aggregated model. Its proof can be found in <ref>. The covariance matrix W_1 of 1-step forecast errors in a k-aggregated AR(1) model with parameter ϕ and innovation variance σ^2 is equal to W_1 = [ σ_∗^2 σ^2 1_k'ΦΦ'; σ^2 ΦΦ'1_k σ^2 ΦΦ' ] where 1_k denotes a vector of ones of length k, Φ is a lower triangle matrix given by Φ = [ 1 0 0 … 0; ϕ 1 ⋱ ⋱ ⋮; ⋮ ⋱ ⋱ ⋱ ⋮; ϕ^k-2 ⋱ ⋱ ⋱ 0; ϕ^k-1 ϕ^k-2 … ϕ 1 ], and σ_∗^2 denotes the innovation variance of the aggregated model. Based on Lemma <ref> we now manually compute the optimal unbiased reconciliation matrix, summarised in Theorem <ref>. The proof is available in <ref>. The minimum trace reconciliation method in a k-aggregated AR(1) model is equal to a bottom-up approach, implying that SG^∗ = [ 0 1_k'; 0_k I_k ], where 0_k is a vector of zeros of length k and G^∗ denotes the optimal mapping matrix from problem (<ref>). Theorem <ref> indicates that the optimal unbiased reconciliation method for the aggregated AR(1) model is the bottom-up approach. Consequently, the forecasts at the bottom level remain unchanged, with no potential for enhancing forecast accuracy. Conversely, the aggregated forecast is disregarded in any form of combination. This outcome elucidates why the bottom-up approach frequently demonstrates effectiveness in both simulation studies and real-world data applications, thus bolstering its practicality. Before moving on to the experimental part of this study, we aim to illustrate how this theorem works using a sample-based approach. In Figure <ref>, the average transformation matrix SG for a two-level hierarchy is presented. To do this, we simulated 100 models and estimated the complete sample covariance matrix based on the simulations. The models used consist of an AR(1) model with parameters ϕ=0.8,σ^2=1 at the lower level, which is then combined into an ARMA(1,1) model at the higher level of the hierarchy with k∈{4,1}. The nodes of the hierarchy are shown on both axes, with 1-1 representing the entry at the top level and 2-i representing the i-th step of the lower level. This precisely specifies the transformation matrix as used in Theorem <ref>. The first row shows the effects of the base forecasts on the reconciled top-level forecast. It is evident that there is little impact from the top-level base forecast, with nearly equal weights close to 1 for the bottom level base forecasts. Similarly, the following 4 rows demonstrate the weights for the reconciled bottom level forecasts, with a zero column followed by the identity matrix I_4. 
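The algebra behind Theorem <ref> can also be checked numerically. The sketch below builds W_1 from Lemma <ref> for an aggregated AR(1), computes the minimum trace mapping, and confirms that SG equals the bottom-up transformation; the value plugged in for σ_∗^2 only needs to keep W_1 positive definite, since the resulting mapping does not depend on it.

```python
import numpy as np

def lemma_W1(phi: float, sigma2: float, k: int, sigma2_star: float) -> np.ndarray:
    """Covariance of 1-step base forecast errors for a k-aggregated AR(1) (Lemma 1)."""
    Phi = np.zeros((k, k))
    for i in range(k):
        for j in range(i + 1):
            Phi[i, j] = phi ** (i - j)            # lower-triangular powers of phi
    bottom = sigma2 * Phi @ Phi.T                 # covariance of the k bottom errors
    cross = sigma2 * np.ones(k) @ Phi @ Phi.T     # Cov(top error, bottom errors)
    W = np.empty((k + 1, k + 1))
    W[0, 0] = sigma2_star
    W[0, 1:] = cross
    W[1:, 0] = cross
    W[1:, 1:] = bottom
    return W

phi, sigma2, k = 0.7, 1.0, 4
S = np.vstack([np.ones((1, k)), np.eye(k)])       # two-level temporal hierarchy

# Any sigma2_star exceeding the bottom-up error variance keeps W positive definite.
bu_var = float(np.ones(k) @ lemma_W1(phi, sigma2, k, 1.0)[1:, 1:] @ np.ones(k))
W = lemma_W1(phi, sigma2, k, sigma2_star=1.05 * bu_var)

Winv = np.linalg.inv(W)
G = np.linalg.solve(S.T @ Winv @ S, S.T @ Winv)
SG = S @ G
bottom_up = np.vstack([np.hstack([[0.0], np.ones(k)]),
                       np.hstack([np.zeros((k, 1)), np.eye(k)])])
print(np.round(SG, 6))
print(np.allclose(SG, bottom_up))                 # True: MinT reduces to bottom-up
```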
This indicates that the reconciled bottom level forecasts closely match the bottom level base forecasts. In summary, the tendency for a bottom-up reconciliation approach is clear. In Section <ref>, we further investigate this theorem experimentally to gain a deeper understanding. A natural extension of Theorem <ref> is to increase the depth of the hierarchy. Figure <ref> in <ref> shows the transformation matrix SG for a three-level hierarchy with k ∈{4, 2, 1}, similar to Figure <ref>. While the results are less clear-cut, the tendency towards a bottom-up approach remains evident. Specifically, the reconciled first-level forecast is constructed using similar components from the lowest level, whereas the reconciled bottom level relies solely on base bottom level data. The standard errors, indicated in parentheses, show that the first three columns are close to zero, meaning that the forecasts for the first and second levels of the hierarchy do not carry much weight. In other words, the forecast for the first half-year is derived from the first two quarters, and similarly for the second half-year. § EXPERIMENTS In this section, we experimentally investigate different types of forecast reconciliation methods in the framework of temporally aggregated time series models and beyond. We evaluate the results based on percentage errors, namely for aggregation parameter k we obtain a relative mean squared error of rMSE^[k](𝐲̃, 𝐲̂) = ∑_i 𝐲̃_i^[k] - 𝐲_i^[k]_2^2/∑_i 𝐲̂_i^[k] - 𝐲_i^[k]_2^2 - 1, where 𝐲̃_i^[k] denotes the i-th vector of reconciled forecasts of aggregation level k, 𝐲̂_i^[k] is the i-th vector of the base forecasts of aggregation level k, and ·_2^2 is the squared Euclidean norm. We analyze both in-sample (training) reconciliation errors and out-of-sample (test) reconciliation errors to assess generalizability, aggregating the corresponding observations accordingly. Depending on the level of aggregation, we may encounter multi-step ahead forecasts. To simplify, we aggregate these multi-step forecasts, providing a single error measure for each aggregation level. The test reconciliation forecasts are acquired through the following procedure. The reconciliation method employed is trained exclusively on the training data, meaning that the covariance matrix and the corresponding base ARIMA models are estimated solely based on the training data. Subsequently, forecasts for h steps ahead are generated for the test data in a cumulative manner, effectively utilizing the test data for the base test forecasts. MSE values are computed for each level of the hierarchy as well on an overall level by taking the sum of MSEs across all levels. The reason we consider MSE instead of a different error measure is that the minimum trace reconciliation method exactly minimizes the sum of the error variances. For a robustness check of the results, we also consider a relative mean absolute error and use it to calculate percentage errors. Namely, rMAE^[k](𝐲̃, 𝐲̂) = ∑_i 𝐲̃_i^[k] - 𝐲_i^[k]_1/∑_i 𝐲̂_i^[k] - 𝐲_i^[k]_1 - 1, where ·_1 is the absolute-value norm. This error measure is inherently less sensitive to outliers. We have focused on reporting results for rMSE to keep things concise. The conclusions remain consistent even when considering rMAE or similar relative error measures. Overall, if a percentage error is below 0, it indicates that the reconciled forecasts perform better, whereas errors above 0 suggest the opposite. 
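A direct implementation of these relative error measures could look as follows; the toy forecasts are purely illustrative.

```python
import numpy as np

def relative_error(y_tilde, y_hat, y_true, p: int = 2) -> float:
    """rMSE (p=2) or rMAE (p=1) of reconciled vs. base forecasts at one
    aggregation level; values below 0 mean reconciliation helped."""
    y_tilde, y_hat, y_true = map(np.asarray, (y_tilde, y_hat, y_true))
    num = np.sum(np.abs(y_tilde - y_true) ** p)
    den = np.sum(np.abs(y_hat - y_true) ** p)
    return float(num / den - 1.0)

# Toy example: reconciled forecasts slightly closer to the truth than base ones.
y_true  = np.array([10.0, 12.0,  9.0, 11.0])
y_hat   = np.array([10.8, 11.1,  9.9, 10.2])
y_tilde = np.array([10.5, 11.5,  9.4, 10.7])
print(relative_error(y_tilde, y_hat, y_true, p=2))   # negative -> improvement
print(relative_error(y_tilde, y_hat, y_true, p=1))
```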
It is important to note that we are only examining relative errors, focusing on the performance of the temporally reconciled forecasts rather than the base forecasts. Our aim is to evaluate how different types of temporal forecast reconciliation methods perform. §.§ Autoregressive Models of Order 1 In the first experiment, we want to demonstrate the implications of Theorem <ref>. We simulate stationary AR(1) data on the bottom level of the hierarchy and aggregate them to obtain the remaining levels of the hierarchy. The parameters we vary are * Sample size on the top level n=20,50,100, * AR parameter ϕ=-0.9,…,0.9, * Innovation variance on the bottom level σ^2 =1,5, * Hierarchy size k∈{4,1},{5,1},{12,4,1}, * Forecast horizon h=1,2, and * Fixed order of the ARMA models to remove model uncertainty which corresponds to Scenario 2 of <cit.>, or automated model selection (Scenario 3). For each setting we simulate N=50 time series and compute training and test rMSE values. The training data always consist of 75% of the total data. The covariance estimators we focus on in this simulation are * OLS: Ŵ_h = k_h I, * Full Cov.: Ŵ_h = 1/0.75n∑_i=1^0.75n(𝐞̂_i^(h))(𝐞̂_i^(h))', where 𝐞̂_i^(h) denote the i-th vector of h-step residuals of the base forecasts, and * Spectral Scaling <cit.>: * Shrink the empirical cross-correlation matrix R to R_shrink=(1-ν)R+ν I * Eigen-decompose this shrunk cross-correlation matrix by R_shrink=VΛ_shrinkV' where R=VΛ V'. * Reconstruct the filtered precision matrix by Q=(WAW'+cI)^-1 such that W contains the first n_eig columns of V and A=diag((1-ν)λ_1+ν - c, …, (1-ν)λ_neig+ν - c) with c being the average of the remaining smallest shrunken eigenvalues. * Set Ŵ_h^-1 = D_var^-1/2QD_var^-1/2 where D_var corresponds to variance scaling. The two hyperparameters ν,n_eig are chosen in a time series cross-validation procedure. The authors do not follow this procedure and rather rely on an optimally chosen shrinkage parameter ν (<cit.>) and a fixed number of chosen eigenvectors n_eig. Other estimators, including various shrinkage estimators and scaling variants, were initially considered in this simulation but produced results very similar to those listed. Additionally, the bottom-up approach was also examined. §.§.§ One-Step Ahead At first, we take a look at the performance of the bottom-up approach compared to using the full covariance matrix for reconciliation. Figure <ref> shows the difference of in-sample rMSE values for h=1, k∈{4,1}, σ^2=1 as well as fixed orders of the models to remove model uncertainty. We clearly observe that both methods result in very similar improvements once the covariance matrix can be estimated properly. The differences are driven by the top level of the hierarchy since most changes are to be expected there. Thus, the theoretical results also hold in this simulation setting. Figure <ref> shows the test differences in rMSE. While the differences are indeed higher than expected, the theoretical results still hold on the test sets, and we can conclude that the full covariance matrix reconciliation method is equivalent to the bottom-up approach. Interestingly, most differences are present at larger values of ϕ. Table <ref> presents the training and test rMSE values for the selected reconciliation methods and parameters, grouped by buckets of the AR parameter. This allows us to distinguish between high negative or positive correlation as well as almost random walks. 
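For concreteness, a sketch of the spectral scaling construction listed above is given here (with the eigenvector block renamed V to avoid a clash with the covariance W_h); the shrinkage intensity and the number of retained eigenvectors are illustrative inputs rather than cross-validated values.

```python
import numpy as np

def spectral_precision(E: np.ndarray, nu: float, n_eig: int) -> np.ndarray:
    """Filtered inverse covariance W_h^{-1} from base-forecast residuals E
    (rows = hierarchy time steps, columns = nodes), following the recipe above."""
    D_var = np.diag(E.var(axis=0, ddof=1))              # variance-scaling part
    R = np.corrcoef(E, rowvar=False)                    # empirical cross-correlation
    eigval, eigvec = np.linalg.eigh(R)
    order = np.argsort(eigval)[::-1]                    # sort eigenvalues descending
    eigval, eigvec = eigval[order], eigvec[:, order]
    shrunk = (1.0 - nu) * eigval + nu                   # eigenvalues of (1-nu)R + nu*I
    c = shrunk[n_eig:].mean()                           # average of discarded eigenvalues
    V = eigvec[:, :n_eig]                               # first n_eig eigenvectors
    A = np.diag(shrunk[:n_eig] - c)
    Q = np.linalg.inv(V @ A @ V.T + c * np.eye(R.shape[0]))
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(D_var)))
    return D_inv_sqrt @ Q @ D_inv_sqrt                  # W_h^{-1}

# Toy residuals for a {4,1} hierarchy (5 nodes), just to exercise the function.
rng = np.random.default_rng(3)
E = rng.normal(size=(40, 5))
W_inv = spectral_precision(E, nu=0.3, n_eig=2)
print(np.round(W_inv, 3))
```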
We observe that most improvements occur at the top level of the hierarchy, while reconciliation at the bottom level yields worse results, especially out-of-sample. Overall, we notice similar improvements for the bottom-up approach compared to more sophisticated methods once the sample size is sufficiently large. Note that the highest improvements are observed for a large AR parameter across all methods. §.§.§ Deeper Hierarchy Table <ref> displays the training errors for a three-level hierarchy using fixed-order models. Note that in this scenario, the full covariance matrix cannot be estimated due to the simple models producing a singular covariance matrix of the base forecast errors. This issue also arises with automatically selected base models. For the other methods, we observe similar improvements at the top level. Interestingly, the spectral method based on dimension reduction performs exceptionally well, yielding better results than the bottom-up reconciliation method based on in-sample errors. Out-of-sample this relationship is turned over and the bottom-up approach generalizes more efficiently. §.§.§ Multi-Step Ahead As we extend the forecast horizon, the results shift, with the bottom-up approach performing worse compared to using the full covariance matrix or even the reduced spectral-based one, as shown in Table <ref>. This trend holds in-sample; however, out-of-sample, the situation changes. The bottom-up method then produces the best test relative errors, as previously observed. §.§.§ Odd Hierarchy Width So far, we have only considered even hierarchy widths such as {4,1} or {12,4,1}. These even aggregations result in a non-negative AR parameter at the top level, even if the bottom level model is generated with a negative one. Table <ref> shows the training and test relative errors for the odd width hierarchy {5,1}. We observe that for a negative AR parameter, the overall improvements are much more significant. In-sample, covariance-based methods still perform better in low sample size settings, with the difference becoming marginally small for larger sample sizes. However, the bottom-up method yields better results on the test set. §.§ ARMA Models of Higher Order For more complex models such as ARMA(2,2) and its aggregates, computing the covariance matrices of forecast errors becomes very tedious. Therefore, we focus on experimental evaluation for these cases to investigate if the implications of Theorem <ref> still hold. As the complexity of an ARMA model increases, identifying the parameter space that yields stationary models becomes non-trivial. It is particularly challenging to define stationary parameter combinations for p,q > 2. To address this, we randomly draw stationary parameters using the partial correlation function as described by <cit.>. For each combination of p ∈{1,2} and q ∈{0,1,2}, we randomly draw 100 sets of parameters ϕ_1, …, ϕ_p, θ_1, …, θ_q. To mitigate the randomness of each realization, we further simulate 20 time series for each of the 100 random parameter sets. Figure <ref> shows the in-sample rMSE values for the full covariance estimator as well as the bottom-up approach for various sample sizes of the top-level. The setting is h=1,k∈{4,1} and σ^2=1 as well as fixed-order models. As in the AR(1) for varying AR parameters, we observe equivalent reconciliation performance for a larger sample size for any ARMA(p,q) present. In the low sample size case we see that bottom-up performs worse with increasing model complexity. 
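One standard way to draw such stationary coefficients, in the spirit of the cited construction (whose details may differ), is to sample partial autocorrelations uniformly in (-1, 1) and map them to AR coefficients with the Durbin–Levinson recursion, as sketched below.

```python
import numpy as np

def pacf_to_ar(pacf: np.ndarray) -> np.ndarray:
    """Map partial autocorrelations (each in (-1, 1)) to AR coefficients via the
    Durbin-Levinson recursion; the resulting AR polynomial is always stationary.
    The same map yields invertible MA coefficients when applied to MA terms."""
    phi = np.array([pacf[0]], dtype=float)
    for m in range(1, len(pacf)):
        r = pacf[m]
        phi = np.concatenate([phi - r * phi[::-1], [r]])
    return phi

def draw_stationary_ar(p: int, rng: np.random.Generator) -> np.ndarray:
    return pacf_to_ar(rng.uniform(-1.0, 1.0, size=p))

rng = np.random.default_rng(4)
phi = draw_stationary_ar(2, rng)
# Check: all roots of 1 - phi_1 z - ... - phi_p z^p lie outside the unit circle.
roots = np.roots(np.concatenate([-phi[::-1], [1.0]]))
print(np.round(phi, 3), bool(np.all(np.abs(roots) > 1.0)))
```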
Interestingly, this difference becomes larger for higher model complexity. We also observe that the full covariance method can produce better forecasts on the bottom level. This improvement also increases with the complexity of the bottom level base model. Overall, the MA order does not seem as impactful as the AR order. Figure <ref> shows the test errors for the very same setting. As in the simple AR(1) case, the roles of bottom-up and using the full covariance matrix estimator switch and the bottom-up approach perform better the more complex the base bottom model is set up to be. In this analysis, we aggregate over the whole space of stationary models of a certain order. Hence we also take a look at the performance of 2-dimensional base models in a more detailed manner. Figure <ref> shows the mean training rMSE differences between the full covariance-based reconciliation and the bottom-up approach for the randomly drawn stationary AR(2) models. Based on this plot, there is no tendency for performance based on the space of the stationary parameters. Test errors are available in the Appendix in Figure <ref>. Similarly, Figure <ref> shows the training mean rMSE differences for ARMA(1,1) models. Test errors are available in the Appendix in Figure <ref>. § REAL DATA APPLICATIONS §.§ A&E Emergency Service Demand Following the data example of <cit.>, we illustrate this paper's work on the Accident & Emergency Service Demand dataset, available from the package in R. In this dataset, a number of demand statistics of A&E departments are recorded on a weekly basis from 2010-11-07 to 2015-06-07. Before any modeling, we perform some preprocessing. To ensure complete observations for the hierarchy, we remove the incomplete years 2011 and 2015, resulting in 208 weeks of data. Next, we decompose the weekly time series of interest into seasonal, trend, and remaining components using the function in R, and remove the seasonal component. For interpretability, we also demean the resulting non-seasonal weekly time series. We analyze the Total Attendances time series and aggregate it on a monthly basis, resulting in a small hierarchy with 52 months of data. The training data consists of the first 41 months, or 164 weeks, with the remaining data designated as test data. As before, we are focused on cumulative one-step-ahead forecasts at the top level of the hierarchy, which in this case would be month-by-month forecasts. Using automated model selection, the chosen models are ARIMA(0,0,0) and ARIMA(1,1,1), respectively. To stick to the framework of temporally aggregated ARIMA models, we fix the orders of the used models accordingly. This yields an ARIMA(1,1,2) model for the monthly time series. The resulting model on the top level gives an AICc value of 406.47 which is only around 0.6% worse than the automatically selected model, hence it still seems like an appropriate model. Table <ref> shows the corresponding errors. We observe better generability of the bottom-up approach compared to using the full covariance matrix. The spectral method does seem to perform quite well out-of-sample leading to similar results as the bottom-up approach. A common aspect is still the fact that each covariance-based reconciliation method achieves worse forecasts on the test set for the bottom level time series. Figure <ref> shows the transformed time series as well as the base and reconciled forecasts, split by training and test set for the bottom-up and full covariance approach. 
§.§ Wool Production Another popular dataset is the woolyrnq dataset, available from the package in R. It is about the quarterly production of woolen yarn in Australia, given in units of tonnes from March 1965 to September 1994. We aggregate the data to biannual as well as annual frequency yielding a 3-level hierarchy with k∈{4,2,1}. In order to have complete observations we remove the partially observed last year 1994. This then gives us 116 quarters, 58 half-years as well as 29 years of data. As previously, we split the data into 80% training data leading to 23 training years. In contrast to the A&E data, we do not perform any preprocessing besides de-meaning for interpretability purposes. A seasonality decomposition such as is not suitable for the annual time series, hence we do not perform it at all. Table <ref> presents the results for fixed order models. According to AICc, the most suitable model for the quarterly time series is an ARIMA(3,1,2) model, which is already quite complex. The theory of aggregated ARIMA models then gives us ARIMA(3,1,3) and ARIMA(3,1,4) models for the biannual and annual time series, respectively. Despite the relatively small sample sizes for the biannual and annual data, these high-complexity models do not seem to suffer from overfitting. Using automated model selection, the corresponding models would be ARIMA(0,1,0) and ARIMA(1,1,1), respectively, which produce very similar results. Therefore, we only present the results for the fixed-order case. Nevertheless, we observe similar effects as with the A&E data. The bottom-up approach performs worse on the training data compared to covariance-based reconciliation methods. On the test data, both the bottom-up approach and the full covariance method exhibit poor generalization, while the spectral and OLS methods perform better. Notably, the full covariance method generalizes even worse than the bottom-up approach, a consistent finding across all data examples and simulations. Figure <ref> shows the transformed time series as well as the base and reconciled forecasts, split by training and test set for the bottom-up and full covariance approach. §.§ Additional Datasets We run experiments on some additional datasets and give an overall summary of the results. Based on the forecasting literature, especially hierarchical forecast reconciliation, we select the following 5 datasets. * Energy <cit.>: Daily electricity generation per source, available from the author's GitHub repository[<https://github.com/PuwasalaG/Probabilistic-Forecast-Reconciliation>]. * Food <cit.>: Daily data from smart fridges with the goal of forecasting the demand for each fridge for the upcoming week in a one-step-ahead fashion. * M3 <cit.>: Quarterly data of the M3 competition. The data was obtained from the R package Mcomp <cit.>. * Prison <cit.>: Quarterly data about Australian prison population per state. * Tourism <cit.>: Monthly data about visitor nights in Australian districts, taken from GitHub[<https://github.com/daniGiro/ctprob>]. This selection of datasets covers a wide range of frequencies and domains, summarised in Table <ref>. To ensure a non-singular covariance matrix estimate in order to be able to compute the full covariance reconciliation method, we maintain a relatively low order of aggregation. Specifically, we aggregate the energy data into weekly data, the M3 data into annual data, and so on. For each time series, we hold out 20% of the data as test data. 
Table <ref> also presents the training and test rMSE values for the selected reconciliation methods, summarized by trimmed means and corresponding standard errors. However, this presentation of the results does not provide much insight into the underlying dynamics. We observe that in-sample, the full covariance method performs well, but it does not generalize effectively. Similarly, the bottom-up approach does not produce the best results on the training data and also yields sub-optimal forecasts on the test data, contrary to the simulations. Comparing the two approaches we do observe that the full covariance method generalizes worse than the bottom-up method, confirming our simulation findings. Finally, the more sophisticated approach of utilizing the spectral decomposition performs well out-of-sample. We conduct an accuracy ranking based on multiple comparisons with the best (MCB) test, introduced by <cit.>, for each dataset, divided into training and test data. Figure <ref> clearly demonstrates the statistically superior performance of the full covariance method compared to the bottom-up approach in-sample, while the performance difference becomes practically negligible on the test data, consistent with our theory and simulations. Additionally, Figure <ref> presents percentile plots comparing the four different approaches. These plots further illustrate that while the full covariance method performs well in-sample, its performance significantly deteriorates out-of-sample. Specifically, on the training data, more forecasts are improved by full covariance reconciliation, but this relationship largely reverses on the test data. § CONCLUSIONS In this paper, we explored the theoretical implications of applying the minimum trace reconciliation method within the context of temporal hierarchies. By examining temporally aggregated ARMA models, we demonstrated that the optimal reconciliation method, when based on the true covariance matrix, is equivalent to a bottom-up approach. Our extensive simulation studies tested this theory across various scenarios involving different model complexities, hierarchy structures, and levels of uncertainty. The findings support our theory, indicating that the bottom-up method is a viable approach. This aligns with numerous literature findings where the bottom-up approach consistently produces useful results in suitable settings. The simulation results also reveal that in-sample, covariance-based minimum trace reconciliation methods outperform the simple bottom-up approach. However, this relationship reverses out-of-sample, with the bottom-up approach generalizing better on the test data compared to the full covariance matrix across simulations and data examples. Further research is necessary to understand why this effect occurs so markedly. Additionally, other estimators were tested and showed improved performance over the full covariance matrix in certain settings, highlighting the potential for the ongoing research of new temporal hierarchical covariance estimators in the minimum trace approach. Overall, our work contributes to the field of temporal forecast reconciliation by linking it to temporally aggregated ARMA models. We have theoretically established that the bottom-up approach is the optimal reconciliation method and reinforced this with comprehensive simulation studies and data illustrations. This supports the use of the bottom-up method in both theoretical and practical applications. 
§ COMPUTATIONAL DETAILS The simulations and data examples were carried out in R 4.3.0. The corresponding source code of this paper in the form of an R package is available from GitHub at <https://github.com/neubluk/FTATS>. For convenience, all datasets except the M3 dataset are included in the package. § DECLARATION OF GENERATIVE AI AND AI-ASSISTED TECHNOLOGIES IN THE WRITING PROCESS During the preparation of this work the authors used ChatGPT in order to improve readability and language. After using this tool/service, the authors reviewed and edited the content as needed and take(s) full responsibility for the content of the publication. § ACKNOWLEDGMENTS AND DISCLOSURE OF FUNDING We acknowledge support from the Austrian Research Promotion Agency (FFG), Basisprogramm project “Meal Demand Forecast” and Schrankerl GmbH for the cooperation and access to their data. We further acknowledge funding from the Austrian Science Fund (FWF) for the project “High-dimensional statistical learning: New methods to advance economic and sustainability policies” (ZK 35), jointly carried out by WU Vienna University of Economics and Business, Paris Lodron University Salzburg, TU Wien, and the Austrian Institute of Economic Research (WIFO). § CALCULATIONS AND PROOFS As in <cit.>, we illustrate this framework based on an AR(1) model. Let y_t∼AR(1) be centered at 0 with AR parameter ϕ∈(-1,1) and innovation variance σ^2. According to Eq. (<ref>) we obtain y_T^∗∼ARMA(1,1) for any k>1 and AR parameter β = ϕ^k. The MA parameter η as well as the noise σ_∗^2 are computed as follows. For lags 0,1 we compute the autocovariances of (1+η B)ϵ^∗_T with B=L^k and T(L)ϵ_t with the aggregation polynomial T(L) given by T(L) = 1-δ^k L^k/1-δ L1-L^k/1-L = ∑_i=0^k-1δ^i L^i ∑_j=0^k-1 L^j, with δ = ϕ^-1 being the inverse root of the corresponding AR polynomial and L being the lag operator such that Ly_t = Ly_t-1. Because the MA order is 1, all lags greater than 1 are zero. First note that T(L)ϵ_t = (1,ϕ,…,ϕ^k-1) [ 1 … … … 1 0 … … 0; 0 1 … … ⋮ 1 0 … 0; ⋮ ⋱ ⋱ ⋱ ⋮ ⋮ ⋱ ⋱ ⋮; ⋮ ⋱ ⋱ ⋱ ⋮ ⋮ ⋱ ⋱ ⋮; k× k0 … … 0 1 k× (k-1)1 … … 1 ]^=A[ ϵ_t; ⋮; ϵ_t-(2k-2) ]. Next, we set up the equations based on the auto-correlation functions to determine η and σ_∗^2. To this end, the variances are computed to be γ^∗(0) = Var( (1+η B)ϵ^∗_T) = (1+η^2)σ_∗^2, which must be equal to γ(0) = Var(T(L)ϵ_t) =σ^2 (1,ϕ,…,ϕ^k-1)AA'(1,ϕ,…,ϕ^k-1)' = σ^2(∑_j=0^k-1(∑_i=0^j ϕ^i)^2 + ∑_j=0^k-1(∑_i=j^k-1ϕ^i)^2). Similarly, the lag 1 auto-covariances are γ^∗(1) = Cov( (1+η B)ϵ^∗_T, (1+η B)ϵ^∗_T-1) = ησ_∗^2, with needed equality to γ(1) = Cov( T(L)ϵ_t, T(L)ϵ_t-k) =σ^2 (1,ϕ,…,ϕ^k-1)ACA'(1,ϕ,…,ϕ^k-1) = σ^2(∑_j=1^k-1(∑_i=j^k-1ϕ^i∑_l=0^j-1ϕ^l)) where C =1/σ^2Cov( (ϵ_t …ϵ_t-(2k-2))', (ϵ_t-k, …, ϵ_t-k-(2k-2))' ) =[ 0_k× (k-1) 0_k× k; I_k-1 0_(k-1)× k ] Solving the system of equations γ(0)=γ^∗(0),γ(1)=γ^∗(1) using (<ref>)-(<ref>) yields σ_∗^2 = σ^2 (1,ϕ,…,ϕ^k-1)AA'(1,ϕ,…,ϕ^k-1)'/1+η^2 η = (1+η^2)ρ_1, where ρ_1=γ(1)/γ(0)=γ^∗(1)/γ^∗(0) denotes the auto-correlation value at lag 1. First, we compute the h-step forecasts of the disaggregated series for h=1,…,k. For the AR(1) process this can be done recursively and we obtain residuals given by e_t^(h) = ∑_i=0^h-1ϕ^i ϵ_t+h-i. The corresponding pairwise covariances are quickly computed for h_1≤ h_2 by Cov(e_t^(h_1),e_t^(h_2)) = σ^2 ∑_l=0^h_1-1ϕ^h_2-h_1+2l = σ^2 ϕ^h_2-h_11-ϕ^2h_1/1-ϕ^2, hence for 𝐞_t = (e_t^(1),…,e_t^(k))' we obtain the covariance matrix on the bottom level Cov(𝐞_t) = σ^2 ΦΦ'. 
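The system γ(0)=γ^∗(0), γ(1)=γ^∗(1) can be solved numerically as in the following sketch, which computes the coefficients of T(L)ϵ_t by direct convolution and picks the invertible root of the resulting quadratic in η; the parameter values are arbitrary.

```python
import numpy as np

def aggregate_ar1(phi: float, sigma2: float, k: int):
    """Parameters (beta, eta, sigma2_star) of the ARMA(1,1) model followed by the
    k-aggregate of an AR(1) with parameter phi and innovation variance sigma2."""
    # Coefficients of T(L) eps_t = (sum_i phi^i L^i)(sum_j L^j) eps_t, length 2k-1.
    c = np.convolve(phi ** np.arange(k), np.ones(k))
    gamma0 = sigma2 * np.dot(c, c)                  # variance of T(L) eps_t
    gamma1 = sigma2 * np.dot(c[k:], c[: k - 1])     # lag-k autocovariance of T(L) eps_t
    rho1 = gamma1 / gamma0
    # eta = (1 + eta^2) rho1  ->  rho1*eta^2 - eta + rho1 = 0, take the invertible root.
    eta = 0.0 if rho1 == 0 else (1.0 - np.sqrt(1.0 - 4.0 * rho1 ** 2)) / (2.0 * rho1)
    sigma2_star = gamma0 / (1.0 + eta ** 2)
    return phi ** k, eta, sigma2_star

beta, eta, s2 = aggregate_ar1(phi=0.8, sigma2=1.0, k=4)
print(round(beta, 4), round(eta, 4), round(s2, 4))   # beta = 0.8**4 = 0.4096
```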
For y^∗_T we perform a 1-step forecast, thus e^∗_T^(1)=ϵ^∗_T+1 with Var(e^∗_T^(1))=σ_∗^2. To compute Cov(e^∗_T^(1),e_t^(h)), we do as follows. First, write ϵ^∗_T+1=y^∗_T+1-β y^∗_T-ηϵ^∗_T, then for T=tk and j=1,…,k we have Cov(ϵ^∗_T+1, ϵ_tk+j) = ∑_i=0^k-1Cov(y_tk+k-i, ϵ_tk+j) = ∑_i=0^k-1∑_l=0^tk+k-iϕ^l Cov(ϵ_tk+k-i-l, ϵ_tk+j) = σ^2∑_i=0^k-jϕ^i = σ^2 1-ϕ^k-j+1/1-ϕ, since Cov(ϵ_tk+k-i-l, ϵ_tk+j)=σ^2 if l=k-i-j and 0 otherwise. Together, we obtain the temporal cross-covariances of Cov(e^∗_T^(1),e_tk^(h)) = Cov(e^∗_T^(1),∑_i=0^h-1ϕ^iϵ_tk+h-i) = σ^2/1-ϕ( 1-ϕ^h/1-ϕ - ϕ^k-h+11-ϕ^2h/1-ϕ^2), hence the cross-covariance vector is given by Cov(e^∗_T,𝐞_tk) = σ^2(1,…,1) Φ̃Φ̃. The minimizer of Eq. (<ref>) is given by G^∗=(S'W_1^-1S)^-1S'W_1^-1. First, note that W_1^-1S = [ 0_k'; (σ^2ΦΦ')^-1 ], due to Cov(e^∗_T,𝐞_tk) = σ^21_k' ΦΦ'. Then the minimizing G^∗ matrix is obtained to be G^∗=(0_k I_k) and hence SG^∗ = [ 0 1_k'; 0_k I_k ], which is exactly the bottom-up forecast for the aggregated series. § ADDITIONAL PLOTS apalike
http://arxiv.org/abs/2407.02766v1
20240703024933
Balancing Patient Privacy and Health Data Security: The Role of Compliance in Protected Health Information (PHI) Sharing
[ "Md Al Amin", "Hemanth Tummala", "Rushabh Shah", "Indrajit Ray" ]
cs.CR
[ "cs.CR" ]
Balancing Patient Privacy and Health Data Security: The Role of Compliance in Protected Health Information (PHI) Sharing Md Al Amin, Hemanth Tummala, Rushabh Shah, and Indrajit Ray Computer Science Department, Colorado State University, Fort Collins, Colorado, USA {Alamin, Hemanth.Tummala, Rushabh.Shah2, Indrajit.Ray}@colostate.edu ============================================================================================================================================================================================================================= § ABSTRACT Protected Health Information (PHI) sharing significantly enhances patient care quality and coordination, contributing to more accurate diagnoses, efficient treatment plans, and a comprehensive understanding of patient history. Compliance with strict privacy and security policies, such as those required by laws like HIPAA, is critical to protect PHI. Blockchain technology, which offers a decentralized and tamper-evident ledger system, hold promise in policy compliance. This system ensures the authenticity and integrity of PHI while facilitating patient consent management. In this work, we propose a blockchain technology that integrates smart contracts to partially automate consent-related processes and ensuring that PHI access and sharing follow patient preferences and legal requirements. Consent, Patient Privacy, Data Security, PHI Sharing, Provenance, Compliance, Blockchain, Smart Contract. § INTRODUCTION Electronic health record (EHR) systems have significantly improved healthcare services, such as enhanced collaboration among healthcare professionals, more accurate diagnoses, faster treatment, and convenient access to patient-protected health information <cit.>. EHR systems greatly facilitate the access and sharing of digitized healthcare information, allowing providers to easily exchange sensitive medical data with other professionals. Data sharing is essential for numerous aspects of patient care, including enhancing diagnosis and treatment plans through consultations with specialists, leveraging advanced technologies for more precise radiology and pathology analyses and diagnosis, elevating the overall quality of patient care, and others <cit.>. Furthermore, there are instances where healthcare data is utilized for research and marketing endeavors, provided specific requirements are fulfilled <cit.>. Health records can be shared through the EHR system using health information exchanges (HIE), specialized networks that rely on interoperable systems to share electronic health information seamlessly and securely <cit.>. Providers also share PHI through email or other electronic mediums <cit.>. Regardless of the PHI sharing mechanism, ensuring health data security and patient privacy is mandatory. Acquiring patient consent for healthcare information sharing is paramount for adhering to policy compliance, particularly concerning regulations like the Health Insurance Portability and Accountability Act (HIPAA) in the U.S. and the General Data Protection Regulation (GDPR) in the E.U <cit.>. These regulatory frameworks emphasize protecting health information and upholding the patient's right to privacy. Patient consent is a cornerstone of these regulations, ensuring individuals have control over their health data and its dissemination. Under HIPAA, healthcare entities must obtain explicit consent before sharing healthcare data for purposes beyond treatment, payment, or healthcare operations. 
Similarly, GDPR enforces strict guidelines on data consent, processing, and privacy, offering individuals the 'right to be forgotten' and the autonomy to decide how their data is used and shared. From a policy compliance perspective, proper patient consent acquisition is a legal requirement and a trust-building measure, reinforcing the patient-provider relationship. It ensures transparency in data handling and builds patient confidence, knowing that sensitive information is shared respectfully and responsibly. As healthcare continues to integrate with various technologies, upholding these consent protocols is crucial for maintaining the security and privacy of patient data and adhering to global data protection standards. Unauthorized health data access and disclosure are common events in the healthcare industry and heighten security and privacy concerns. Table <ref> shows the number of compliance complaints received by the U.S. Department of Health and Human Services (HHS) Office for Civil Rights (OCR) <cit.>. The primary reasons for the complaints are (i) impermissible uses and disclosures of PHI, (ii) lack of safeguards of PHI, (iii) lack of patient access to their PHI, (iv) lack of administrative safeguards of electronic PHI, and (v) use or disclosure of more than the minimum necessary PHI. These issues can be minimized by enforcing patients' consent in data access and sharing decisions and by employing proper data protection mechanisms such as encryption and anonymity. Consent lets patients control their healthcare journey, enabling them to make choices that align with their best interests and well-being <cit.>. Enhanced security and privacy technologies are essential for protecting patient data from being compromised, misused, or disclosed. However, substantial evidence indicates that many instances of unauthorized EHR access and sharing are rooted in inadequate policy adoption, implementation, and enforcement <cit.>. Often, users are granted access privileges inappropriately, whether intentionally or not. Policy compliance frequently falls short, and access control measures are not rigorously monitored or executed on time. A common oversight is the blanket assignment of identical roles and privileges to all employees, neglecting the nuances of individual patient-level policies. Moreover, auditing and monitoring practices are typically reactive, triggered only by serious complaints or legal mandates, rather than proactive and consistent. These policy specification and enforcement flaws significantly impact informed consent policies, underscoring the need for a more accurate and systematic approach to effectively protecting patient healthcare data and preserving privacy. It is essential to address the following concerns to guarantee compliance with the applicable privacy and security policies, industry best practices, and contractual obligations for sharing PHI: (i) Patient-level policies or consents are often not properly or timely enforced in healthcare data sharing. (ii) Patients lack assurance that consent for access or sharing is executed strictly by the designated users and only when the stipulated conditions are met, with all other requests rejected. (iii) Data sharing over email or other mediums is insecure due to the absence of encryption or the use of inadequate and weak encryption algorithms and key sizes. (iv) The centralized hospital system serves as a singular source of truth and a potential single point of failure for managing audit trails. 
(v) The absence of a verifiable, unaltered record of consent execution and PHI sharing highlights the need for comprehensive consent provenance. (vi) Compliance assessments and audits are not conducted accurately and in a timely manner to check compliance status. To address the aforementioned challenges and requirements, this paper proposes a framework based on blockchain and smart contracts for managing and enforcing informed consent when sharing PHI with entities outside the treatment team. The approach ensures that PHI sharing occurs only when the sender has obtained the necessary consent from the patient and the sharing aligns with specific, predefined purposes. In addition to enforcing patient consent, this approach integrates other relevant security policies and industry best practices to ensure data protection. The transmission security requirements mandated by the HIPAA Security Rule are outlined under 45 CFR § 164.312(e)(1) Technical Safeguards <cit.>. However, the proposed approach does not directly guarantee security mechanisms like encryption for data protection. Instead, it leverages an honest broker, which acts as a blind and secure entity that evaluates the intended PHI and certifies whether the required protection mechanisms are satisfied <cit.>. The broker's attestation is then recorded in blockchain-based audit trails with other relevant activity data to support future compliance evaluation and validation. The framework thus relies on blockchain-based audit trails, or provenance mechanisms, which are essential for keeping track of PHI-sharing activities. Moreover, the proposed framework provides a compliance-checking mechanism for data-sharing activities, ensuring adherence to applicable policies. Smart contracts <cit.> offer an automated, transparent system that upholds the integrity and accountability of the consent for sharing PHI. Through this smart contract-based approach, the proposed framework not only automates processes but also guarantees the accurate execution of informed consent, thereby enhancing the security and reliability of PHI sharing. Blockchain technology ensures the immutability of submitted records, safeguarding the integrity of the audit trail and enabling the detection of any unauthorized alterations. Blockchain security features, including non-repudiation, ensure that participants cannot deny their actions <cit.>. This work is the first to capture patients' informed consent for PHI sharing and to ensure policy compliance through provenance preservation and compliance checking. It also considers and enforces other applicable security policies and industry best practices mandated by various laws, regulations, standards, and contractual obligations to meet compliance requirements. Significant contributions include (i) implementing a mechanism to capture patients' consent for sharing healthcare data beyond the treatment team members. (ii) Storing obtained consents in a decentralized and distributed network (blockchain) to avoid a single source of truth and a single point of failure. (iii) Considering applicable security and privacy policies, regulatory requirements, and contractual obligations to ensure compliance-based sharing. (iv) Enforcing informed consent and applicable policies while making authorization decisions to share health records. (v) Equipping blockchain-based audit trail mechanisms to guarantee data provenance. (vi) Incorporating compliance assessment methods to identify compliant and non-compliant PHI sharing. 
(vii) Offering consent services to provide precise and comprehensive insights into the consent granted and the extent of its execution. The remainder of the paper is organized as follows: Section <ref> discusses some works that are related to this work. The proposed approach is explained in Section <ref> with the necessary components. Section <ref> gives the structure of audit trails and consent provenance. The compliance verification mechanism is explained in Section <ref>. Section <ref> discusses essential services for given executed consents. The experimental evaluations of the proposed approach are provided in Section <ref> for PPA integrity storage, patient contracts deployment, consent storage cost, and writing and reading time. Section <ref> concludes the paper with future research directions on consent management. § RELATED WORK Blockchain technology has increasingly been adopted in healthcare for various services, particularly for sharing protected health information among healthcare providers, patients, and other stakeholders. Blockchain facilitates a more efficient, transparent, and patient-centered delivery of healthcare services, making it an essential component in modern healthcare infrastructure. Fan et al., <cit.>, proposed a blockchain-based secure system, MedBlock, to share electronic medical records among authorized users. It provides security and privacy with access control protocols and encryption technology while sharing patient healthcare data. Shah et al., <cit.>, proposed a medical data management framework to facilitate data sharing. It gives patients full control over access to their medical data. It also ensures that patients know who can access their data and how it is used. Zhuang et al., <cit.>, addressed a blockchain-based patient-centric health information-sharing mechanism protecting data security and privacy, ensuring data provenance, and providing patients full control over their health data. However, consent structure and compliance requirements are not addressed, which are very important to give patients confidence in how their consent is executed and how data is protected. Alhajri et al., <cit.>, explored the criticality of implementing legal frameworks to safeguard privacy within fitness apps. By examining how various fitness apps handle consent and privacy policies, their research highlighted the crucial role of consent as outlined in the GDPR. The authors proposed the adoption of blockchain technology as a means to govern user consent for sharing, collecting, and processing fitness data, ensuring a process centered around human needs and compliant with legal standards. Nonetheless, the study failed to present a technical architecture for their blockchain-based proposal. Amofa et al. approached a blockchain-based personal health data sharing framework with an underlying mechanism to monitor and enforce acceptable use policies attached to patient data <cit.>. Generated policies are consulted with smart contracts to make decisions on when the intended data can be shared or otherwise. All entities cooperate to protect patient health records from unauthorized access and computations. Balistri et al., <cit.>, designed the BlockHealth solution for sharing health data with tamper-proofing and protection guarantees. They store the patient's healthcare data in a private database, and the hash of the healthcare data is stored in the blockchain to ensure data integrity. 
Shen et al., <cit.>, proposed MedChain, a blockchain-based health data sharing approach where data streams are continuously generated from sensors and other monitoring devices from various patients' bodies. The collected data are shared with laboratories and health organizations for diagnosis, advanced treatment, and further research. The above-mentioned papers summarized the application and benefits of using blockchain for healthcare data sharing and essential services. However, they failed to address the security and privacy requirements mandated by various laws and regulatory agencies, such as HIPAA and GDPR. The major requirements demand patient consent and proper protection, such as encryption, while sharing health records. In addition, it is crucial to maintain audit logs and check that those activities did not violate any policies. This paper proposes sharing informed consent as the smart contract for authorization with provenance and compliance-checking mechanisms. § PROPOSED APPROACH The main objective is to ensure compliance with applicable security and privacy policy for PHI sharing. To ensure compliance, we need proper policy enforcement, including maintaining provenance and performing compliance status checks promptly and properly. For enforcement, this paper considers patient-informed consent, where the sender has permission from the patient to share the intended PHI with the receiver for specific purposes. Also, proper data protection mechanisms are considered. However, instead of ensuring data protection directly, this work leverages an honest broker to verify and certify the data protection mechanism. PHI-sharing activities are recorded as audit trails to provide provenance and reconstruct events in a manner that reflects their actual occurrence. A private blockchain-based approach is proposed (Section <ref>). Finally, a blockchain consensus mechanism called Proof of Compliance (PoC) is approached, Section <ref>, for performing auditing. This audit rigorously examines the enforcement actions against the policy standards and informed consent, using the provenance data to verify and certify the policy's compliance status while sharing health records. The seamless connection between policy enforcement, provenance, and the auditing process forms the backbone of a secure and compliant system. §.§ Patient-Provider Agreement (PPA) The patient-provider agreement, or PPA, aims to determine who is responsible for what in treatment. A PPA is formed when a patient visits a hospital and is properly documented to deliver healthcare services. It differs from organization to organization. Healthcare organizations adjust what they need from patients and what they expect from them to match those needs, treatments, and responsibilities. This is done based on the nature and needs of treatment and services. Also, the components and representation of the PPA depend on the hospital or clinic. Figure <ref> shows the structure of a PPA, and Algorithm <ref> illustrates the gradual processes for creating a PPA with the required components. The main concept of PPA is adopted from <cit.>. The authors focused on consent management for medical treatment and diagnosis purposes, mainly for the treatment team members. They did not include patient consent and other requirements for health data sharing beyond the treatment team. This paper extends the PPA structure to analyze the requirements and formalize the consent components for PHI sharing. 
A PPA is formally defined as a five-tuple, PPA = (PC, PrC, TIC, SIC, ROC), satisfying the following requirements: (A) PC is a finite set of patient components containing the patient's personal information, contact information, mailing information, pharmacy information, billing and insurance information, emergency contact, and others. The patient is responsible for providing and maintaining valid, accurate, and updated information for these components. (B) PrC is a finite set of provider components, including the treatment team, prescriptions, and others. The provider is responsible for creating an effective team to provide appropriate care. Everything from treatment to insurance coverage and billing is considered during the patient treatment period. (C) TIC is a finite set of treatment informed consent components. It denotes that the patient has permitted the designated treatment team to access medical records. Treatment team members include doctors, nurses, support staff, lab technicians, billing officers, emergency contact persons, and others assigned by the authority. Some outside members are insurance agents, pharmacists/pharmacy technicians, doctors/lab technicians from another hospital, etc. (D) SIC is a finite set of sharing informed consent components. It denotes the patient's consent to sharing medical data for a specific purpose. The consent must cover both the sender and the receiver. The primary focus of this work is SIC, including (i) identifying, capturing, and storing consent components, (ii) enforcing consents with other applicable security policies and industry best practices to ensure policy compliance while making PHI-sharing decisions, (iii) defining and capturing provenance information with the enforced consents to maintain audit trails, (iv) performing compliance checking using consensus mechanisms; (v) providing services for both given and executed consents, etc. This work does not consider the other components: PC, PrC, TIC, and ROC. (E) ROC is a finite set of regulatory and other components. It contains the applicable security and privacy policies required to comply with the requirements of local, state, federal, and foreign governments and of regulatory agencies (HIPAA, GDPR) if necessary. It also includes contractual obligations in some cases. §.§ Sharing Informed Consent (SIC) Before approving, patients need to know clearly what the sharing informed consent entails, particularly who can share which PHI with whom and for what purposes, as well as the protection mechanism applied while the PHI is transmitted over the network. Figure <ref> shows the SIC conceptual framework structure. Sharing informed consent is formally defined as a four-tuple, SIC = (S, R, PHI, P), satisfying the following requirements: (a) S is a finite set of authorized senders, denoted as { S_1, S_2, S_3, ..., S_s} for s senders. A sender can share certain healthcare data with a receiver, provided the patient has granted permission. The sender may be a member of the patient treatment team or anyone from the provider. (b) R is a finite set of authorized users who receive protected health information from authorized senders. The r authorized receivers are denoted as { R_1, R_2, R_3, ..., R_r}. The receiver may be from other hospitals, labs, medical research institutes, pharmaceutical companies, marketing departments, government offices, etc. (c) PHI is a finite set of d health data items, denoted by { PHI_1, PHI_2, PHI_3, ..., PHI_d}. 
It is an electronic version of a patient's medical data that healthcare providers keep over time. They are protected health information and contain sensitive patient information. PHI must be protected from any kind of unauthorized access, disclosure, and sharing. Table <ref> shows ten (10) types of PHI, considered for each patient, with PHI ID, name, description, and potential creators. (d) P is a finite set of purposes. It indicates the objective of the PHI sharing by the senders with the receivers. Receivers must use the received PHI for the intended purposes. A finite set of purposes, a p number, can be denoted as { P_1, P_2, P_3, ......P_p}. The objective of sharing protected health information outlines the specific reasons for its sharing. The recipient must utilize the shared PHI exclusively for its designated purpose. The potential reasons for sharing PHI in this study include, but are not limited to: (i) Treatment: Providers or patients need to share PHI with other providers from external hospitals to provide better treatment. Also, patients must move to different regions, like states or countries, due to family movement, job transfers, or new jobs. Patients need to share or transfer healthcare data from the previous providers to the current. (ii) Diagnosis: Present providers sometimes need more skilled human resources, appropriate machinery, instruments, or sophisticated technology to diagnose disease. But it is urgently required to do that to give proper treatment and services to save patients' lives or minimize damages. Patients' health data must be transferred or shared with other providers or labs to complete diagnosis and make proper treatment plans for the patients. (iii) Marketing: Healthcare data sharing for marketing purposes involves using patient data to promote healthcare services, products, or initiatives. This can help healthcare providers tailor their services to patient needs, inform patients about new treatments or products, and improve patient engagement. Only the receiver entity can use the shared data as intended and should not share it with other associates for extended business purposes. (iv) Research: Sharing PHI for medical research purposes holds significant potential for advancing medical knowledge, leading to breakthroughs in understanding diseases, improving and developing new treatments, improving healthcare systems and services, and enhancing patient outcomes. Patients' privacy and rights must be respected. Other purposes might exist depending on the nature and requirements of the treatment, patient conditions, provider business policy, etc. This study considers only the four purposes mentioned above. After receiving shared data, the receiver performs specified operations to complete the job. It is assumed that the receiver cannot share data with other users who do not have permission from the patients. More specifically, the receiver's healthcare system does not allow the sharing of PHI by any means, like printouts, email, or screenshots. However, this paper doesn't provide detailed mechanisms or techniques for preventing data sharing without patients' consent at the receiver end. §.§ SIC Smart Contract Deployment Once a Patient-Provider Agreement, or PPA, is created and stored in the repository, all sharing informed consent components are deployed to the blockchain network. For each patient, there is one smart contract that contains all consents for that particular patient. 
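To make the SIC tuple concrete, the following minimal Python sketch models the components defined above and the one-container-of-consents-per-patient design mentioned in the previous sentence. It is an illustration only: all class and field names are hypothetical, the patient and PHI identifiers echo the examples used later in the paper (Donald, Steve, PHI-1008) or are invented for this example (Alice, PHI-1003), and the actual on-chain representation in the proposed framework is a smart contract.

```python
# Minimal, hypothetical model of SIC = (S, R, PHI, P); the real framework stores
# these components in one smart contract per patient on the blockchain.
from dataclasses import dataclass
from enum import Enum

class Purpose(Enum):
    TREATMENT = "treatment"
    DIAGNOSIS = "diagnosis"
    MARKETING = "marketing"
    RESEARCH = "research"

@dataclass(frozen=True)
class SharingInformedConsent:
    consent_id: str
    sender: str        # S: authorized sender
    receiver: str      # R: authorized receiver
    phi_id: str        # PHI: protected health information item
    purpose: Purpose   # P: intended use of the shared PHI

# One container of consents per patient, mirroring the one-contract-per-patient design.
patient_consents = {
    "P-001": [
        SharingInformedConsent("SIC-001", "Donald", "Steve", "PHI-1008", Purpose.TREATMENT),
        SharingInformedConsent("SIC-002", "Donald", "Alice", "PHI-1003", Purpose.RESEARCH),
    ],
}
print(len(patient_consents["P-001"]))   # 2
```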
If no smart contract exists for the patient yet, the authority deploys one, transfers ownership to the patient, and records the contract address in the patient's profile and the hospital systems. The contract address is an identifier for a smart contract in the blockchain network. This smart contract-based approach provides an automated system and guarantees the integrity and accountability of the deployed consents. Once consents are deployed or added to the smart contract, they cannot be altered. The authorization module needs to access these smart contracts to make decisions considering the sender, receiver, and purpose attributes, environmental factors, organizational policies, regulatory frameworks, etc. Upon finalization, the PPA is transformed and securely stored in a PPA repository. Subsequently, an integrity marker, such as a hash (ℍ_PPA_i) generated by Algorithm <ref>, is stored on the blockchain alongside the PPA ID for later modification detection. These are depicted in Steps 2 and 3 in Figure <ref>. The Smart Contract Deployment Unit (SCDU) then gathers all components of the informed consent from the PPA (Step 4). It verifies their integrity to ensure no deliberate or accidental alterations have occurred (Step 5). As a secure entity, the SCDU does not alter consent components, since any modification invalidates the consent. If the consents remain unmodified, the SCDU creates and deploys the corresponding smart contracts on the blockchain network (Step 6) and then updates the patient's profile and the hospital system (Step 7). Users with the required credentials can query the blockchain network regarding informed consent and receive responses in Step 8. §.§ Honest Broker, Applicable Policies and Industry Best Practices Alongside patient consent, the proposed approach incorporates relevant security policies and industry best practices before sharing protected health information. For instance, a security policy might require a data protection mechanism during data transfer between systems. For treatment and diagnosis purposes, encryption is a recommended protection method. As an industry best practice, the Advanced Encryption Standard (AES) is preferred over the Data Encryption Standard (DES). Furthermore, best practice advises using a robust, long encryption key (256 bits) rather than a weaker, shorter one (64 or 128 bits). The sender must encrypt the intended PHI using the AES-256 algorithm before it leaves the system when it is shared for treatment and diagnosis. However, the proposed approach does not encrypt the healthcare data directly or ensure a strong key size while the intended healthcare data is encrypted. Also, it does not address key management mechanisms such as creation, storage, sharing, updating, deletion, etc. It is assumed that key management is handled securely and separately. Similarly, anonymity is a recommended protection method for marketing and research purposes, where patient identifiers must be removed before sharing. The targeted PHI must be anonymized using proper techniques and tools before the data is sent from the host healthcare system to the receiver. The host system is where the patients' PHI is created or presently stored. Healthcare organizations deploy appropriate encryption and anonymization mechanisms. This study does not directly ensure PHI encryption and anonymity. Instead, the approach leverages an honest broker, a trusted entity that evaluates the encryption algorithm, key size, and data anonymity status <cit.>. 
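As a rough illustration of this check (a sketch under assumed data structures, not the paper's implementation), the honest broker can be thought of as a function that inspects metadata about the outgoing PHI payload and returns an attestation stating whether the protection required for the declared purpose is satisfied. Only the attestation, never the PHI content itself, leaves the broker, in line with its 'blind' role.

```python
# Hypothetical honest-broker check: AES-256 for treatment/diagnosis, anonymization
# for marketing/research. Field names are assumptions made for this example.
from datetime import datetime, timezone

REQUIRED = {
    "treatment": {"mechanism": "AES", "min_key_bits": 256},
    "diagnosis": {"mechanism": "AES", "min_key_bits": 256},
    "marketing": {"mechanism": "anonymization"},
    "research": {"mechanism": "anonymization"},
}

def broker_attest(broker_id, phi_id, purpose, payload_meta):
    rule = REQUIRED[purpose]
    if rule["mechanism"] == "AES":
        satisfied = (payload_meta.get("encryption") == "AES"
                     and payload_meta.get("key_bits", 0) >= rule["min_key_bits"])
    else:
        satisfied = payload_meta.get("anonymized", False) is True
    return {
        "honest_broker_id": broker_id,
        "phi_id": phi_id,
        "purpose": purpose,
        "required": rule["mechanism"],
        "satisfied": satisfied,
        "checked_at": datetime.now(timezone.utc).isoformat(),
    }

# AES-256-encrypted PHI shared for treatment passes; a 128-bit key would not.
print(broker_attest("HB-01", "PHI-1008", "treatment",
                    {"encryption": "AES", "key_bits": 256})["satisfied"])   # True
print(broker_attest("HB-01", "PHI-1008", "treatment",
                    {"encryption": "AES", "key_bits": 128})["satisfied"])   # False
```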
After checking, the honest broker certifies or attests to the status, which is recorded in audit trails as proof for policy compliance verification, along with other components like sharing informed consents, timestamps, etc. Depending on the specific policies and practices of the healthcare organization, this broker could be either a human or a non-human (automated) entity. The honest broker's role is confined; it does not share healthcare data with other entities. It also does not analyze data to gain insights about the patient or share those insights. Effectively, it functions as a 'blind' entity, ensuring encryption standards and the anonymity status of the PHI without engaging with the actual data content. §.§ PHI Sharing Authorization Process Consent enforcement ensures that related consents are executed while making decisions for the PHI sharing requests. All consents are stored on the public blockchain network as smart contracts and cannot be enforced until they are called. The authorization module (AM) considers sharing informed consent with applicable policy and required attributes while making decisions. The attributes may be subject, object, operation, and environmental attributes. The sender must provide the necessary credentials for identification and authentication. Figure <ref> shows the informed consent enforcement for PHI-sharing authorization. A sender submits a data sharing request to the PHI sharing unit in Step 1. Sharing unit forwards request to authorization module for decision in Step 2. It also requests that the PHI storage unit send the intended PHI to the protection mechanism unit in Steps 2a and 2b. The honest broker receives encrypted or anonymized data in Step 3. After analyzing, it sends a report to AM in Step 4. The AM queries the blockchain network through the corresponding smart contract to get sharing informed consent information for the sharing request in Step 3a and 4a. It also makes queries for requests related to applicable policies and required attributes in Steps 3b and 3c. It receives the policy and attributes in Steps 4b and 4c. After evaluating, it makes an authorization decision and sends it to the sharing unit in Step 5. If the request is approved, the sharing unit gets encrypted or anonymized data based on the purpose in Steps 7a and 7b. Then, it delivers the intended PHI through email or protocol to the receiver in Step 8. The audit trail recording unit collects logs from AM in Step 6a and from the honest broker in Step 6b. It combines logs and stores as an audit trail in Step 6c in Private Audit Blockchain. Section <ref> discusses block structure and others. The compliance status checking is done in Steps 9a, 9b, and 9c by the Proof of Compliance consensus mechanism. Compliance status reports are produced in Step 10. Section <ref> discusses the required mechanism. For this study, it is considered that the authorization module is not compromised or tampered with. It is the reference monitor for making access decisions and must be tamper-proof <cit.>. Also, the communication channel between AU and the smart contract access points or apps is secured from malicious users. § PHI SHARING PROVENANCE Enforcing an applicable set of policies is crucial, but preserving data provenance to show adherence to these policies is also essential. Nevertheless, policy compliance cannot be quantified or confirmed in isolation. 
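Before moving to provenance, the authorization rule of the preceding subsection can be condensed into a short sketch: a request is approved only when a matching deployed consent exists and the broker attestation for the intended PHI is positive. The structures and names below are illustrative assumptions, not the framework's actual interfaces.

```python
# Hypothetical decision rule of the authorization module (AM).
def authorize(request, deployed_consents, attestation):
    consent_ok = any(
        c["sender"] == request["sender"]
        and c["receiver"] == request["receiver"]
        and c["phi"] == request["phi"]
        and c["purpose"] == request["purpose"]
        for c in deployed_consents
    )
    broker_ok = (attestation["phi_id"] == request["phi"]
                 and attestation["satisfied"])
    return "approve" if consent_ok and broker_ok else "reject"

consents = [{"sender": "Donald", "receiver": "Steve",
             "phi": "PHI-1008", "purpose": "treatment"}]
request = {"sender": "Donald", "receiver": "Steve",
           "phi": "PHI-1008", "purpose": "treatment"}
print(authorize(request, consents, {"phi_id": "PHI-1008", "satisfied": True}))    # approve
print(authorize(request, consents, {"phi_id": "PHI-1008", "satisfied": False}))   # reject
```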
An independent auditor conducts a thorough policy audit to verify compliance with the policy, utilizing the available provenance data to ascertain and certify the policy's compliance status. For an accurate policy compliance assessment, two critical elements must be diligently maintained: (i) consent and policy lineage and (ii) PHI sharing activity audit trails. This section contains the detailed provenance mechanisms dedicated to preserving the policy lineage's integrity and ensuring the audit trails' authenticity. §.§ Consent and Policy Lineage Policy lineage involves a comprehensive record of all policies that guide the authorization module's decisions. It's a transparent and traceable record of the policy history and its application in decision-making processes. For this study, sharing informed consent is mainly considered for decision-making. Since all consents are deployed as smart contracts, blockchain networks can create policy lineages. However, this paper does not consider other HIPAA-related policies, such as physical security, provider training, etc <cit.>. §.§ PHI Sharing Activity Audit Trails Integrity in policy enforcement ensures that events are documented faithfully, reflecting the sequence and nature of actions taken. This authenticity is crucial for transparency and accountability. Provenance plays a key role by offering a detailed and unalterable history of policy enforcement actions as they are carried out, safeguarding against any tampering of records. The alteration of audit trails or unauthorized access to healthcare data is strictly prohibited to maintain the sanctity of the process. Maintaining the integrity of the audit trail is essential for policy compliance assurance. If integrity is compromised, checking compliance status to find compliance and non-compliance cases is questionable. The blockchain provides these requirements as ledger properties. This work adopts private blockchain as an audit trail storage system. Figure <ref> illustrates the private audit blockchain's block components and structure. Each block has a block header part that contains block metadata and a data part that stores the audit trail data. Each audit trail has five components: (i) audit trail ID; (ii) informed consent ID or SIC ID; (iii) honest broker ID; (iv) honest broker report; and (v) timestamp data. The audit trail ID provides unique identifiers; the informed consent ID, or SIC ID, indicates the consent that is executed to share the intended PHI. From SIC ID, it is possible to get the components: sender, receiver, PHI, and purpose. The honest broker ID indicates which broker certifies or attests to the intended PHI's protection status (encryption or anonymity). Finally, the timestamp means the time when the sharing authorization is done. Steps 6a, 6b, and 6c in Figure <ref> show the process of capturing audit trails from the authorization module and honest broker. Enforcement activity data is collected and stored in a private blockchain known as an audit blockchain as immutable records to ensure consent provenance and maintain compliance. The private blockchain network is managed and maintained by an authority, which means reading and writing permissions are given to limited participants or users. In this case, the trust and transparency of the private blockchain are questionable. It doesn't provide a public eye to maintain trust and transparency. Storing audit trails on the public blockchain gives trust and transparency, which is another issue to consider. 
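The audit-trail record with the five components listed above, together with the integrity check against a hash anchored on a public chain that is discussed next, can be sketched as follows. The structure is an assumption for illustration; the deployed system uses an Ethereum-client-based private blockchain.

```python
# Hypothetical audit-trail record (five components as in the text) and block hashing.
import hashlib, json, time

def sha256_hex(text):
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

trail = {
    "audit_trail_id": "AT-001",                                   # (i)   unique identifier
    "sic_id": "SIC-001",                                          # (ii)  executed consent
    "honest_broker_id": "HB-01",                                  # (iii) attesting broker
    "broker_report": {"required": "AES-256", "satisfied": True},  # (iv)  certified status
    "timestamp": int(time.time()),                                # (v)   authorization time
}

def make_block(prev_hash, trails):
    header = {"prev_hash": prev_hash,
              "body_hash": sha256_hex(json.dumps(trails, sort_keys=True))}
    header["block_hash"] = sha256_hex(json.dumps(header, sort_keys=True))
    return {"header": header, "data": trails}

block = make_block("0" * 64, [trail])

# Only (block_id, block_hash) would be anchored on the public chain; an integrity check
# recomputes the hash from the private block and compares it with the anchored value.
anchored_hash = block["header"]["block_hash"]
recomputed = sha256_hex(json.dumps(
    {"prev_hash": block["header"]["prev_hash"],
     "body_hash": sha256_hex(json.dumps(block["data"], sort_keys=True))},
    sort_keys=True))
print(recomputed == anchored_hash)   # True
```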
First, audit trails contain sensitive information, such as user activities, and storing them on a public blockchain raises security and privacy concerns. Second, audit trails produce enormous amounts of data, which is expensive to store on a public blockchain. This is not feasible from a business perspective, as it increases operating costs, treatment costs, and service charges. To overcome the aforementioned issues, this research stores audit trail data on a private blockchain called the private audit blockchain. It then stores each private audit blockchain block ID and hash as an integrity marker on the public blockchain. Storing the block ID and integrity marker incurs only a small cost and provides trust and transparency. Any modification of the private audit blockchain data can be detected by comparing the block's recomputed hash with the hash stored on the public blockchain. Figure <ref> shows the relationship between the private and public blockchains for storing the audit block ID and integrity marker on a public blockchain like Ethereum. We have configured a private blockchain based on the Ethereum client <cit.>, with the necessary smart contracts and APIs for capturing and storing audit trail data in the audit blockchain. § COMPLIANCE VERIFICATION Enforcing applicable policies and maintaining audit trails are insufficient to ensure policy compliance. There must also be a mechanism that checks compliance status by evaluating the audit trails against the deployed and enforced policies. The compliance checker must be an independent entity, separate from the policy enforcer and the audit trail unit. This paper proposes a blockchain consensus mechanism to perform compliance-checking operations on the audit trails using the deployed sharing informed consents (SIC) and other applicable policies. The consensus mechanism, called Proof of Compliance (PoC), is governed by a set of independent, distributed, and decentralized auditor nodes. Section <ref> discusses the sharing informed consent structure and its deployment as smart contracts on the public blockchain. Section <ref> gives the audit trail capturing and storing mechanism. Figure <ref> depicts the transaction structure of the Proof of Compliance consensus mechanism. The PoC takes as input an audit trail that contains (i) the audit trail ID, (ii) the informed consent ID or SIC ID, (iii) the honest broker ID, (iv) the honest broker report, and (v) timestamp data. The applicable policy and sharing informed consent are retrieved from the policy repository and the public blockchain to check the status of each audit trail. After verifying, each auditor node determines the compliance status of each transaction. There are three compliance statuses: (i) compliant, indicating no security or privacy policy violation; (ii) non-compliant, indicating a policy violation; and (iii) non-determined, indicating that the information required to check the status is not available. The auditor nodes can be hospitals, various governments, regulatory agencies, insurance companies, business associates, and others. They do not store audit trail data and are responsible for maintaining the compliance status of each transaction. Reports from all auditor nodes are collected and combined for the final decision. Algorithm <ref> shows the core functionalities of PoC: signature verification and ordering, transaction validation, policy compliance verification, and ledger modification. Due to page constraints, we do not include detailed protocols, communication mechanisms, and synchronization techniques. 
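A simplified sketch of how a single auditor node might classify an audit trail, and how individual verdicts could be combined, is given below. It assumes majority voting and toy data structures purely for illustration; the actual PoC protocol, its signatures, and its synchronization are not captured here.

```python
# Toy sketch of one auditor node's verdict and a majority-based final status.
from collections import Counter

def node_verdict(trail, deployed_sic_ids):
    if "sic_id" not in trail or "broker_report" not in trail:
        return "non-determined"          # required information unavailable
    if trail["sic_id"] not in deployed_sic_ids:
        return "non-compliant"           # no matching deployed consent
    if not trail["broker_report"].get("satisfied", False):
        return "non-compliant"           # attested protection requirement violated
    return "compliant"

def poc_status(trail, deployed_sic_ids, n_auditors=5):
    verdicts = [node_verdict(trail, deployed_sic_ids) for _ in range(n_auditors)]
    return Counter(verdicts).most_common(1)[0][0]

deployed = {"SIC-001", "SIC-002"}
print(poc_status({"sic_id": "SIC-001", "broker_report": {"satisfied": True}}, deployed))
# compliant
print(poc_status({"sic_id": "SIC-999", "broker_report": {"satisfied": True}}, deployed))
# non-compliant
```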
These details, together with performance evaluations of compliance accuracy, data security, and privacy, are left for future work. § SIC PROVENANCE SERVICES Patients need to be provided with the specifics of their given sharing informed consent: who can share which PHI with whom, and for what purposes. Additionally, patients should understand the execution of their consent, including the details of who shares which healthcare data, the timing of these actions, and others. They should also know whether those sharing activities comply with the applicable security and privacy policies, regulatory requirements, industry best practices, contractual obligations, etc. This section outlines the services related to given and executed consent that patients can access within the proposed framework, provided they have the necessary credentials. The primary goal of the provenance services is to ensure patients receive accurate and comprehensive information and have confidence regarding their given and executed informed consent. §.§ Given Consent Services In this scope, patients can access the list of all the consents given to date for sharing healthcare data. These consents are in their original state and may or may not yet have been executed in data-sharing decisions. Patients can see the list, where each consent specifies the sender, the receiver, the protected health information, and the purpose of sharing recorded when the sharing informed consent was given. Given consent services can be delivered: (i) sender-oriented, (ii) receiver-oriented, (iii) PHI-oriented, and (iv) purpose-oriented. For example, patients can have sender-oriented consent services that include all the consents given to a particular sender or a group of senders. Figure <ref> depicts sender-oriented given consents for Donald, who has permission to share PHI with various receivers. Figure <ref> shows the PHI-oriented given consents for health record PHI-1008. §.§ Executed Consent Services After generation, consents may or may not be executed to share healthcare data. A consent is executed when a sender shares PHI with the receiver to serve the purpose included in the consent. If a consent is executed, other information is stored in addition to the consent, such as the honest broker ID, the pertinent policy status certified by the broker, a timestamp, etc. Executed consent services can be provided: (i) sender-oriented, (ii) receiver-oriented, (iii) PHI-oriented, and (iv) purpose-oriented. For example, a patient may need to know the executed consents for a particular receiver. Figure <ref> shows receiver-oriented executed consents for Steve with senders and timestamps. Figure <ref> depicts purpose-oriented executed consents for treatment with sender, receiver, and timestamp (a simple filtering sketch of these oriented views is given below). §.§ Service Delivery to Patients Patients will interact with the system through interfaces like GUIs or apps supported by wallets like Coinbase and MetaMask for transaction signing and data access management. These wallets safeguard users' private keys and credentials. The system accommodates various user types, including those requiring tailored interfaces, such as seniors, physically disabled individuals, minors, and others. Healthcare providers may address the specific needs of these diverse users and can develop apps and software to provide services. 
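The sender-, receiver-, PHI-, and purpose-oriented views mentioned above reduce, at their core, to filtering the same list of consent records by one field, as in this illustrative sketch (the records and values shown are made up for the example):

```python
# Hypothetical executed-consent records filtered into oriented views.
def oriented_view(records, field, value):
    """Return all consent records whose `field` equals `value`."""
    return [r for r in records if r.get(field) == value]

executed = [
    {"sic_id": "SIC-001", "sender": "Donald", "receiver": "Steve",
     "phi": "PHI-1008", "purpose": "treatment", "timestamp": 1719964800},
    {"sic_id": "SIC-002", "sender": "Donald", "receiver": "Alice",
     "phi": "PHI-1003", "purpose": "research", "timestamp": 1720051200},
]

print(oriented_view(executed, "receiver", "Steve"))     # receiver-oriented service
print(oriented_view(executed, "phi", "PHI-1008"))       # PHI-oriented service
print(oriented_view(executed, "purpose", "treatment"))  # purpose-oriented service
```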
Patients' devices and apps are assumed to be secure against unauthorized access, and communication with the blockchain is also protected. § EXPERIMENTAL EVALUATION Three Ethereum Virtual Machine (EVM)-based blockchain test networks (Arbitrum, Polygon, and Optimism) are chosen for the experiments. We developed and deployed smart contracts for storing and retrieving PPA integrity and informed consent on the test networks. Ethereum's Remote Procedure Call (RPC) API services are employed for deploying smart contracts and performing transactions on these networks <cit.>. Utilizing public RPC endpoints eliminates the need to maintain a blockchain node for contract interaction and keeps resource usage (CPU, storage, bandwidth) on the local machine minimal. We used the MetaMask wallet to sign and authorize transactions, using ETH and MATIC faucet tokens as gas. Healthcare providers may invest in infrastructure such as blockchain nodes, web interfaces, and mobile applications for seamless service interaction between patients and healthcare systems. Storing informed consent on public blockchains like Ethereum incurs direct monetary costs. Patients, insurance companies, and others can split these costs, as they do for doctor visits, medications, and laboratory tests. The following discusses gas consumption and time requirements. §.§ Gas Consumption Gas is needed for any activity on the Ethereum network that writes data or changes the state of the blockchain. This work considers the costs of smart contract deployment and of function calls that write data to the blockchain network. A contract is deployed for each patient separately to manage consent-related queries efficiently. The cost of smart contract deployment is proportional to the size of the code <cit.>. This is a one-time cost for a single contract deployment. The cost of calling a function depends on how many times it is called and how much data is stored or changed on the blockchain network. Figures <ref>, <ref>, <ref>, <ref>, and <ref> show the contract deployment and consent storage costs in gas (tokens) and USD for the three test networks. §.§ Time Requirements Blockchain-based applications must account for block data writing and reading times. Writing time includes smart contract deployment and data addition. Table <ref> shows the writing time for various numbers of consents on the test networks. The reading time indicates the time required to get data from a block of the blockchain ledger. All smart contract read calls are gas-free. Table <ref> shows the test networks' reading times for various numbers of consents. The same smart contracts and consents are used for all test networks. Maintaining a local node can reduce reading time, since block data can then be accessed in real time; such a node continuously synchronizes with the blockchain network to keep its ledger data up to date. Providers can maintain local nodes for faster authorizations. § CONCLUSIONS Sharing patient health data is beneficial for improving medical care, diagnosis, and other essential services. However, keeping this information private and secure is important. Different policies from various authorities help ensure the privacy and security of this health data. Complying with these policies ensures that the safeguards are actually working. Obtaining patients' informed consent is also critical to protecting their privacy and giving them control over sharing their information. Patients need to fully understand how their data is shared. 
Patients should also feel confident that strong safeguards are in place to protect their data. Using smart contracts to manage patient consent is a promising way to securely and privately share health data. These systems let patients control their health records and agree to how doctors and others use them. Blockchain technology improves these systems by providing security, efficiency, decentralization, transparency, and immutability. This enhances the trustworthiness and accountability of healthcare data sharing among everyone involved. Looking forward, our objective is to provide functional mechanisms for the essential consent management operations needed for data sharing and for enhancing patient care and services. These operations include generating, modifying, withdrawing, expiring, and archiving consent. Improper consent can cause sensitive data disclosure or prevent patients from receiving services, so consent generation must be done carefully. It may be necessary to modify a given consent because of improper components, such as the receivers or purposes. In this situation, a modified new consent must be deployed, while the old consent must be moved to the archiving repository. § ACKNOWLEDGEMENTS This work was partially supported by the U.S. National Science Foundation under Grant No. 1822118 and 2226232, the member partners of the NSF IUCRC Center for Cyber Security Analytics and Automation – Statnett, AMI, NewPush, Cyber Risk Research, NIST, and ARL – the State of Colorado (grant #SB 18-086), and the authors' institutions. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation or other organizations and agencies.
http://arxiv.org/abs/2407.02897v1
20240703081800
Effects of different loading on the bifurcation of annular elastic rods: theory vs. experiments
[ "Matteo Gaibotti", "Davide Bigoni", "Arsenio Cutolo", "Massimiliano Fraldi", "Andrea Piccolroaz" ]
physics.class-ph
[ "physics.class-ph" ]
Effects of different loading on the bifurcation of annular elastic rods: theory vs. experiments Matteo Gaibotti, Davide Bigoni, Arsenio Cutolo, Massimiliano Fraldi, and Andrea Piccolroaz July 8, 2024 =================================================== § ABSTRACT The bifurcation problem of a circular Euler-Bernoulli rod subject to a uniform radial force distribution is investigated under three distinct loading conditions: (i.) hydrostatic pressure, (ii.) centrally-directed load, and (iii.) dead load. Previous studies on this apparently 'familiar' structural problem have yielded controversial results, necessitating a comprehensive clarification. This study shows that results previously labelled as 'correct' or 'wrong' simply refer to different external constraints, whose presence becomes necessary only for the two latter loads, (ii.) and (iii.). Moreover, the paper presents the first experimental realization of a circular rod subjected to centrally-directed loads. The experimental findings align with the theoretical predictions and demonstrate the exploitation of a new type of load acting on a continuous structural element. The feasibility of this load is demonstrated through the use of inextensible cables and opens the way to applications in flexible robotics when cables are used for actuation. § INTRODUCTION The in-plane bifurcation problem of circular elastic rods and arches, assumed axially-inextensible and loaded by hydrostatic pressure, is an old topic, which has attracted considerable attention in civil and mechanical engineering (see the initial works by Bresse <cit.> and Lévy <cit.> and, later, among many others, <cit.>). Driven by new applications to minimal surfaces <cit.> and to the biology of several different natural structures <cit.>, the issue has seen renewed interest. Radial and uniform loads leave an axially-inextensible circular rod undeformed and subject to a trivial state of pure normal compressive force until buckling occurs, usually in the form of an ovalization. However, initially identical load distributions may differ in the way they react to the deformation. In particular, hydrostatic pressure is just one type of uniform and radial load that a ring can experience. Specifically, the following three types of loads have been so far investigated for the circular rod <cit.>. (i) Hydrostatic pressure, which remains orthogonal to the tangent to the deformed configuration of the rod. Moreover, the resultant force acting on the elementary arc of the rod changes proportionally to a variation in its length (which cannot occur for axial inextensibility). (ii) Centrally-directed load, which remains directed towards the initial centre of the ring. Moreover, the resultant force acting on the elementary arc of the rod is independent of a variation in its length. This load can be visualized (and implemented in practice, as demonstrated in the present article) as several inextensible ropes pulling the rod and passing through a fixed point, coincident with the initial centre of the ring. (iii) Dead load, which remains directed along the normal to the rod in its undeformed configuration. Moreover, the resultant force acting on the elementary arc of the rod is independent of its deformation. All loads (i.)–(iii.) become critical for buckling at a sufficiently high intensity, and infinitely many bifurcations arise at increasing load values. 
The critical radial load Π_cr corresponding to bifurcation, occurring in all possible modes and under every constraint externally applied to the rod, can in any case be expressed as <cit.> Π_cr=k^2B/R^3 , where R is the radius of the circle defining the undeformed configuration of the rod, B=EJ its bending stiffness (equal to the product between the Young's modulus E and the second moment of inertia of its cross-section J), and k^2 is a dimensionless constant depending on the type (i.)–(iii.) of radial load, on the selected mode of bifurcation, and on the constraints applied to the rod (differences in external constraints have been considered in <cit.>). In particular, the following values have previously been reported: * k^2=3 for hydrostatic pressure (i.) <cit.>, * k^2=9/2 or k^2 ≈ 6.47 for centrally-directed load (ii.) <cit.>, * k^2 ≈ 0.701 or k^2= 4 for dead load (iii.) <cit.> . The latter values, (ii.) and (iii.), are controversial and are reported in the literature as 'correct' or 'wrong' <cit.>. The purpose of the present article is twofold: * first, to show that all the values for the buckling radial loads (ii.) and (iii.) presented so far are correct, but refer to different external constraints, imposed to prevent rigid-body displacements. In particular, while any system of statically-determined external constraints leaves the bifurcation problem under hydrostatic pressure (i.) unaffected, consideration of constraints becomes important for loads (ii.) and (iii.), because their application strongly changes the bifurcation loads and modes. Moreover, differently from the centrally-directed load (ii.), the dead load (iii.) makes the structure unstable with respect to rigid-body rotations, so that in this case external constraints cannot be avoided; * second, an experimental set-up is proposed to realize the load (ii.), showing that the experimental values of the critical load match the theory with accuracy. The realization of the centrally-directed load provides the design of a structure subject to a type of load that was proposed a long time ago but never achieved before. The scheme of the device designed to reproduce the centrally-directed load is reported in Fig. <ref>, together with an ancient toy based on a similar idea. The present article's results reiterate the importance of modelling external loads and clarify a controversial structural problem. Moreover, a new experimental strategy is introduced to attain centrally-directed loads. Though recently reconsidered <cit.>, centrally-directed loads have been only scarcely analyzed, but they are of interest in the design of flexible robotic arms driven by cables, or of pulley systems applied to deformable elements. § GOVERNING EQUATIONS FOR THE ANNULAR ROD Consider an inextensible and unshearable circular rod, characterized by a radius R, a bending stiffness B, and a Cartesian reference system with axes x_1 and x_2 centred at the centre O of the structure. The arc length ds=Rdθ is defined with respect to a polar coordinate system (r, θ). At every point of the rod, a tangential unit vector 𝐭_0 and a radial unit vector 𝐦_0=𝐭_0×𝐞_3 are introduced, where 𝐞_3 is the out-of-plane unit vector, Fig. <ref>. In the Cartesian frame of reference (x_1,x_2) described by the unit vectors 𝐞_1 and 𝐞_2, the tangent and normal unit vectors at a point on the rod assume the form 𝐭_0=-sinθ𝐞_1+cosθ𝐞_2 , 𝐦_0=cosθ𝐞_1+sinθ𝐞_2 . 
The displacement vector describing points belonging to the rod is 𝐮=u_θ 𝐭_0+u_r 𝐦_0 , where u_r and u_θ are the radial and tangential components with respect to the orthogonal unit vectors 𝐞_r=cosθ 𝐞_1 + sinθ 𝐞_2, 𝐞_θ=-sinθ 𝐞_1 + cosθ 𝐞_2, which define the radial and circumferential directions. The axial deformation ϵ, the cross-section rotation Φ and the change of curvature χ at every point of the rod are governed by <cit.> ϵ=u_r/R+∂u_θ/∂s, Φ=∂u_r/∂s-u_θ/R, χ=-∂Φ/∂s, respectively. Assuming the inextensibility of the rod, ϵ=0, and introducing the constitutive equation for in-plane deflection, it follows from (<ref>)_1 that u_r=-∂u_θ/∂θ, χ=M/B , where M is the bending moment internal to the rod. When external pressure is applied, the ring is only subject to a uniform internal compressive force N_0=-Π R, while both the shearing force T_0 and the bending moment M_0 are null. The equilibrium equations of any curved rod (not necessarily circular), subject to the load 𝐪, are (see details in <cit.>) ∂N_0/∂s+T_0/R=-𝐪·𝐭_0 , N_0/R-∂T_0/∂s=𝐪·𝐦_0 , ∂M_0/∂s=T_0(u_r/R+∂u_θ/∂s+1)-N_0(∂u_r/∂s-u_θ/R) , so that, assuming the curved configuration as reference in a relative Lagrangian description (u_r=u_θ=0), the material time derivative leads to the incremental equilibrium equations ∂Ṅ_0/∂s+Ṫ_0/R=-𝐪̇·𝐭_0 , Ṅ_0/R-∂Ṫ_0/∂s=𝐪̇·𝐦_0 , ∂Ṁ_0/∂s=Ṫ_0+Π R(∂u̇_r/∂s-u̇_θ/R) , where a superimposed dot denotes an increment, while the load increment, 𝐪̇, depends on the type of load (i)-(iii). The material time derivative of equation (<ref>)_2 and the use of the relations (<ref>)_2-3 yield ∂Ṁ_0/∂s=-B(∂^3u̇_r/∂s^3+(1/R^2)∂u̇_r/∂s) . Therefore, a substitution of equation (<ref>) into equation (<ref>)_3 allows all equations (<ref>) to be reduced to one equation describing the incremental response of a circular rod <cit.>, namely ∂^6u̇_θ/∂θ^6+(2+k^2)∂^4u̇_θ/∂θ^4+(1+2k^2)∂^2u̇_θ/∂θ^2+k^2u̇_θ+𝔖=0 , u̇_r + ∂u̇_θ/∂θ = 0 , where 𝔖=R^4/B(∂𝐪̇/∂θ·𝐦_0+2𝐪̇·𝐭_0) . The incremental load 𝐪̇ in equation (<ref>) is one of the incremental loads corresponding to (i)–(iii). These can be written as in equations (3.18)-(3.20)_2 derived and reported in <cit.>, leading to 𝐪̇ = -(Π/R)(∂^2u̇_θ/∂θ^2 + u̇_θ)𝐭_0 for hydrostatic pressure (i.), 𝐪̇ = -(Π/R)u̇_θ 𝐭_0 for centrally-directed load (ii.), and 𝐪̇ = 0 for dead load (iii.). A substitution of equations (<ref>) into equation (<ref>) yields 𝔖=-k^2(∂^2u̇_θ/∂θ^2+u̇_θ) for hydrostatic pressure (i.), 𝔖=-k^2 u̇_θ for centrally-directed load (ii.), and 𝔖=0 for dead load (iii.). The incremental internal axial force Ṅ_0, shearing force Ṫ_0, and bending moment Ṁ_0 in equations (<ref>) have the following form Ṅ_0=B/R^3(∂^5u̇_θ/∂θ^5+∂^3u̇_θ/∂θ^3)+Π(∂^3u̇_θ/∂θ^3+∂u̇_θ/∂θ), Ṫ_0=B/R^3(∂^4 u̇_θ/∂θ^4+∂^2 u̇_θ/∂θ^2)+Π(∂^2u̇_θ/∂θ^2+u̇_θ) , Ṁ_0=B/R^2(∂^3 u̇_θ/∂θ^3 + ∂u̇_θ/∂θ) . § BIFURCATION ANALYSIS Depending on the behaviour of the externally applied radial load during the deformation <cit.>, the following cases have to be analyzed. (i) For hydrostatic pressure, the governing equation is <cit.> ∂^6u̇_θ/∂θ^6+(2+k^2)∂^4u̇_θ/∂θ^4+(1+k^2)∂^2u̇_θ/∂θ^2=0 , and its general solution can be written as u̇_θ(θ)=a_1+b_1 θ+a_2 cosθ+a_3 sinθ+b_2 cosωθ+b_3 sinωθ , where a_1–a_3 and b_1–b_3 are integration constants and ω=√(k^2+1). (ii) For centrally-directed load, the governing equation is <cit.> ∂^6u̇_θ/∂θ^6+(2+k^2)∂^4u̇_θ/∂θ^4+(1+2k^2)∂^2u̇_θ/∂θ^2=0 , and its general solution can be written as u̇_θ(θ)=a_1+b_1 θ+b_2 cosω_1θ +b_3 sinω_1θ +b_4 cosω_2θ+b_5 sinω_2θ , where a_1 and b_1–b_5 are integration constants and ω_1=√(1+(k/2)(k+√(k^2-4))), ω_2=√(1+(k/2)(k-√(k^2-4))) . 
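As a quick sanity check of the solution reported above for the centrally-directed load (an illustrative computation, not part of the original analysis), one can verify symbolically that sin(ω_1 θ) satisfies the governing equation. The snippet below, which assumes the sympy library, evaluates the residual at k^2 = 9/2, for which ω_1 = 2:

```python
# Assumes the sympy library; verifies the residual of the governing equation for
# u = sin(omega_1 * theta) at k^2 = 9/2 (the first bifurcation, omega_1 = 2).
import sympy as sp

theta, k = sp.symbols("theta k", positive=True)
omega1 = sp.sqrt(1 + k * (k + sp.sqrt(k**2 - 4)) / 2)
u = sp.sin(omega1 * theta)

residual = (sp.diff(u, theta, 6) + (2 + k**2) * sp.diff(u, theta, 4)
            + (1 + 2 * k**2) * sp.diff(u, theta, 2))

value = residual.subs({k: sp.sqrt(sp.Rational(9, 2)), theta: sp.Rational(7, 10)})
print(sp.simplify(value))   # 0
```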
Note that for a fixed value of ω_1=ω_1^0>√(3), eq. (<ref>)_1 has only a unique solution k^0 for k. Then -k^0 solves eq. (<ref>)_2 for ω_2=ω_1^0. Finally, ω_1=ω_2 when k^2=4. In this particular case, the solution of the differential equation becomes u̇_θ(θ)=a_1+b_1 θ+(b_2+b_3 θ)cos√(3)θ +(b_4 + b_5 θ)sin√(3)θ , (iii) For dead load, the governing equation is <cit.> ∂^6u̇_θ/∂θ^6+(2+k^2)∂^4u̇_θ/∂θ^4+(1+2k^2)∂^2u̇_θ/∂θ^2+k^2u̇_θ=0 . and its general solution can be written as u̇_θ(θ)=a_2 cosθ+a_3 sinθ+b_1 cos kθ+b_2 sin kθ+b_3 θcosθ+b_4 θsinθ , where a_2–a_3 and b_1–b_4 are integration constants. Note that in the particular case k=1, the solution becomes u̇_θ(θ)=a_2cosθ+a_3sinθ+(b_1 θ^2+b_2 θ)cosθ +(b_3 θ^2+b_4 θ)sinθ . §.§ Effect of the boundary conditions on the bifurcation Boundary conditions are to be imposed on solutions (<ref>), (<ref>), and (<ref>). §.§.§ The role of rigid-body roto-translations on the equilibrium of the circular rod As it has been so far presented, the circular rod is free in the plane and can suffer, in principle, a rigid-body roto-translation. This displacement is governed by the constants a_1, a_2, and a_3 in equations (<ref>), (<ref>), (<ref>) and can be represented as u̇_θ=a_1+a_2cosθ+a_3sinθ , u̇_r=a_2sinθ-a_3cosθ , where a_1 corresponds to a rigid-body rotation, while a_2 and a_3 rule the vertical and horizontal rigid-body translations, respectively. However, not all the rigid-body displacements are compatible with the applied radial loads (i.)–(iii.), so that in some cases, work is produced during the rigid-body displacements. (i.) For hydrostatic pressure all rigid-body displacements do not produce any work (for the undeformed, but also for an arbitrarily deformed, configuration of structure), so that the expressions (<ref>) trivially satisfy the governing equation (<ref>). Therefore, in the bifurcation problem, constants a_1, a_2, and a_3 remain arbitrary in equation (<ref>) and any (strictly necessary) external constraint system, which eliminates rigid body motions (for instance a clamp or three rollers), can be applied without changing the bifurcation loads and modes. (ii.) For centrally-directed radial load only the rigid-body rotation a_1 does not produce work, trivially satisfying equation (<ref>), and thus remains undetermined in the incremental problem, equation (<ref>). However, it will be shown below that rigid-body translations always produce negative work, so that the structure will not move, even without constraints. The latter condition is compatible with certain external constraints (for instance three axial rollers inclined at angles 0, π/2 and π). In this way, k^2=9/2 is obtained. If the external constraints are changed, for instance introducing a clamp, certain bifurcation modes are excluded and the bifurcation load increases at k^2 ≈ 6.47. When a rigid-body translation is applied, Fig. <ref>, the centrally-directed load performs a non-null work. In particular, a rigid-body translation of a finite amount a<R is postulated for the ring, aligned parallel to the horizontal axis x_1, so that the centre of the circular rod is displaced from O to O^'. After this displacement, the resultant d𝐟 of the radial force Π applied on an elementary arch of length ds=Rdθ is d𝐟=-Π(cosθ+a/R)𝐞_1+sinθ 𝐞_2/√(a^2/R^2+2a/Rcosθ+1)ds. 
The work W(a) done by the centrally-directed load during the application of the rigid-body translation of amount a is obtained through a double integration of the scalar product of equation (<ref>) with 𝐞_1 as W(a)=-Π R^2 ∫_0^a/R(∫_0^2πcosθ+α/√(α^2+2αcosθ+1) dθ) dα . Recalling that a < R, the sign of the work may be estimated by considering the bounds ∫_0^πcosθ/1+α dθ + ∫_π^2πcosθ/1-α dθ + 2πα/1+α≤∫_0^2πcosθ+α/√(α^2+2αcosθ+1) dθ , and ∫_0^πcosθ/1-α dθ + ∫_π^2πcosθ/1+α dθ + 2πα/1-α≥∫_0^2πcosθ+α/√(α^2+2αcosθ+1) dθ , so that eventually 0<2πα/1+α≤∫_0^2πcosθ+α/√(α^2+2αcosθ+1) dθ≤2πα/1-α , and therefore a/R+log(1-a/R) ≤W(a)/2π R^2Π≤ -a/R+log(1+a/R) <0. It follows from the bounds (<ref>) that the work is always negative for compressive radial forces. It can be concluded that for compressive (for tensile) centrally-directed radial load, Π>0 (Π<0), the ring is stable (is unstable) to rigid-body translations, so that experiments on the ring are possible for Π>0 even without external constraints. (iii.) For dead radial load only the rigid-body translations a_2 and a_3 do not produce work, trivially satisfying equation (<ref>), and therefore remain undetermined in the incremental problem, equation (<ref>). It will be shown below that any rigid-body rotation always produces positive work for compressive radial load, so that the structure will move and this movement has to be eliminated with a constraint. The latter condition has to leave unaffected the involved bifurcation mode, so that the first bifurcation mode is obtained with a clamp, k^2≈ 0.701, while three axial rollers determine k^2=4. When a finite rigid-body rotation α is applied to the annular rod, Fig. <ref>, every point of its axis (determined by the angle θ) suffers the finite displacement 𝐮 𝐮(θ, α)=-R(1-cosα) 𝐞_r(θ) + Rsinα 𝐞_θ(θ) . The resultant d𝐟 of the radial force Π applied on an elementary arch of length ds is d𝐟=-Π ds 𝐞_r , thus the work done by the whole dead radial load associated with the rotation α becomes -Π R∫_0^2π𝐞_r·𝐮(θ,α) dθ=2π R^2Π(1-cosα) . It follows from equation (<ref>) that the work is always positive for the compressive radial load (or null in the trivial case α=2π). It can be concluded that for compressive (tensile) dead radial load, Π>0 (Π<0), the ring is unstable (stable) to rigid-body rotations, in analogy to a rigid rod subject to two equal and opposite dead forces at its ends. §.§.§ Circular rod: fully continuous bifurcation modes Solutions (<ref>), (<ref>), and (<ref>) and their derivatives are continuous functions of θ∈ [0, 2π], so that continuity of the structural element is enforced by requiring that the function assumed the same value in 0 and in 2π. In this Section, solutions are sought that respect the continuity of the incremental kinematic descriptors u̇_θ, u̇_r, Φ̇ u̇_θ(0)=u̇_θ(2π), u̇_r(0)=u̇_r(2π), Φ̇(0)=Φ̇(2π), and of the incremental internal forces, Ṁ, Ṫ, and Ṅ Ṁ(0)=Ṁ(2π), Ṫ(0)=Ṫ(2π), Ṅ(0)=Ṅ(2π). Therefore, an application eqs. (<ref>) and (<ref>), shows that continuity equations (<ref>) and (<ref>) become equivalent to ∂^n u̇_θ/∂θ^n(0) = ∂^n u̇_θ/∂θ^n(2 π), n=0,...,5, where n=0,1,2 for the continuity of the kinematic descriptors and n=3,4,5 for the internal forces. The solutions (<ref>)–(<ref>) show that, when present, all coefficients a_1, a_2, and a_3 remain unaffected by the continuity conditions (<ref>), because they represent rigid-body motions, which a-priory satisfy the continuity of any order. Therefore, only a limited number of eqs. 
(<ref>) are to be used, in particular, six conditions minus the number of constants a_i. The conditions which are not imposed are automatically satisfied. (i) For hydrostatic pressure, equation (<ref>) shows that b_1=0 and that [ [ cos 2πω -1 sin 2πω; sin 2πω -cos 2πω +1; ]] [ [ b_2; b_3 ]]=0, so that non-trivial solutions may exist when sin^2 ωπ = 0,    ⟹   ω. When ω is an integer, all the items in the matrix (<ref>) vanish, so that the constants b_2 and b_3 remain undetermined. Therefore, at bifurcation, a_1, a_2, a_3, b_2, and b_3 are all left arbitrary by the conditions of continuity (<ref>). The bifurcation modes, eq. (<ref>), become u̇_θ(θ)=a_1+a_2 cosθ+a_3 sinθ+b_2 cosθω+b_3 sinθω . Note that ω=1 is a solution of equation (<ref>) leading to k=0, a trivial condition which has to be disregarded, because it corresponds to rigid-body displacements. Therefore, the smallest value of critical load can be obtained from equation (<ref>) as ω=2, leading to k^2=3. (ii) For centrally-directed load, equation (<ref>) shows that continuity requires b_1=0. In addition, the continuity of u̇_θ up to its fifth derivative leads to an eigenvalue problem becoming singular when one of two independent conditions similar to eq. (<ref>) are satisfied, one involving b_2 and b_3 and the other b_4 and b_5, these respectively are sin^2 ω_1 π = 0, or sin^2 ω_2 π = 0, leading to integer values of ω_1 and ω_2. The two conditions (<ref>) are equivalent, so that bifurcation can be reduced to the request that ω_1 be an integer and the bifurcation modes, eq. (<ref>), becomes u̇_θ(θ)=a_1+b_2 cosθω_1+b_3 sinθω_1 . Note that the solutions ω_1 =1 and ω_2=1 of equations (<ref>) are to be disregarded as they lead to k=0, corresponding to a trivial bifurcation characterized by a rigid-body rotation governed by the arbitrary coefficient a_1. Additionally, the case k^2=4 corresponds to ω_1=ω_2, thus the corresponding general solution is given by eqn. (<ref>), which is not compatible with the required continuity conditions (<ref>). The smallest value of critical load can be obtained from equation (<ref>_1) for ω_1=2, leading to k^2=9/2. (iii) For dead load, equation (<ref>) shows that b_3=b_4=0, while sin^2 k π = 0 , leading to integer values for k. Therefore, at bifurcation load b_3=b_4=0, while a_2, a_3, b_1, and b_2 remain unprescribed. The bifurcation modes, eq. (<ref>), become u̇_θ(θ)=a_2 cosθ+a_3 sinθ+b_1 cos kθ+b_2 sin kθ . Note that the solution k =1 of equation (<ref>) is to be disregarded, because eqn. (<ref>) does not admit continuous solutions. As a conclusion, the smallest value for the critical load can be obtained from equation (<ref>) as k^2=4. The first three bifurcation modes corresponding to the above fully-continuous' solutions are reported in Fig. <ref>, for all types of investigated loads. All the bifurcation modes shown in the figure are double, so that one is depicted as blue and the other red. It should also be noted that the first mode of bifurcation can be obtained without external constraints only in the cases of hydrostatic pressure and centrally-directed loaded. The first mode for the dead load cannot be realized without a strong external constraint system, as detailed in the next section. §.§ External constraints In the presence of external constraints, the solutions corresponding to fully continuous bifurcation's modes may no longer be valid. In fact, constraints introduce discontinuities; for instance, at a clamp, all the internal forces and moments may jump. 
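Before turning to specific constraint systems, the fully continuous critical values quoted above (k^2 = 3, 9/2 and 4) can be recovered with a few lines of numerics from the integer-frequency conditions. The sketch below does this directly from the quoted definitions of ω and ω_1, discarding the trivial unit-frequency (rigid-body) solutions as discussed in the text.

```python
# Numerical sketch of the 'fully continuous' bifurcation conditions: the admissible
# frequency must be an integer, and the smallest non-trivial choices reproduce
# k^2 = 3 (hydrostatic), k^2 = 9/2 (centrally-directed) and k^2 = 4 (dead load).
import numpy as np
from scipy.optimize import brentq

# (i) hydrostatic pressure: omega = sqrt(k^2 + 1) integer; omega = 1 (k = 0) is trivial
k2_hydro = 2.0**2 - 1.0                                  # omega = 2  ->  k^2 = 3
print("hydrostatic         k^2 =", k2_hydro)

# (ii) centrally-directed load: omega_1(k) integer, with
#      omega_1 = sqrt(1 + (k/2)(k + sqrt(k^2 - 4))); omega_1 = 1 (k = 0) is trivial
omega1 = lambda k: np.sqrt(1.0 + 0.5*k*(k + np.sqrt(k*k - 4.0)))
k_star = brentq(lambda k: omega1(k) - 2.0, 2.0, 10.0)    # first admissible integer: 2
print("centrally-directed  k^2 =", k_star**2)            # -> 4.5

# (iii) dead load: k itself integer; k = 1 is excluded, so the first mode is k = 2
print("dead load           k^2 =", 2.0**2)
```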
When external constraints are present, the solutions (<ref>), (<ref>), and (<ref>) are valid only within the intervals of θ comprised between each pair of constraints, so that six integration constants are to be obtained for each interval, by imposing the relevant conditions. For instance, a pin enforces the displacement components to vanish for both connected intervals (four conditions), plus the continuity of rotation and bending moment (two conditions). In the following, the possibility of achieving a fully continuous bifurcation solution is scrutinized with a view to external constraints. §.§.§ (i) Hydrostatic pressure For hydrostatic pressure, the fully continuous solution (<ref>) contains all the rigid-body displacement components, constants a_1, a_2, and a_3. Therefore, any well-assigned system of external constraints, which is statically determinate, is compatible with all fully continuous bifurcation modes. For instance, three rollers, or two rollers and a pin, or a clamp, are all possible external constraints compatible with the attainment of all fully continuous bifurcation modes. In particular, the first mode becomes visible, while the attainment of higher-order modes requires the use of statically-indeterminate external constraints, selected in a proper way. However, the equilibrium neutrality of every possible deformed shape of the ring under pressure loading, implies that the first bifurcation load and mode can be obtained even in the absence of external constraints (for instance depressurizing a tube, Fig. 1 of <cit.>). §.§.§ (ii) Centrally-directed load When subject to centrally-directed load, the ring is in neutral equilibrium only under rigid-body rotations. Consequently, constraints restricting this movement, such as a movable clamp, do not affect bifurcation modes. However, this is not true for rigid-body translations, so that limiting these displacements influences the bifurcation loads and modes. It has been shown in Section <ref> that the equilibrium configuration of the circular rod is stable and, therefore, the first fully continuous mode of bifurcation can be realized even in the absence of external constraints. Generally, the bifurcation is sensitive to external constraints for centrally-directed load, even when these realize a statically-determined system. This is shown in Fig. <ref>, where different bifurcation modes are reported (critical values of k^2 are also included), corresponding to four constraint systems. From left to right, these are one clamp, a (vertically and horizontally) movable clamp plus a pin, a horizontal roller plus a pin, and a vertical roller plus a pin. The upper row of the figure reports the first bifurcation mode, while the second and third modes are sketched in the central and lower rows. The figure vividly shows that the lowest bifurcation load, k^2=9/2, reported in <cit.>, corresponds to the fully continuous bifurcation, which can be realized without external constraints, but also with a vertical roller and a pin. Changing the constraints varies the bifurcation loads, so that k^2 ≈ 6.769 is the first bifurcation mode for movable clamp plus pin, but corresponds to the second mode for clamp and for vertical roller plus pin. The loads k^2 ≈ 5.356 and k^2 ≈ 6.472 do not correspond to any higher bifurcation mode occurring for other constraint configurations. §.§.§ (iii) Dead load For the dead load, the fully continuous solution (<ref>) contains the two rigid-body displacement components, coefficients a_2 and a_3. 
The structure has to be externally constrained, because otherwise, the dead load would make the structure unstable to rigid-body rotations, Section <ref>. The bifurcation analysis becomes very sensitive to the specific system of external constraints. This is shown in Fig. <ref>, similar to Fig. <ref>, but with a further constraint system where four rollers are used (last column on the right). The four rollers define a statically undetermined situation, which is included now because in this way the first fully continuous bifurcation mode, k^2=4, can be realized. All the other constraint configurations lead to smaller bifurcation loads, initiating with that corresponding to a clamp or a movable clamp plus pin, k^2≈ 0.701 (the smallest bifurcation load pointed out in <cit.>) and continuing with a roller plus pin and three rollers k^2≈ 3.271. Note also that the first fully continuous mode corresponds to the second mode for all constraint systems, except the four rollers. As pointed out in <cit.>, the bifurcation load k^2=4, previously derived by several authors, remains meaningless without a specification of the external constraints applied to prevent rigid-body displacement and rotational instability. Hence, the value reported in <cit.> only refers to the continuous solution and can be obtained by imposing a strong external constraint, as is the case of the four rollers. The value k^2 ≈ 3.271 for roller plus pin constraint was obtained in <cit.> to correct the wrong values k^2≈ 3.265 provided in <cit.>. The fact that there is a bifurcation load k^2≈ 1.734, intermediate between k^2≈ 0.701 and k^2≈ 3.271, passed unnoticed in <cit.>. § EXPERIMENTAL SET-UP FOR CENTRALLY-DIRECTED LOAD To validate the theoretical results obtained for the bifurcation of a thin ring subject to centrally-directed load and to realize a new type of force distribution never attempted so far, an experimental setup was conceived, designed, realized, and tested in a collaboration between the Laboratory of Integrated Mechanics and Imaging for Testing and Simulations (LIMITS, University of Napoli) and the Instability Lab (University of Trento). A ring with radius 120.75 mm and rectangular (1.3 × 10.2 mm^2) cross-section, Fig. <ref> A, was manufactured through 3D printing additive technology (Stratsys Objet 30 Pro), by employing the thermoplastic material Acrylonitrile Styrene Acrylate (ASA), a set-up minimizing imperfections, so that possible out-of-roundness have been estimated (through a camera-aided procedure) to be smaller than 10^-4. The elastic stiffness of the material was preliminary measured by manufacturing a rod with a prescribed geometry, to be mechanically tested using the electromechanical machine TA Instruments ElectroForce (200 N 4 motor Planar Biaxial Test Bench) in a cantilever configuration. In particular, its Young's modulus, which resulted to be about 2500 MPa, was determined under bending produced by imposing a dead loading at the free end. The Young modulus was found in agreement with the value declared in the technical datasheet of the material that feeds the 3D printer (see Fig. <ref>). With another use of additive manufacturing, combined with CAD-based geometry design, components were realized to produce the experimental set-up illustrated in Fig. <ref>, which was stabilized by locking it inside a hole made in the central part of a wooden table. 
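As a rough numerical companion to this geometry, the sketch below estimates the bending stiffness of the printed ring and the corresponding critical radial loads for the configurations tested later. Two assumptions are made explicit in the comments: the bending stiffness is taken as B = E w t^3/12 with the 1.3 mm side as the in-plane (radial) thickness, and the dimensionless load is taken as k^2 = Π R^3/B, a choice that is consistent with the critical values quoted in the following paragraphs.

```python
# Back-of-the-envelope estimate of the expected critical loads for the ring tested here.
# Assumptions (not stated explicitly at this point in the text): bending stiffness
# B = E*w*t^3/12 with t = 1.3 mm the in-plane (radial) thickness and w = 10.2 mm the
# width, and dimensionless load k^2 = Pi*R^3/B, which reproduces the quoted values.
E = 2500.0          # Young's modulus of the printed ASA, MPa = N/mm^2
t, w = 1.3, 10.2    # cross-section dimensions, mm
R = 120.75          # ring radius, mm

I = w * t**3 / 12.0          # second moment for in-plane bending, mm^4
B = E * I                    # bending stiffness, N mm^2

cases = [("hydrostatic (k^2 = 3)", 3.0),
         ("centrally-directed, free ring (k^2 = 9/2)", 4.5),
         ("centrally-directed, clamped ring (k^2 = 6.472)", 6.472)]
for label, k2 in cases:
    Pi_cr = k2 * B / R**3    # critical radial load, N/mm
    print(f"{label:47s} Pi_cr = {Pi_cr:.4f} N/mm")
# -> about 0.0080, 0.0119 and 0.0172 N/mm, in line with the values quoted in the text
```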
To reduce friction effects at the interface between the elastic ring and its support during the experiments, an ultra-high-molecular-weight polyethylene (UHMHPE) surface was mounted on the table. The centrally-directed load was reproduced by attaching 12 equally spaced cables to the ring. The number of cables used in the experimental setup was selected based on the results obtained by Albano and Seide for both cases of normal <cit.> and centrally directed <cit.> concentrated forces, distributed symmetrically along an initially circular rod. They considered the distortion of the configuration due to the discreteness of the loads and analyzed the bifurcation from that state. They showed that, when the loads are at least 5, the average radial load for bifurcation does not differ substantially from that corresponding to the application of a uniform radial load, which leaves the initial configuration undistorted. In particular, for centrally directed radial forces, 12 equally spaced concentrated loads yield a buckling coefficient k^2=4.505, almost coincident with the value k^2=9/2 corresponding to the radial uniform load. The simultaneous application of multiple forces, all of equal intensity, was obtained by designing the device shown in Fig. <ref>, where a periodic arrangement of 12 pulleys (introduced to minimize friction) allows to convey forces towards the centre of the ring and then downwards through radially-oriented nylon fishing cables (ϕ=0.6 mm, F_max= 260 N). The setup ensures that the cables connected to the ring and the pulleys are all lying on the same horizontal plane. The centering of the ring and cables was checked with a camera-aided procedure. All parts, including cables, were lubricated with a lithium grease to reduce friction. The symmetrical distribution of the load among the 12 cables was obtained by pouring water through the central hole at the top of the system, from which the water is channelled and brought to 12 independent buckets, through 12 rubber tubes, progressively filling the tanks. The geometry of each bucket was sized to initiate tests with a prescribed pre-load still below the instability of the ring (by locating iron weights inside the buckets in a specifically designed housing), then allowing to fill these cylindrical containers up to 40 gr of water. As illustrated in Fig. <ref>, the loading process was executed by controlling the amount of water poured with a graduated dosing glass into the buckets. Experiments were recorded during their whole duration, by positioning a camera on the top to follow the different deformation stages of the ring as the applied weight increased, until the the first buckling occurred and the post-buckling initiated. Two situations were investigated, one in which the ring is left free from external constraints (k^2=9/2, bifurcation mode shown in Fig. <ref>, central part on the left) and the other in which the ring has been constrained with an external clamp (k^2 ≈ 6.472, bifurcation mode shown in Fig. <ref>, upper part on the left). Therefore, two rings with nominally the same characteristics were manufactured and connectors with lobster clasps for each cable were used to reduce manual operations. Adopting the set of material and geometrical parameters reported in panel A of Fig. <ref>, from equation (<ref>), the expected value of buckling radial load is Π_c, r≈ 0.0119 N/mm, corresponding to k^2=9/2. Data reported in Fig. 
<ref> (B.1) show that the experiment started from an initial radial load 0.085 N/mm (k^2=3.2054), while bifurcation was found at 0.012 N/mm (k^2=4.5253), and the post-critical behaviour was clearly visible at 0.013 N/mm (k^2=4.9024), where the right panel is in fact representative of the progression of the buckling shape. The experimental results, in terms of both buckling mode (a simple ovalization) and force-equivalent critical radial load (k^2=4.5253 instead of k^2=4.5), show an excellent agreement with the theoretical predictions, as highlighted by the values reported in Fig. <ref>. The experimental results confirm that the bifurcation for centrally-directed load, k^2=9/2, occurs at a remarkably greater intensity than that for hydrostatic pressure, k^2=3, to which a value Π=0.0079 N/mm for the radial load would correspond. Confirmation of theoretical outcomes in comparison with experimental findings, both in terms of critical pressure and (first) deformation mode, were also obtained in the case of the clamped ring, as illustrated in Fig. <ref> (B.2). From equation (<ref>), the expected value of buckling radial load for the ring clamped at a point is Π_cr≈ 0.017 N/mm, corresponding to k^2≈ 6.472. For the clamped ring, the experiment started from an initial radial load of 0.015 N/mm (k^2=5.6567), while the bifurcation was found at 0.017 N/mm (k^2=6.4109), and the post-critical behaviour was visible at 0.019 N/mm (k^2=7.1651), the right image in <ref> (B.2) showing the progression of the ring buckling shape for the case at hand. The deformed shapes exhibited by the ring at critical and post-critical loads can be compared with the undeformed shape highlighted by the green dotted circles reported in Fig. <ref>. § CONCLUSIONS The bifurcation problem of a circular Euler-Bernoulli rod subject to a uniform radial load is highly sensitive not only to how the load responds to the buckling deformation but, except for the hydrostatic pressure, also to the applied external constraints, when these define a statically-determined system. Different constraints can, in fact, change the critical load by an order of magnitude for centrally-directed and dead loads. This evidence reconciles previous apparently contradictory statements. A new experimental setup demonstrates the feasibility of applying a centrally-directed load to an annular rod. The experiments not only confirm the theoretical predictions but also motivate a new strategy for the design of cable-guided deformable structures. § ACKNOWLEDGEMENTS The present article is dedicated to Professor Giuseppe, Peppe', Saccomandi who delighted us in several years of sincere friendship, with his enthusiasm, passion for science, and willingness to share ideas in the field of mechanics and beyond. All the authors acknowledge funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme, grant agreement no ERC-ADG-2021-101052956-BEYOND. A.C. has also been supported by the Project of National Relevance PRIN2022 grant no P2022XLBLRX and PRIN2022PNRR grant no P2022MXCJ2, funded by the Italian MUR. M.F. additionally thank financial support from MUR through the projects FIT4MEDROB, PNC0000007 (ID 62053) and AMPHYBIA (PRIN-2022ATZCJN). ieeetr albano1973bifurcation,oran1969buckling
http://arxiv.org/abs/2407.02212v1
20240702122146
Vortex Rings in Event-by-Event Relativistic Heavy-Ion Collisions
[ "David Dobrigkeit Chinellato", "Michael Annan Lisa", "Willian Matioli Serenone", "Chun Shen", "Jun Takahashi", "Giorgio Torrieri" ]
nucl-th
[ "nucl-th", "hep-ph", "nucl-ex" ]
Instituto de Fisica Gleb Wataghin, Universidade Estadual de Campinas, Campinas, Brasil Department of Physics, The Ohio State University, Columbus, Ohio, USA Instituto de Fisica Gleb Wataghin, Universidade Estadual de Campinas, Campinas, Brasil chunshen@wayne.edu Department of Physics and Astronomy, Wayne State University, Detroit, Michigan, 48201, USA RIKEN BNL Research Center, Brookhaven National Laboratory, Upton, NY 11973, USA Instituto de Fisica Gleb Wataghin, Universidade Estadual de Campinas, Campinas, Brasil Instituto de Fisica Gleb Wataghin, Universidade Estadual de Campinas, Campinas, Brasil § ABSTRACT We present event-by-event simulations for central asymmetric light+heavy and Au+Au collisions to investigate the formation and evolution of vortex-ring structures in the longitudinal flow velocity profile. The production-plane polarization of Λ hyperons, defined w.r.t. the Λ momentum and the beam, can track the “vortex-ring” feature in the event, a characteristic vortical structure generated by longitudinal flow gradients. We make comprehensive model predictions for the rapidity-dependent vortex-ring observables for different collision system sizes at = 200 and 72 GeV. Our predictions at the latter energy can be explored in the future LHCb fixed-target experiment at the Large Hadron Collider. Vortex Rings in Event-by-Event Relativistic Heavy-Ion Collisions Giorgio Torrieri July 8, 2024 ================================================================ § INTRODUCTION One of the most common manifestations of hydrodynamics in everyday physics is the appearance of vortical structures originating from gradients <cit.>. Most people are familiar with “smoke rings”, in both air and water, generated by localized fast-moving currents embedded in a larger medium. Considering the seeming explanatory power of hydrodynamics in heavy ion collisions – including small-on-large systems such as p+Au collisions –, it is interesting to look for vortex ring phenomena in such systems. The appearance of collective phenomena in proton+nucleus collisions <cit.> makes these systems ideal laboratories for the search for such a phenomenon. The gradients produced by a “bullet” passing through a larger medium mimic the conditions associated with vortex ring formation. We have argued in Ref. <cit.> that the production-plane polarization of Λ hyperons can be used to measure the vortex-ring-like structure in the flow profile of produced deconfined matter (an analogous problem with jets was examined in <cit.>). We initialized a three-dimensional viscous hydrodynamic simulation with a completely central collision (with impact parameter b = 0 fm) between a smooth proton and a smooth gold nucleus. This work extends our simulations to more realistic conditions, including lumpy initial conditions and impact parameter fluctuations in event-by-event hydrodynamics. These simulations produce similar flow profiles; the vortex ring formation and observable are robust. We present systematic studies on this vortex-ring observable for different light+heavy asymmetric collisions at √(s_ NN)=200 and 72 GeV. The system size scan allows us to study how the vortex ring observable approaches the limit of axially symmetric collision systems, such as Au+Au collisions. Our model predictions at = 72 GeV would motivate future measurements in the LHCb fixed-target experiments with the System for Measuring Overlap with Gas (SMOG) setup. 
The simulations at low collision energy also allow a quantitative comparison between our collective-dominated model and the earlier experimental results interpreted via QCD spin-orbit couplings <cit.>. Experimentally, the vortex ring structure can be quantified as <cit.>, ℛ_Λ^ẑ≡ 2 ⟨S⃗_Λ·(ẑ×p⃗_Λ)/|ẑ×p⃗_Λ|⟩_ϕ_Λ, where ẑ≡ (0, 0, 1) points in the direction of the light-ion beam, and the average is taken over the Λ momentum azimuthal angle ϕ_Λ about the beam. The vectors S⃗_Λ and p⃗_Λ are the spin and momentum vectors for the Λ hyperons in the lab frame, respectively. The Λ's spin polarization can be computed using the following the Cooper-Frye like formula from a vortical medium <cit.>, S^μ(p)= - 1/8mϵ^μρστω_σρ p_τ, where ω_σρ≡ -1/2 [∂_σ (u_ρ/T) - ∂_ρ (u_σ/T)] is the thermal vorticity tensor. The ... operator is defined as, X≡∫ dΣ_λ p^λ n_F (1 -n_F) X/∫ dΣ_λ p^λ n_F, where n_F is the Fermi-Dirac distribution, and dΣ_μ is the normal vector on the hydrodynamic freeze-out hypersurface. Equation (<ref>) builds the connection between the vortex ring observable ℛ_Λ^ẑ with the production plane fluid vorticity density at freeze-out ℛ_fluid^ẑ≡ϵ^μνρσΩ_μ n_νẑ_ρ u_σ/|ϵ^μνρσn_νẑ_ρ u_σ|, where u_σ is the fluid velocity and Ω^μ≡ϵ^μαβγω_αβ u_γ is the vorticity vector orthogonal to u^μ. The unit vectors ẑ^ρ = (0, 0, 0, 1) and n^μ points to the normal direction of the freeze-out surface, n^μ≡ dΣ^μ/|dΣ^μ|. This paper will be laid out as follows. Section <ref> will provide a concrete description of the (3+1)D dynamical model and computations of the Λ's polarization vector. In Section <ref>, we will discuss the ring observables in detail. We will conclude with some closing remarks in Sec. <ref>. In this paper, we use the conventions for the metric tensor g^μν = diag(1, -1, -1, -1) and the Levi-Civita symbol ϵ^0123 = 1. § THE MODEL FRAMEWORK This work employs the geometric-based 3D initial conditions developed in Refs. <cit.> connecting with a hydrodynamics + hadronic transport model to carry out event-by-event simulations. The transverse nuclear thickness functions for the two incoming nuclei are computed using T_A(B)(x⃗_⊥) = ∑_i 1/2πω^2exp(- |x⃗_⊥ - x⃗_⊥, i |^2 /2ω^2). Here, the summation goes over all participant nucleons in the colliding nucleus. We assume a 2D Gaussian profile for each nucleon in the transverse plane with a width ω as a model parameter[The hot spot transverse size w should not be confused with ω_μν and its modulus]. We follow the 3D initial model <cit.> to map the event-by-event nuclear thickness functions to the collision systems' initial energy-momentum tensor T^μν at the starting time of hydrodynamic simulation τ = τ_0 = 1 fm/c. This initial-state model ensures the system's orbital angular momentum is conserved when mapping the energies and momenta of colliding particles to hydrodynamic fields. We assume the initial energy-momentum current takes the following form, T^ττ = e(x⃗_⊥, η_s) cosh(y_L(x⃗_⊥)), T^τη = 1/τ_0 e(x⃗_⊥, η_s) sinh(y_L(x⃗_⊥)). We assume there is no transverse flow at τ = τ_0, T^τ x = T^τ y = 0. The system's longitudinal flow can be parameterized as y_L(x⃗_⊥) = f y_CM(x⃗_⊥), where the parameter f ∈ [0, 1] controls how much of the initial net longitudinal momentum is attributed to the flow velocity. The f = 0 case recovers the well-known Bjorken flow profile, y_L = 0 in the Milne coordinates. The center of mass rapidity y_CM is determined by the nuclear thickness functions in every transverse position, y_CM(x⃗_⊥) = arctanh[T_A - T_B/T_A + T_Btanh(y_beam) ]. 
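To make the role of the parameter f concrete, the following toy sketch evaluates the quantities defined above — the participant thickness functions, y_CM, y_L = f y_CM and the initial T^ττ, T^τη — at a few transverse positions. The hot-spot width, the participant coordinates and the use of √(T_A T_B) as the local energy scale are illustrative choices, not the calibrated model parameters.

```python
# Toy sketch of the initial longitudinal-flow prescription: Gaussian participant
# thickness functions fix y_CM(x_perp), and f in y_L = f*y_CM controls how much of the
# local net longitudinal momentum is put into flow rapidity. All numbers illustrative.
import numpy as np

w = 0.5                                    # hot-spot width (fm), illustrative value
snn, m_N = 200.0, 0.938                    # GeV
y_beam = np.arccosh(snn / (2.0 * m_N))     # beam rapidity
tau0 = 1.0                                 # hydro starting time (fm/c)

def thickness(x, y, centers):
    """Participant thickness: a sum of 2D Gaussians, one per participant nucleon."""
    c = np.asarray(centers, dtype=float)
    r2 = (x - c[:, 0])**2 + (y - c[:, 1])**2
    return np.sum(np.exp(-r2 / (2.0 * w**2))) / (2.0 * np.pi * w**2)

# toy p+Au-like configuration: one projectile participant, a few target participants
proj = [(0.0, 0.0)]
targ = [(0.3, 0.1), (-0.4, 0.2), (0.1, -0.5), (0.6, 0.4), (-0.2, -0.3)]

for x in (0.0, 0.5, 1.0):                  # transverse position (fm)
    TA, TB = thickness(x, 0.0, proj), thickness(x, 0.0, targ)
    y_cm = np.arctanh((TA - TB) / (TA + TB) * np.tanh(y_beam))
    e_loc = np.sqrt(TA * TB)               # local energy scale (normalization dropped)
    for f in (0.0, 1.0):
        y_L = f * y_cm                     # longitudinal flow rapidity
        T_tt = e_loc * np.cosh(y_L)        # ~ T^{tau tau}
        T_te = e_loc * np.sinh(y_L) / tau0 # ~ T^{tau eta}
        print(f"x = {x:.1f} fm, f = {f:.0f}: y_CM = {y_cm:+.2f}, "
              f"T^tt = {T_tt:.3f}, T^teta = {T_te:+.3f}")
```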
The local energy density in Eqs. (<ref>) and (<ref>) is parametrized as e (x⃗_⊥, η_s, y_CM,f) = 𝒩_e(x⃗_⊥) exp[- (|η_s - (y_CM(1 - f)) | - η_0)^2/2σ_η^2 ×θ(|η_s - (y_CM (1- f)) | - η_0)], where η_s is the spacetime rapidity and f parametrizes the radial gradient of the longitudinal flow (see <cit.> for a discussion). As shown, f=0 implies everything depends on η-y_CM. The normalization factor 𝒩_e(x⃗_⊥) is fixed by the total incoming collision energy, and it scales with √(T_A(x⃗_⊥) T_B(x⃗_⊥)) at high energies <cit.>. The values of the model parameters are listed in Table <ref>. The initial baryon density distribution is set up as in Ref. <cit.>. The initial energy-momentum tensor is propagated hydrodynamically with lattice QCD EoS <cit.> using the numerical code <cit.>, based on the Denicol-Niemi-Molnar-Rischke (DNMR) hydrodynamic model <cit.>. This work only considers shear viscous effects and uses a constant QGP specific shear viscosity η T/(e + P) = 0.08. Particlization is performed on a constant energy density hyper-surface with e_ sw = 0.5 GeV/fm^3 identified by the algorithm <cit.>. Individual hyper-surface cells contain all the ingredients needed to calculate the ring polarization observable according to Eq. (<ref>). We will explore the dependence on model parameters in Sec. <ref>. The numerical simulations are carried out using the framework. Since the ring observable in Eq. (<ref>) is driven by longitudinal velocity and density gradients, both the hotspot transverse size w and f are expected to influence it. As already discussed in <cit.>, the vortical structure is sensitively related to f because deviation from the limit of absolute transparency in the Bjorken picture also quenches longitudinal vorticity. However, longitudinal vorticity is also quantitatively driven by gradients sensitive to the transverse in-homogeneity scale, which, in the Glauber model, is determined by the nuclear size <cit.>. § RESULTS §.§ Vortex rings at the top RHIC energy First, we will calibrate our model to some experimental measurements that characterize the global properties of the collision systems. The charged hadron dN/dη is shown in Fig. <ref> for different collision systems at = 200 GeV. Our calculations give a qualitative agreement to the PHENIX and PHOBOS measurements <cit.>. However, quantitatively, the asymmetry is smaller in this initial-state model than in the PHENIX measurements for small systems. We note that the global longitudinal multiplicity is dependent weakly on model parameter f, which controls the magnitude of the initial longitudinal flow. Figure <ref> shows the observable (defined in Eq. (<ref>)) as a function of Λ's pseudo-rapidity for several collision systems at = 200 GeV. Comparing panels (a) and (b), we find that the observable shows a strong sensitivity to the amount of initial-state longitudinal flow used in the model. The sensitivity increases as the collision systems become more and more asymmetric. The values of are about a factor of 10 different in the two sets of simulations for central p+Au collisions. Therefore, a measurement of can serve as a direct probe for the system's initial-state longitudinal flow in asymmetric collisions. A collision system scan of this observable, as shown in Fig. <ref>, would be a valuable tool to reveal the early-stage stopping dynamics in relativistic heavy-ion collisions. With a strong initial-state longitudinal flow (f = 1), the (η) imprints the early-stage flow vorticity pattern from the initial collision configurations. 
The approximately constant within |η| < 3 shows the light ion drills through the heavy nucleus. For the scenario with f = 0, the collision systems develop fluid vorticity from zero by the local pressure gradients. Such hydrodynamic response to the geometry develops fluid vorticity slowly, resulting in a smaller effect on the Λ's polarization. The rapidity-odd (η) in Au+Au collisions is a signature of the fireball's transverse expansion <cit.>. As the collision systems become more asymmetric, the values of stay positive within |η| < 3 for both sets of simulations (f = 0, 1). These results suggest that the longitudinal expansion away from the Bjorken flow is stronger than the fireball's transverse expansion in small systems, such as p+Au and ^3He+Au collisions. Now, we would like to understand the transverse momentum p_T dependence of the observable. Based on Eqs. (<ref>) and (<ref>), we can show that (p_T) ∝1/m(E ω^z σ - p^z ω^t σ)p_σ/p_T + m/p_Tω^tz ∝1/m (E ω^z i_T - p^z ω^t i_T)p_i_T/p_T - p_T/mω^tz, where the index i_T sum over the transverse coordinates i_T = x, y. With a symmetric rapidity acceptance, the term proportional to p^z should vanish. Then (p_T) ∝E/m(p̂^x ω^xz + p̂^y ω^yz) - p_T/mω^tz, where the unit 2D vector p̂^i_T≡ p^i_T/p_T with p_T = √(p_x^2 + p_y^2). Eq. (<ref>) shows that (p_T) scales linearly with p_T at large transverse momenta, which is observed in our calculations, shown in Fig. <ref>. In the limit of p_T → 0, (p_T) →(p̂^x ω^xz + p̂^y ω^yz). Our calculations show (p_T) → 0 after the · average defined in Eq. (<ref>). For small systems like p+Au and ^3He+Au collisions, the two sets of simulations (f=0 and f=1) give opposite slopes of (p_T) with p_T, which indicate the competition between the first and second terms in Eq. (<ref>). The first term in Eq. (<ref>) is related to the vortex ring pattern, which is the reason we see a positive slope of (p_T) at high p_T for the strong initial longitudinal flow case (with f = 1). For the other limit f = 0, the second term in Eq. (<ref>) dominates, which changes the slope of (p_T) to negative. Therefore, the measurement of (p_T) can provide information about the relative size between different components of the thermal vorticity tensor in the collision system. Figure <ref> further shows the (p_T) in three different rapidity windows for p+Au collisions. In the strong initial longitudinal flow case f = 1, the vortex ring pattern for the thermal vorticity stays roughly constant across rapidity. This flow pattern results in the almost rapidity independent (p_T) in panel (a). For the scenario with no initial longitudinal flow (f = 0), the rapidity dependence of (p_T) is much more complex. Our result reinforces the production plane polarization and the smoke ring variable as probes of longitudinal transparency and collective behaviors in small systems. §.§ Vortex rings in the LHCb fixed-target SMOG experiment Now, we extend our simulations to asymmetric collision systems that are accessible in the upcoming LHCb fixed-target experiment with the System for Measuring Overlap with Gas (SMOG) setup. Figure <ref> shows the charged hadron pseudo-rapidity distributions for 0-5% central p+Pb and ^4He+Pb collisions at = 72 GeV. Our results show the same dependence as those at 200 GeV in Fig. <ref>. Figure <ref> shows the pseudo-rapidity dependence of the (η) in central p+Pb and ^4He+Pb collisions at = 72 GeV for two initial longitudinal flow scenarios. 
With strong initial longitudinal flow (f = 1), the small system results at = 72 GeV agree qualitatively with the those at 200 GeV in Fig. <ref>a. The vortex ring structure extends over several units in rapidity in the Pb-going direction. In the no initial longitudinal flow case (f = 0), Fig. <ref>b shows that the maximum (η) in central p+Pb and ^4He+Pb collisions at = 72 GeV could be about twice of the values in the similar small systems at 200 GeV, reflecting more non-trivial longitudinal dynamics in collision systems at the lower energy. The LHCb SMOG experiment and the STAR forward spectrometer could perform such measurements at RHIC and LHC energies, as lower-energy fixed target runs would make the energy scan of this observable accessible. Figures <ref> and <ref> further show the p_T-differential (p_T) for small collision systems at the LHCb fixed-target collision energy. The results show a similar p_T dependence at 200 GeV discussed above. Overall, the potential measurements as functions of rapidity and transverse momentum at the LHCb fixed-target experiment would be a new observable for elucidating the collective nature of the produced collision systems. §.§ Model dependence of To systematically explore the new observables, we introduce variations in the model parameters and identify how the proposed observables depend on these aspects. Figures <ref> and <ref> show how the (η) and (p_T) vary with different model parameters in the simulations. We find that has a strong sensitivity on the initial hotspot's transverse size w. Small hotspots result in large temperature gradients in the early stages of the collisions and contribute to the thermal vorticity in the system. Although both a small hotspot size w and a large f in the model would result in large (η), the p_T-differential dependence in Fig. <ref> can disentangle the two model parameter. Our prediction suggests a negative slope of (p_T) for no initial longitudinal flow case (f = 0). Panels b, c, d in Figs. <ref> and <ref> show the model variations on the starting time of hydrodynamics τ_0, QGP specific shear viscosity η/s, and freeze-out energy density e_sw. We find that weakly depends on τ_0 and η/s in 0-5% p+Au collisions at 200 GeV. The value of is smaller with a lower freeze-out energy density e_sw. A lower freeze-out energy density allows a longer fireball lifetime and more time for the system's thermal vorticity to relax to small values. The high p_T slope of (p_T) shows a strong sensitivity to the value of e_sw used in the simulations. §.§ Additional gradient-induced polarization In addition to the thermal vorticity, the Λ hyperons' polarization also receives contribution from the thermal shear tensor and gradients of μ_B/T <cit.>, S_SIP(type I)^μ(p) = - 1/4mϵ^μρστ1/p · ut̂_ρξ_σλ p^λ p_τ, S_SIP(type II)^μ(p) = - 1/4mϵ^μρστ1/p · u u_ρξ_σλ p^λ_⊥ p_τ, S_μ_BIP^μ(p) = - 1/4mϵ^μρστT/p · u u_ρ∂_σ(μ_B/T) p_τ. Here, the thermal shear tensor ξ^σλ≡1/2[∂^σ(u^λ/T) + ∂^λ(u^σ/T)]. In the type I shear-induced polarization (SIP), the unit vector t̂^ρ = (1, 0, 0, 0) <cit.>. In the type II SIP, p^λ_⊥ = p^λ - u^λ (p · u) is the momentum vector transverse to the flow velocity <cit.>. Using Eq. (<ref>), we can obtain their contribution to the (p) observables as follows, (SIP(type I)) ∝p_λ/m (p · u) [- ξ^z λ p_T - ξ^i_T λp̂_i_T p^z ] (SIP(type II))∝ ⟨p_λ, ⊥/m (p · u){- ξ^z λ [p^t (u^i_Tp̂_i_T) + u^t p_T] . 
- ξ^i_T λp̂_i_T (u^t p^z - u^z p^t) + ξ^t λ [(u^i_Tp̂_i_T) p^z + u^z p_T] }⟩ (μ_BIP) ∝ ⟨T/m (p · u){ -∂^z (μ_B/T) [p^t (u^i_Tp̂_i_T) + u^t p_T] - p̂_i_T∂^i_T(μ_B/T) (u^t p^z - p^t u^z) + ∂^t (μ_B/T) [(u^i_Tp̂_i_T) p^z + u^z p_T] }⟩ Here, the index i_T runs over the transverse coordinates, i_T = x, y and p̂^i_T = p^i_T/p_T is the unit 2D vector for particle's transverse momentum. Figure <ref> shows the numerical results of these gradient-induced polarization contributions to the ring observable . We observe that the shear-induced polarization (type I) has a large contribution to , especially at large rapidity regions. While the other SIP formulation (type II) gives small corrections to the on top of that from the thermal vorticity. Our result indicates a substantial theoretical uncertainty from the SIP contribution to the observable vortex ring. This result demonstrates that measurements of the vortex ring observable can differentiate the form of symmetric shear contribution to Λ's polarization. Figure <ref>b shows that the contributions from shear-induced polarization to the observable grow with Λ's transverse momentum. Eqs. (<ref>) and (<ref>) indicate that the SIP contribution scales linearly with p_T, which is seen in the numerical results in Fig. <ref>b. Finally, we find that the contribution from the gradients of μ_B/T in Eq. (<ref>) do not play a significant role in small systems at 200 GeV. § CONCLUSIONS In conclusion, we have confirmed and extended the studies of the production plane polarization observable in asymmetric collisions, first discussed in <cit.>. We included hot spot and impact parameter fluctuations event-by-event and expanded our scope to study its system size dependence. The pseudo-rapidity dependence of the (η) observable can serve as a sensitive probe for the initial longitudinal flow velocity. We predict a linear p_T dependence of (p_T) for p_T > 1 GeV at mid-rapidity. Its slope can provide information about the relative sizes of individual components of the thermal vorticity tensor. We further provide model predictions for asymmetric collision systems in the LHCb SMOG experiment setup. These asymmetric collisions at = 72 GeV reveal more non-trivial longitudinal dynamics. We systematically explored the sensitivity of the observable on various model parameters. The proposed observable is a promising probe of hydrodynamic behavior in small asymmetric systems. Quantitative comparisons will set valuable constraints on the longitudinal dynamics and early-stage stopping mechanism. We look forward to experimental investigations in this direction. § ACKNOWLEDGMENTS This work is in part supported by the U.S. Department of Energy (DOE) under award numbers DE-SC0021969 and DE-SC0020651. C.S. acknowledges a DOE Office of Science Early Career Award. M.L. acknowledges the support of the Fulbright Commission of Brazil. J.T. was supported by FAPESP projects 2017/05685-2 and CNPq through 309174/2020-1. G.T. acknowledges support from Bolsa de produtividade CNPQ 305731/2023-8, Bolsa de pesquisa FAPESP 2023/06278-2. This research was done using resources provided by the Open Science Grid (OSG) <cit.>, which is supported by the National Science Foundation award #2030508 and #1836650.
http://arxiv.org/abs/2407.03011v1
20240703111147
Backward DVCS on the pion in Sullivan processes
[ "Abigail Rodrigues Castro", "Cedric Mezrag", "Jose M. Morgado Chávez", "Bernard Pire" ]
hep-ph
[ "hep-ph", "hep-ex", "hep-th", "nucl-ex", "nucl-th" ]
[a] Abigail Castro, [a] Cedric Mezrag, [a] Jose M. Morgado Chávez, [b] Bernard Pire [a] IRFU, CEA, Université Paris-Saclay, 91191 Gif Sur Yvette, France [b] CPHT, CNRS, Ecole polytechnique, Institut Polytechnique de Paris, 91128 Palaiseau, France abigail.rodriguescastro@cea.fr cedric.mezrag@cea.fr jose-manuel.morgadochavez@cea.fr bernard.pire@polytechnique.edu The purpose of this work is to perform a systematic feasibility study of measuring deeply virtual Compton scattering on the pion in the backward region in Sullivan processes, in the framework of collinear QCD factorization where pion-to-photon transition distribution amplitudes (TDAs) describe the photon content of the π meson. Our approach employs TDAs based on the overlap of light-front wave functions, using a previously developed pion light-front wave function and deriving a consistent model for the light-front wave functions of the photon. This work is expected to lead to an estimate of the cross sections that could be measured at the future U.S. and Chinese electron-ion colliders. It will also provide a comparison with the forward Sullivan DVCS case, which gives access to pion GPDs and for which a strong signal is expected. 31st International Workshop on Deep Inelastic Scattering (DIS2024) 8–12 April 2024 Grenoble, France Backward DVCS on the pion in Sullivan processes July 8, 2024 =============================================== § FORWARD AND BACKWARD DVCS IN A SULLIVAN PROCESS Hard exclusive reactions are the golden way to perform quark and gluon tomography of hadrons. The tomography of mesons is a difficult task since there is no meson target. To circumvent this difficulty, Sullivan processes <cit.> consider quasi-real π mesons emitted by a nucleon target. Near-forward deeply virtual Compton scattering in a Sullivan process (see Fig. <ref> – left panel) has indeed been proposed <cit.> to extract π-meson leading-twist generalized parton distributions (GPDs), and feasibility studies have been performed <cit.>. Backward processes have recently been the subject of renewed interest <cit.>, in particular in the context of a factorized description of their amplitudes in terms of transition distribution amplitudes <cit.>, which generalize the notion of GPDs. We thus consider the reactions e(l) + p(p) → e(l') + γ(q') + π^+(p'_π) + n(p') , in the near-backward region where -u_π = -(q-p'_π)^2 is small, with q = l-l' and p_π = p-p' the virtual photon and π-meson momenta, respectively. We define the energy fractions x_B = Q^2/2(p-p')· q,  ξ=x_B/2-x_B . In this context, we identify two contributions to the amplitude of reaction Eq. (<ref>): a strong process, backward deeply virtual Compton scattering (bDVCS, Fig. <ref> – central panel), and a purely electromagnetic one, the Bethe-Heitler process (see right panel of Fig. <ref>). The latter has a negligible amplitude at small values of -u_π and we can ignore it in our analysis. We thus focus solely on the bDVCS contribution. § BACKWARD DVCS AMPLITUDE From now on, we assume that the pion source part of the Sullivan process can be factorised (see <cit.>) and we focus on the bDVCS part of the diagram.
The proof of factorization of the bDVCS amplitude as a convolution of a short distance coefficient function (C_F), a meson distribution amplitude (Φ^π) and a π→γ TDA (A^πγ) follows the line of the factorization proof of meson forward deep electroproduction <cit.> on a nucleon, with the nucleon GPD replaced by the TDA. The leading twist QCD amplitude 𝒜_L for the process γ^*_L π^+→π^+γ thus reads 𝒜_L^π^+ (ξ,u,Q^2) = 16πα_s e/9Q∫ dx dz C^ud_F(x,z,ξ) Φ^π^+(z)A^π^+γ (x, ξ,u) , where C_F reads at leading order <cit.> C^qq'_F(x,z,ξ) = 1/1-ze_q/ξ-x-iε - 1/ze_q'/ξ+x-iε , q ≠ q'. Since the pion DA is symmetric in z → (1-z), the z-integration factorizes in a prefactor ∫ dz Φ^π(z)/z. § MODEL FOR THE PI-TO-GAMMA TDAS There are four leading twist π→γ TDAs: one vector, one axial and two transversity. In our process, only the axial quark TDA A^π contributes. It is defined as e/f_πϵ·Δ A^π^+ γ =1/2∫dz^-/2πe^ixP^+z^-.⟨γ,P+Δ/2|ψ_q'(-z/2)γ^+γ_5ψ_q(z/2)|π^+,P-Δ/2⟩|_z^+=z^⊥_i=0 . where ϵ is the outgoing photon polarisation vector and f_π the pion decay constant. Few models for the π→γ TDAs already exist <cit.>. The starting point for ours is the lowest Fock state description of a π^+ meson wave function : |π^+,↑↓⟩ = ∫dk_⊥/16π^3x/√(x(1-x))ψ_↑↓[b_u,↑^†(x,k_⊥) d_d,↓^†(1-x,-k_⊥). -. b_u,↓^†(x,k_⊥) d_d,↑^†(1-x,-k_⊥) ] |0 ⟩ with the Light-Front wave functions <cit.> (LFWF) given as: ψ^π_↑↓(x,k_⊥) = 8 √(15)πM^3/(k_⊥^2 + M^2)^2 x(1-x), where M is a mass scale fitted to M=318 MeV. The pion presents a second independent LFWF, ψ^π_↑↑ associated with the Fock state |π^+,↑↑⟩, whose computation to the contribution to the TDA is ongoing. For the photon case, the Fock state decomposition was employed as presented in <cit.> to obtain the photon states, and the Light-Front wave functions were derived based on the methodologies outlined in <cit.> (see <cit.> for an alternative discussion). In such a two-body approach, the TDA can be further decomposed into flavour contributions, labelling the quark flavour involved in the formation of the outgoing photon, A^π^+ γ = e (e_u A^π^+ γ_u+e_d A^π^+ γ_d) . Using the overlap method developed for GPDs <cit.>, we obtain the TDA in terms of these LFWFs[Applying this method and using our photon LFWF allows us to recover the anomalous GPD of the photon <cit.>.] in the DGLAP region x≥|ξ| in closed form: .A^π^+γ_q(x,ξ,t)|_x≥|ξ|=.A^π^+γ_q(↑↓)(x,ξ,t)|_x≥|ξ|+.A^π^+γ_q(↑↑)(x,ξ,t)|_x≥|ξ| , where (↑↓) and (↑↑) labels the quark helicity projections, and thus the different LFWFs contributions to the TDA. As an illustration, consider the contribution to the π→γ TDA generated by the dd Fock-space-expansion of the photon state (i.e. the u quark of the π^+ enters the hard kernel): .A^π^+γ_d(↑↓)(x,ξ,u)|_x≥ |ξ|=𝒩_↑↓(1-x)^2(x+ξ)/(1-ξ^2)^2(1+ξ)[(ξ-x)+(1-x)]τ(2τ+1)-√(τ/τ+1)tanh^-1(√(τ/τ+1))/τ^2(1+τ). where τ= -(1-x)^2/(1-ξ^2)u/4M^2. According to the covariant extension strategy <cit.>, the knowledge of TDAs within the DGLAP region uniquely[Importantly, TDAs being flavor non-singlet objects, no D-term–like ambiguity arises, in contrast to the GPD case.] specifies their ERBL domain. In a nutshell: as GPDs, π→γ TDAs benefit from a representation as the Radon transform of double distributions. Provided that the solution to the inverse Radon transform problem exists and is unique when TDAs are known only on the DGLAP region <cit.>, the associated double distribution can be found and employed afterwards to reconstruct the ERBL domain <cit.>. For the case above, Eq. 
(<ref>), in the u→ 0, the double distribution, h, is found to be a polynomial in the kinematic variables (β,α) h^π^+γ_d(↑↓)(β,α,0) = -𝒩_↑↓[1/3-10/3α-α^2+4α^3-10/3β+6αβ+4α^2β+7β^2-4αβ^2-4β^3], which yields [ .A^π^+γ_d(↑↓)(x,ξ,0)|_x≤|ξ| = 𝒩_↑↓/3ξ^4(1+ξ)^3[x^2ξ^2(5+ξ(20+3ξ))-x^4(3+ξ(10+11ξ)).; ; .-2ξ^4(1+ξ)-xξ^3(1-ξ(8+5ξ))+x^3ξ(1-13ξ^2)].; ] In the general case where u≠ 0, we explore the numerical procedure for the solution of the inverse Radon transform problem described in <cit.>. This allows us to get a parametrization of the double distribution, from which we calculate the TDA. Our results are shown on Fig. <ref>. Note that contrary to the GPD case, no symmetry in ξ can help improve the numerical computations. In our model, the [-1,-ξ] region is also contributing to the amplitude thanks to the symmetry relation between the A^π^+γ_u(↑↓) contribution and the A^π^+γ_d(↑↓) one: A^π^+γ_u(↑↓)(x,ξ,u) = A^π^+γ_d(↑↓)(-x, ξ,u), § CONCLUSION We have presented here our preliminary result in our attempt to evaluate the measurability of Sullivan Backward DVCS at existing and future facilities. As expected, the formalism developed in the case of forward Sullivan DVCS can be adapted to the backward case, with though a few additions and complications. Yet, we demonstrated how the amplitude can be assessed in a simple LFWFs model. Before computing the amplitude itself, one needs to take into account of the second contribution to the TDA, with aligned quark helicities. We foresee no additional difficulties, and we expect to obtain the backward DVCS amplitude soon, after evolving the TDA to scales relevant for current and future experimental facilities. Refinement can be envisioned, such as NLO corrections, and other processes, like TCS, could be addressed. Last but not least, replacing the produced π^+ meson by a longitudinally polarized ρ^+ meson will test the vector π→γ TDA. We acknowledge useful discussions with Maxime Defurne, Kirill Semenov-Tian-Shansky and Lech Szymanowski. This research was funded in part by l’Agence Nationale de la Recherche (ANR), project ANR-23-CE31-0019 and by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) – Finance Code 001. For the purpose of open access, the authors have applied a CC-BY public copyright licence to any Author Accepted Manuscript (AAM) version arising from this submission. JHEP § SUM OF CONTRIBUTIONS this section should be deleted in the final version and is here only for reaching agreement on eq. (<ref>). 𝒜_L^π^+ = e ∑_q' e_q'𝒜_L^π^+;q' = 16 πα_s e^2/9Q∑_q' e_q'dz Φ_π^+(z) ∫dx C_F^qq'(x,z,ξ) A_q'^π^+;γ(x,ξ,u) = 16 πα_s e^2/9Q∫dz Φ_π^+(z) [ e_d ∫_-ξ^1 dx (1/z̅e_u/ξ-x-iϵ A_d^π^+;γ(x,ξ,u)-1/ze_d/ξ+x-iϵ A_d^π^+;γ(x,ξ,u)) . + . e_u ∫_-1^ξdx (1/z̅e_d/ξ-x-iϵ A_u^π^+;γ(x,ξ,u)-1/ze_u/ξ+x-iϵ A_u^π^+;γ(x,ξ,u)) ] However, from our previous computations, we have: A_u^π^+;γ(x,ξ,u) = A_d^π^+;γ(-x,ξ,u) Using this symmetry we obtain: 𝒜_L^π^+ = e ∑_q' e_q'𝒜_L^π^+;q' = 16 πα_s e^2/9Q∫dz Φ_π^+(z) [ e_d ∫_-ξ^1 dx (1/z̅e_u/ξ-x-iϵ A_d^π^+;γ(x,ξ,u)-1/ze_d/ξ+x-iϵ A_d^π^+;γ(x,ξ,u)) . + . e_u ∫_-1^ξdx (1/z̅e_d/ξ-x-iϵ A_d^π^+;γ(-x,ξ,u)-1/ze_u/ξ+x-iϵ A_d^π^+;γ(-x,ξ,u)) ] = 16 πα_s e^2/9Q∫dz Φ_π^+(z) [ e_d ∫_-ξ^1 dx (1/z̅e_u/ξ-x-iϵ A_d^π^+;γ(x,ξ,u)-1/ze_d/ξ+x-iϵ A_d^π^+;γ(x,ξ,u)) . + . e_u ∫_-ξ^1 dx (1/z̅e_d/ξ+x-iϵ A_d^π^+;γ(x,ξ,u)-1/ze_u/ξ-x-iϵ A_d^π^+;γ(x,ξ,u)) ] = 16 πα_s e^2/9Q∫dz/zΦ_π^+(z) [ e_d e_u ∫_-ξ^1 dx (1/ξ-x-iϵ +1/ξ+x-iϵ)A_d^π^+;γ(x,ξ,u) . - . 
∫_-ξ^1 dx (e_d^2/ξ+x-iϵ A_d^π^+;γ(x,ξ,u)+e_u^2/ξ-x-iϵ A_d^π^+;γ(x,ξ,u)) ] This does not provide good simplifications. Instead let us define: A_+^π = 1/2(A_u^π^+;γ(x,ξ,u) + A_d^π^+;γ(x,ξ,u)), A_-^π = 1/2(A_u^π^+;γ(x,ξ,u) - A_d^π^+;γ(x,ξ,u)) which are respectively even and odd in x. Then, 𝒜_L^π^+ = e ∑_q' e_q'𝒜_L^π^+;q' = 16 πα_s e^2/9Q∫dz/zΦ_π^+(z) ∫_-1^1 dx (2e_d e_u/ξ-x-iϵ A_+^π^+;γ(x,ξ,u)-1/ξ+x-iϵ(e_d^2 A_d^π^+;γ(x,ξ,u)+e_u^2 A_u^π^+;γ(x,ξ,u)) ) = 16 πα_s e^2/9Q∫dz/zΦ_π^+(z) ∫_-1^1 dx [2e_d e_u/ξ-x-iϵ A_+^π^+;γ(x,ξ,u)-e_d^2 + e_u^2/ξ+x-iϵA_+^π^+;γ(x,ξ,u) . . - 1/ξ+x-iϵ1/2(e_d^2 (A_d^π^+;γ(x,ξ,u)-A_u^π^+;γ(x,ξ,u))+e_u^2 (A_u^π^+;γ(x,ξ,u)-A_d^π^+;γ(x,ξ,u))) ] = 16 πα_s e^2/9Q∫dz/zΦ_π^+(z) ∫_-1^1 dx [-(e_u-e_d)^2/ξ-x-iϵ A_+^π^+;γ(x,ξ,u)-e_u^2-e_d^2/ξ+x-iϵA_-^π^+;γ(x,ξ,u) ] § SUM OF CONTRIBUTIONS this section should be deleted in the final version and is here only for reaching agreement on eq. (<ref>). 𝒜_L^π^+ = 16 πα_s e^2/9Qdz Φ_π^+(z) ∫dx C_F^qq'(x,z,ξ) ( e_u A_u^π^+;γ(x,ξ,u) + e_d A_d^π^+;γ(x,ξ,u)) = 16 πα_s e^2/9Q∫dz Φ_π^+(z) [ e_d ∫_-ξ^1 dx (1/z̅e_u/ξ-x-iϵ A_d^π^+;γ(x,ξ,u)-1/ze_d/ξ+x-iϵ A_d^π^+;γ(x,ξ,u)) . + . e_u ∫_-1^ξdx (1/z̅e_u/ξ-x-iϵ A_u^π^+;γ(x,ξ,u)-1/ze_d/ξ+x-iϵ A_u^π^+;γ(x,ξ,u)) ] However, from our previous computations, we have: A_u;↑↓^π^+;γ(x,ξ,u) = A_d;↑↓^π^+;γ(-x,ξ,u) Using this symmetry we obtain: 𝒜_L^π^+ = 16 πα_s e^2/9Q∫dz Φ_π^+(z) [ e_d ∫_-ξ^1 dx (1/z̅e_u/ξ-x-iϵ A_d;↑↓^π^+;γ(x,ξ,u)-1/ze_d/ξ+x-iϵ A_d;↑↓^π^+;γ(x,ξ,u)) . + . e_u ∫_-1^ξdx (1/z̅e_u/ξ-x-iϵ A_d;↑↓^π^+;γ(-x,ξ,u)-1/ze_d/ξ+x-iϵ A_d;↑↓^π^+;γ(-x,ξ,u)) ] = 16 πα_s e^2/9Q∫dz Φ_π^+(z)/z[ ∫_-ξ^1 dx (e_u e_d-e_u e_d/ξ-x-iϵ A_d;↑↓^π^+;γ(x,ξ,u)+e_u^2-e_d^2/ξ+x-iϵ A_d;↑↓^π^+;γ(x,ξ,u)) ] = 16 πα_s e^2/9Q∫dz/zΦ_π^+(z) [ ∫_-ξ^1 dx e_u^2-e_d^2/ξ+x-iϵ A_d;↑↓^π^+;γ(x,ξ,u)] If I have not made any mistake, this yields the imaginary part of the amplitude to be zero, as A_d;↑↓^π^+;γ(-ξ,ξ,u) = 0.
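As a side numerical check of the two-body light-front wave function quoted in Section 3 (ψ^π_↑↓ with M = 318 MeV), the sketch below integrates it over the transverse momentum; the surviving x-profile is proportional to x(1-x), and the prefactor matches the analytic value of the k_⊥ integral. Normalization conventions of the Fock expansion are not tracked here, so only the shape and the relative constant are meaningful.

```python
# Quick numerical sketch built on the quoted two-body pion light-front wave function,
# psi(x, k_perp) = 8*sqrt(15)*pi*M^3 * x*(1-x) / (k_perp^2 + M^2)^2 with M = 0.318 GeV.
# Integrating over k_perp leaves an x-profile proportional to x*(1-x); the constant is
# just a cross-check of the elementary integral int d^2k (k^2 + M^2)^-2 = pi/M^2.
import numpy as np
from scipy.integrate import quad

M = 0.318  # GeV

def psi(x, kperp):
    return 8.0 * np.sqrt(15.0) * np.pi * M**3 * x * (1.0 - x) / (kperp**2 + M**2)**2

def kperp_integral(x):
    # int d^2k psi = 2*pi * int_0^inf dk k psi(x, k)
    val, _ = quad(lambda kk: 2.0 * np.pi * kk * psi(x, kk), 0.0, np.inf)
    return val

xs = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
profile = np.array([kperp_integral(x) for x in xs])
print("profile / [x(1-x)] :", profile / (xs * (1.0 - xs)))
print("expected constant  :", 8.0 * np.sqrt(15.0) * np.pi**2 * M)
```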
http://arxiv.org/abs/2407.03120v1
20240703140553
Theory of spin and orbital charge conversion at the surface states of Bi_{1-x}Sb_x topological insulator
[ "Armando Pezo", "Jean-Marie George", "Henri Jaffrès" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall" ]
Laboratoire Albert Fert, CNRS, Thales, Université Paris-Saclay, 91767, Palaiseau, France Laboratoire Albert Fert, CNRS, Thales, Université Paris-Saclay, 91767, Palaiseau, France Laboratoire Albert Fert, CNRS, Thales, Université Paris-Saclay, 91767, Palaiseau, France § ABSTRACT Topological insulators are quantum materials characterized by Time-reversal protected surface states (TSS) which make them appealing candidates for the design of next generation of highly efficient spintronic devices. The very recent observation of large transient spin-charge conversion (SCC) and subsequent powerful THz emission from Co|Bi_1-xSb_x bilayers clearly demonstrates such potentiality and feasibility for the near future. Amongst the exotic properties appearing in and at the surface of such quantum materials, spin-momentum locking (SML) remains as a key ingredient to effectively convert the spin degree of freedom into a charge or a voltage signal. In that sense, in this work we will provide some clear theoretical and numerical insights implemented by multiorbital and multi-layered tight-binding methods (TB) to clarify our recent experimental results obtained by THz-TDS spectroscopy. Spin- and orbital-charge conversion at the surface states of Bi_1-xSb_x Topological insulator Henri Jaffrès July 8, 2024 ============================================================================================= Introduction: The theoretical proposals made almost two decades ago for exotic materials displaying an insulating bulk with metallic surfaces states <cit.> led quickly to their experimental observation by measuring the spin Hall conductance in HgTe/CdTe quantum wells <cit.> and more recently in two-dimensional materials like bismuthene <cit.>. The large Spin Orbit Coupling (SOC) in Bi-based materials, makes them ideal candidates for spintronic and valleytronic applications <cit.>, furthermore, the so-called band inversion mechanism responsible for the emergence of its topological properties, it's only possible to occur due to its large SOC. Among such properties, the Bulk-Boundary correspondence <cit.>, relates their topological classification to the existence of spin-polarized surface states <cit.> displaying a strong spin-momentum locking, preventing back scattering as long as disorder does not break heavily Time reversal symmetry <cit.>, desirable for new generation spintronic devices. Indeed, recently it was experimentally probed the large spin-to-charge conversion signals in TI's, which have been proved to be more efficient than usual Heavy metals as Pt and W. In particular, one promising material came to be Bi_1-xSb_x alloys where by means of THz Time-Delay spectroscopy measurements, it was successfully demonstrated that the particular six-fold symmetric spin-momentum locking provided by its topological surfaces states, enables to reach large values for the spin-to-charge conversion that take place in Co/Bi_1-xSb_x <cit.>. On the other hand, the rising of the orbital angular momentum from electronic quasiparticles as a new degree of freedom has gained a lot of attention in recent years <cit.>, mainly to the possibility of getting a proficient spin current manipulation without the restricting requirement of having materials with large SOC. 
In this scenario, it was postulated that the orbital Hall effect arises from the orbital texture present even in centrosymmetric materials <cit.>, such that the orbital Hall conductivity can reach even larger values than those expected for the spin Hall effect in heavy elements such as Pt and W <cit.>. Although this new ingredient broadens the set of materials considered for spintronics, it also poses new challenges due to its intrinsic entanglement with the spin degree of freedom. In this regard, one way to diminish the role of SOC is to consider sufficiently light materials, as was recently demonstrated in Cu/O_x <cit.>. In this work, we aim to disentangle the spin- and orbital-to-charge conversion in Bi_1-xSb_x. To do so, we use a Hamiltonian with modified surface matrix elements, developed to capture the experimental ARPES measurements reported previously together with the TDS experiments on Co/BiSb <cit.>, built on a parameterized tight-binding Hamiltonian <cit.> in which the electronic properties of the alloy are obtained from the Virtual Crystal Approximation (VCA) <cit.>. This approach was used to demonstrate how the non-trivial topological phase arises within a given range of the concentration x <cit.>. From the transport perspective, there are two possible effects able to transform spin currents into charge currents and vice versa. On the one hand, we have the spin Hall effect, whereby an applied electric field generates a transverse spin current originating from the Berry curvature, a property of the material's band structure. This quantity is calculated using

σ_xy^z = (e/ħV) ∑_𝐤, n≠m ⟨m,𝐤|ĵ_y^z|n,𝐤⟩⟨n,𝐤|v̂_x|m,𝐤⟩/(ε_m-ε_n)^2 f_𝐤,n,

where ĵ_y^z = 1/2 {v̂_y, ŝ_z} is the spin-current operator given in terms of the anti-commutator of the velocity operator v̂_y and the spin operator ŝ_z, v̂_x is the velocity operator aligned with the electric field, both evaluated for Bloch states |n,𝐤⟩ with corresponding eigenvalues ε_n, f is the Fermi-Dirac distribution function, and e and V represent the electron charge and the unit-cell volume, respectively. In order to extend the calculation of Eq. <ref> to the orbital Hall conductivity, we replace ŝ_z by the orbital operator 𝐋_z, which in this case is expressed in terms of the basis {s,p_x,p_y,p_z}⊗{↑,↓}. The second effect that leads to spin-to-charge conversion is the Rashba-Edelstein effect (REE), where inversion-symmetry breaking at a given interface allows the electrical manipulation of spin currents by creating an out-of-equilibrium spin accumulation directly linked to the spin-momentum locking provided by the Rashba effect. These two effects are usually attributed to different origins, the former being connected to the bulk (intrinsic) whereas the latter arises solely at the interface (extrinsic), and they are complementary in the description of several phenomena such as the spin-orbit torque and its reciprocal, the spin-pumping effect. To quantify the inverse REE (IREE), which accounts for the generation of a charge current as a result of spin injection, we use the following relation <cit.>:

Λ_xy = ∑_n,𝐤 ⟨ŝ_y⟩_n,𝐤 ⟨v̂_x τ⟩_n,𝐤 (∂ f_n,𝐤/∂ E_n,𝐤) / ∑_n,𝐤 ⟨v̂_x⟩_n,𝐤 (∂ f_n,𝐤/∂ E_n,𝐤),

where the expectation values of the velocity operator v̂_x and the spin operator ŝ_y are taken in the state with band index n and momentum 𝐤, weighted by the derivative of the Fermi distribution function, and τ is the momentum relaxation time. In this sense, the IREE is characterized by a length.
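As an illustration of how Eq. <ref> is evaluated in practice, the sketch below sums the Kubo expression over a discrete k-grid for a generic two-band Bloch Hamiltonian. This is only a schematic: the Rashba-type h(k), the chemical potential, the grid and all prefactors are illustrative placeholders, not the multiorbital Bi_1-xSb_x tight-binding model used in this work, and the imaginary part of the interband product is retained as the intrinsic (Berry-curvature-like) contribution, with conventions left loose.

import numpy as np

# Pauli matrices (spin-1/2); hbar = e = 1 in these illustrative units.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def h_k(kx, ky, m_eff=1.0, alpha=0.5):
    """Toy two-band Rashba Hamiltonian, a stand-in for the actual TB model."""
    return (kx**2 + ky**2) / (2.0 * m_eff) * I2 + alpha * (kx * sy - ky * sx)

def velocities(kx, ky, dk=1e-5):
    """v_x, v_y = dH/dk_x, dH/dk_y via central finite differences."""
    vx = (h_k(kx + dk, ky) - h_k(kx - dk, ky)) / (2.0 * dk)
    vy = (h_k(kx, ky + dk) - h_k(kx, ky - dk)) / (2.0 * dk)
    return vx, vy

def sigma_xy_z(mu=0.3, nk=81, kmax=2.0):
    """Brute-force k-grid sum of the Kubo expression for sigma^z_xy (arbitrary units)."""
    ks = np.linspace(-kmax, kmax, nk)
    total = 0.0
    for kx in ks:
        for ky in ks:
            H = h_k(kx, ky)
            vx, vy = velocities(kx, ky)
            jyz = 0.5 * (vy @ sz + sz @ vy)        # spin-current operator {v_y, s_z}/2
            eps, U = np.linalg.eigh(H)             # columns of U are the Bloch states |n,k>
            for n in range(2):
                if eps[n] > mu:                    # Fermi factor f_{k,n} at zero temperature
                    continue
                for m in range(2):
                    de = eps[m] - eps[n]
                    if m == n or abs(de) < 1e-8:   # skip degenerate pairs
                        continue
                    jy_mn = U[:, m].conj() @ jyz @ U[:, n]
                    vx_nm = U[:, n].conj() @ vx @ U[:, m]
                    # Imaginary part of the interband product: intrinsic contribution;
                    # prefactor and Re/Im conventions vary between references.
                    total += np.imag(jy_mn * vx_nm) / de**2
    dk = ks[1] - ks[0]
    return total * dk * dk / (2.0 * np.pi)**2

print("sigma^z_xy (toy model, arbitrary units):", sigma_xy_z())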
Using Eq. <ref>, we calculated the spin and orbital Hall conductivities depicted in Fig. <ref>. In order to validate our TB parametrization, we compare the intrinsic response calculated with DFT (a) and with the parametrized TB Hamiltonian (b). The intrinsic effect is in agreement with previous calculations <cit.>, reaching moderate values within the energy window depicted in Fig. <ref> (b), where three different components of the SHC tensor are shown. To complement our analysis of the intrinsic effect, namely the ISHE, we also extend the analysis to the contribution arising in the slab geometry; for that, we have calculated the spin Hall conductivity along the non-periodic direction following <cit.>, shown explicitly in the Supplementary Material. Our simulations reveal the strong impact of the thickness on the ISHE: there is a direct relation between the thickness of the sample and the calculated layer-resolved projection of the SHC. This agrees with the conclusions drawn in <cit.>, where the main contribution to the spin-to-charge conversion in thin films was found to have a strongly surface-like character. In contrast, by considering Eq. <ref>, we obtained the IREE coefficients, which display larger values for the orbital contribution than for the spin one, although they reach their maxima at different energies within the [0.1,0.4] eV window. For the IREE, we also addressed the layer dependence of the response by showing the spin and orbital calculations for the first three layers near the interface, with 1 BL denoting the layer at the interface itself. From previous results it is already known that, in the case of Bi_1-xSb_x, the main contribution to the spin-to-charge conversion arises from the topological surface states; the relevant quantity for explaining the experimental results is depicted in Fig. <ref> (b), where we also note that the orbital part decays faster than its spin counterpart, e.g., the orbital IREE lengths of the second and third layers are smaller than the spin ones, even though the first layer presents the opposite behaviour. The spin-to-charge conversion taking place in Bi_1-xSb_x was previously explored in <cit.>, where the origin of the THz emission was traced to the surface states appearing in the topologically non-trivial phase. We contrast the emergence of the spin and orbital textures at the Fermi surface in Fig. <ref> (a) and (b), respectively. Similarly, the signatures of the Rashba-like dispersion are shown in (b) and (c), which coincide with a vertical cut at k_y=0 through the previous plots. We note that the spin and orbital textures reach opposite values within the energy windows depicted in Fig. <ref>, but, in contrast to the spin projection, the orbital one resides strongly near the Γ point. This difference between the spin and orbital behaviour is related to the Rashba-Edelstein effect depicted in Fig. <ref> (b), where the orbital response is almost twice as large in magnitude as its spin counterpart. Now we move to the purely orbital-driven effect. To access the role of the orbital degree of freedom, we decompose the IREE by considering the operators P̂_±1 = L̂_y(L̂_y ± 1)/2 and P̂_0 = Î - L̂_y^2, where each operator P̂ acts on the Bloch states, projecting them onto the p_x, p_y and p_z orbitals, and Î is the identity.
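The algebra of these projectors can be verified directly in the Cartesian p-orbital basis. The short check below is added for illustration only; the projector expressions above are a reconstruction consistent with P̂_0 = Î - L̂_y^2 and with the completeness relation Î = P̂_-1 + P̂_0 + P̂_+1 quoted next.

import numpy as np

# Orbital angular momentum L_y in the Cartesian {p_x, p_y, p_z} basis,
# (L_a)_{bc} = -i * epsilon_{abc} with hbar = 1.
Ly = np.array([[0, 0, 1j],
               [0, 0, 0],
               [-1j, 0, 0]], dtype=complex)
I3 = np.eye(3, dtype=complex)

# Projectors onto the L_y = +1, -1 and 0 subspaces (reconstructed form of the P operators).
P_plus = Ly @ (Ly + I3) / 2.0
P_minus = Ly @ (Ly - I3) / 2.0
P_zero = I3 - Ly @ Ly

# Checks used in the discussion: completeness I = P_{-1} + P_0 + P_{+1} and idempotency.
assert np.allclose(P_plus + P_minus + P_zero, I3)
for P in (P_plus, P_minus, P_zero):
    assert np.allclose(P @ P, P)

print("P_0 (projects onto p_y, the component with no orbital moment along y):")
print(np.round(P_zero.real, 3))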
Another advantage of such a decomposition is that we can relate the orbital-to-charge contribution to the spin one by simply adding up the three projectors, noting that Î = P̂_-1 + P̂_0 + P̂_+1. Furthermore, the P̂_0 component should not give any contribution whenever the system is symmetric with respect to the x̂ axis. Indeed, from Fig. <ref> we clearly see how the ⟨ŝ P̂_±1⟩ and ⟨v̂_x⟩ projections are distributed throughout the first Brillouin zone for a given energy near the Fermi level, while the P̂_0 projection (not shown) gives a negligible contribution. In summary, we have described the role of spin- and orbital-to-charge conversion in the Bi_1-xSb_x topological insulator. By extending the analysis of spin transport to the orbital sector, we were able to quantify both the bulk spin and orbital contributions to the IS(O)HE, pointing out the small values of the total response, in agreement with the experimental findings. We then provided a detailed analysis of the Rashba-Edelstein effect arising from both degrees of freedom. In this last case, we disentangled the orbital contributions that lead to charge conversion such that, in terms of the orbital polarization projectors, there are mainly two orbital components when the system is symmetric with respect to the x̂ axis. This decomposition allowed us to isolate a purely orbital contribution that arises when this symmetry is broken by an in-plane magnetic exchange field. This study has been supported by the French National Research Agency under the project 'ORION' ANR-20-CE30-0022-02, the project 'DYNTOP' ANR-22-CE30-0026 and by a France 2030 government grant managed by the French National Research Agency PEPR SPIN ANR-22-EXSP0007 (SPINMAT).
http://arxiv.org/abs/2407.02161v1
20240702111057
A Tax-Subsidy Scheme for Efficient Investment in Renewable Generation Capacity
[ "Mohammad Reza Karimi Gharigh", "Lamia Varawala", "Mohammad Reza Hesamzadeh", "György Dán" ]
eess.SY
[ "eess.SY", "cs.SY" ]
A Tax-Subsidy Scheme for Efficient Investment in Renewable Generation Capacity

Mohammad Reza Karimi Gharigh^a, Lamia Varawala^b, Mohammad Reza Hesamzadeh^c, and György Dán^d

^a Corresponding author. KTH Royal Institute of Technology, Stockholm, Sweden. E-mail: mrkg@ee.kth.se. ^b KTH Royal Institute of Technology, Stockholm, Sweden. E-mail: varawala@kth.se. ^c KTH Royal Institute of Technology, Stockholm, Sweden. E-mail: mrhesa@kth.se. ^d KTH Royal Institute of Technology, Stockholm, Sweden. E-mail: gyuri@kth.se.

July 8, 2024

The environmental impact of energy production significantly affects system sustainability, which has motivated a shift towards renewable energy sources. Producer behavior in electricity markets is therefore crucial for achieving sustainability goals. In this paper, we address two key challenges concerning electricity markets and generation investment. Firstly, electricity markets are typically operated with competitive market clearing and merit-order dispatch, which neglect negative externalities from pollution. A Pigouvian tax is proposed in order to address the impact of these externalities on electricity prices and resolve this issue. Secondly, renewable energy sources entail low operational costs, which result in lower system prices and reduced profits for producers. Furthermore, producers face high investment costs when moving into renewable energy resources, which leads to strategic investment decisions. In order to mitigate this strategic behavior, subsidies are proposed equal to each producer's contribution to consumer surplus. These subsidies incentivize producers to decrease prices and increase consumer surplus, so producers are motivated to invest in socially optimal generation capacity. Finally, we demonstrate that implementing the proposed tax and subsidy does not increase the regulator's information burden.

Keywords: Renewable energy, electricity generation capacity, market power, environmental externalities, incentives. JEL codes: D43, D62, H21, H23, L11, L13, O21, Q41

§ INTRODUCTION

§.§ Motivation

Electricity generation contributes significantly to environmental pollution <cit.>: measurements show that power plants emit substantial amounts of carbon by consuming fossil fuels. At the same time, global environmental sustainability goals, including those of the United Nations <cit.>, call for cleaner electricity generation. For example, in Britain, the Office of Gas and Electricity Markets (Ofgem) aims for zero pollution by 2050 <cit.>. Indeed, British power systems are shifting toward renewable sources <cit.>, which reflects broader European Commission advocacy for market mechanisms to reduce emissions <cit.>.
We face a context where most governments have introduced auctions for Contract-for-Differences (CfDs), Power Purchase Agreements (PPAs), and capacity payments to ensure efficient investment in renewable generation capacity. The CfD contracts are used to hedge the risk for producers against the volatility of carbon prices in the future, as implemented by California's cap-and-trade program <cit.>. On the other hand, PPAs are contracts between an Independent System Operator (ISO) and an off-taker, which ensure a profitable electricity price for renewable energy resources for a period of 3 to 5 years. This type of hedge contract is implemented in some EU countries such as Spain, Germany, and France <cit.>. Finally, capacity payment is another approach to support clean energy and to leverage sustainable development. In this scheme, the system operator pays money to each producer based on their capacity, even if they do not produce electricity; this is implemented by the UK and Sweden, for instance <cit.>. Besides these market policies, carbon markets and emission regulations have also been introduced to accelerate the energy transition in one form or another. Carbon futures could hedge the risk for participants through this market. However, given the fact that negative externalities do not have a straightforward relation to the output level of generation, it might be challenging to design an optimal hedge contract that would hedge the risk of participants against the volatility of carbon prices in the carbon market. For this reason, regulators proposed carbon taxes or carbon prices to penalize outputs, leading to that renewable resources enjoy several forms of subsidy support. These new carbon markets and emission regulations, through different tax-subsidy schemes, motivate reforms in spot-market design. According to the above motivation, there have been some endeavors among scholars to address this issue using different approaches. §.§ Literature review The Independent System Operator (ISO) manages technical aspects and performs competitive market clearing in liberalized electricity markets. Based on supply and demand curves, generation prices are determined from merit-order dispatch considering power-system constraints <cit.>. Constraints regarding negative externalities are not considered in generation prices in a competitive market. There are two types of policies in order to address externalities, particularly carbon emissions: cap-and-trade and carbon tax. These policies aim to reduce carbon emissions. Currently, the carbon futures market is responsible for more than 90% of total carbon trade, indicating that participants use this market to hedge their risks <cit.>. On the other hand, some countries, such as China, have a successful carbon trade market and are planning to implement a carbon tax scheme <cit.>. This decision might result from the inefficiency of the cap-and-trade mechanism in the long-term operation of the electricity market, particularly in terms of capacity investment. It is shown that if we impose some price control in cross-border trade, it could easily lead to a reduction in the investment of renewable energy resources. <cit.> tried to combine a price control mechanism with carbon policies, and they showed that a carbon tax with a price control mechanism may lead to an increase in investment in renewable energy resources without the aforementioned negative impact of the price control mechanism. 
However, implementing a carbon tax poses challenges because the total pollution levels do not directly correlate with individual generation levels, which makes it difficult to allocate the exact amount of these negative externalities. In order to overcome these challenges, some carbon tax schemes have been developed. For instance, carbon taxes <cit.> or carbon allowances <cit.> are used separately and can be traded in exclusive markets. ISOs optimize the market outcomes considering wind or solar generation units <cit.> or hydro units in systems with hydro reservoirs <cit.>. Thus, ISOs could address externalities as the part of market clearing mechanism by imposing a Pigouvian tax on producers, effectively resolving this issue. The ISO can impose taxes on non-renewable energy sources with higher levels of pollution, which make these energy resources more expensive. This could resolve the lack of incentive for producers to invest in renewable energy sources. But, renewable energy sources consume free energy sources such as wind and solar to produce electricity, leading to decrease market clearing prices. As a result of reducing profits for producers, some producers therefore might withhold generation capacity or falsely declare higher costs to decrease generation, thereby increasing electricity prices and overall profits <cit.> and <cit.>. For example, <cit.> proposed an algorithm to identify whether a generator or producer is exercising withholding of generation levels. Indeed, their algorithm shows whether they reduce their capacity for economic incentives or technical issues through their validation within the Swedish electricity market. Authors in <cit.> discuss the generation flexibility and the potential for generators to exercise market power using their ramp-rate capabilities. In the European Union <cit.>, there are policies in place to force producers to generate at their full capacity and declare their true costs, which protects consumers from higher prices. Besides, price caps have also been proposed to mitigate strategic behavior <cit.>. <cit.> showed that a carbon price could lead to increased carbon emissions in a small power system comprising two nodes and two generators. Thus, the scheme of carbon charges imposed on each producer or generator is very substantial because it can easily lead to a system with a higher pollution level. Besides, <cit.> considers the problem of pollution and strategic behavior together by modeling pollution as a decreasing function of pollution volume. <cit.> proposed an incentive framework to prevent strategic behavior related to pollution while considering transmission lines and nodal pricing in their mechanism. Similar to strategic generation behavior, the system would suffer from strategic investment for the same reason and with the same effect, leading to an increase in electricity prices <cit.> and <cit.>. Indeed, shifting from traditional fossil-based power plants towards renewable energy plants requires high investment costs, which disincentivizes producers from adopting this generation technology. Consequently, strategic investments would occur in generation capacity. While strategic generation behavior can be prevented through policy measures, strategic investments cannot be prevented by similar policies. Therefore, the best solution is to turn to incentives. 
Some researchers have proposed a market mechanism with subsidies for monopolistic producers to incentivize them to maximize social welfare without detailing their cost functions <cit.>. For example, <cit.> proposed a subsidy scheme based on a two-stage formulation of the investment cost of batteries in order to ensure that investors invest in the right mixture and amount of generation at the right locations, which maximizes the social welfare of participants in the long term. They showed how this subsidy could shift the generation mixture towards carbon-free electricity systems through their simulation of the Israeli power system. Moreover, some scholars have extended this approach by using locational spot prices to determine consumer utilities in electricity markets. However, granting the entire surplus as a subsidy poses funding challenges in practice <cit.>. Besides, <cit.> showed that allocating the right strategic reserves in the electricity market, compared to a capacity mechanism, especially for renewable energy resources, could incentivize producers to invest in carbon-free energy resources, as for instance in Sweden. However, as discussed by <cit.>, the power system needs to retain some conventional generators in order to meet its demand at all times in the long term. As a result, they suggested combining renewable energy resources with a fuel-based technology, such as hydrogen, in their optimal mixture of generators. This mixture would align with zero carbon emissions if the hydrogen is generated by renewable energy resources, i.e., green hydrogen. §.§ Contributions Against this research background, the current paper contributes to the relevant literature as follows. First, it proposes a tax-subsidy scheme for producers based on their marginal contribution to consumer surplus. The proposed scheme induces social-welfare-maximizing decisions and considers discrete generation technologies and costs[This is inspired by the HRGV mechanism proposed in <cit.> and studied in <cit.>]. Second, the introduced tax-subsidy scheme addresses pollution externalities and strategic investment within its proposed formulation. It also respects the operational limits of power systems. Third, a detailed analytical example and a set of comprehensive numerical experiments are provided to carefully explain the operation of the tax-subsidy scheme. The rest of the paper is organized as follows. In Section <ref>, we present our model of the power system, including buses, transmission lines, generators and pollution. Then, we compare the socially optimal spot market generation outcomes, which account for pollution, to the competitive spot market clearing. Finally, in the same section, we derive the tax part of the scheme, which aligns the competitive market clearing with the socially optimal outcome. In Section <ref>, we model how investment in generation capacity affects the social welfare created in the spot market and compare the producers' profit-maximizing investment decisions to what is socially optimal. To align the producers' profit maximization with social welfare maximization, we derive the subsidy part of the scheme. In Section <ref>, we present the properties of the tax and subsidy scheme, and in Section <ref> and Section <ref> we present an analytical example and a case study illustrating the strengths of the proposed scheme. Finally, we conclude in Section <ref>.
§ GENERATION IN WHOLESALE POWER MARKETS §.§ Socially optimal spot market generation We consider a wholesale electricity market over a power system with a set 𝒩 of buses connected to each other by a set ℒ of transmission lines. The market participants comprise a set ℐ of producers forming an oligopoly, along with a large set of consumers modeled as the aggregated utility functions. Each producer has a set 𝒥 of generating technologies, where units employing each technology may be present at multiple buses. We use indices n, l, i, and j to refer to individual buses, lines, producers, and technologies, respectively. For simplicity, these variables vary over positive integer numbers (e.g., n∈ℕ,1≤ n≤|𝒩|). Additionally, we consider dynamically varying constraints and the social welfare maximization problem in the electricity spot market over a planning horizon of T dispatch intervals. We use index t to represent a single dispatch interval, where t∈ℕ,t≤ T. Additionally, we demonstrate the generated power output of producer j using technology j at bus n during time interval t as q_tinj and the corresponding generation capacity as k_inj. We assume that the generation capacity remains constant throughout the planning horizon. Specifically, for renewable energy sources like wind power plants, there exists an availability factor denoted by A_tinj, which represents the portion of available capacity during time interval t based on factors such as wind speed. Thus, we define the relative generation by r_tinj as the fraction of available capacity utilized for generation. Thus, the output power of these technologies would be represented as (<ref>). q_tinj = r_tinj A_tinj k_inj↔ω_tinj, ∀ t ∈ℕ, t ≤ T, n ∈𝒩, i ∈ℐ, j ∈𝒥 where ω_tinj is the associated Lagrangian multiplier. The generation output must be positive and less than its corresponding capacity, which is represented by (<ref>). 0 ≤ r_tinj≤ 1 ↔ ( μ_tinj, ν_tinj ), ∀ t ∈ℕ, t ≤ T, n ∈𝒩, i ∈ℐ, j ∈𝒥 Where μ_tinj and ν_tinj are the associated Lagrangian multipliers, each generation technology may also have its ramping limit denoted by R_inj. This limit constrains the relative output change between two neighboring time intervals. We would represent this limit as (<ref>), where ρ_tinj and σ_tinj are two Lagrangian multipliers associated with the upper and lower limits of the ramping rate. -R_inj≤ r_tinj - r_t-1inj≤ R_inj↔ ( ρ_tinj,σ_tinj ), ∀ t ∈ℕ, 2 ≤ t ≤ T, n ∈𝒩, i ∈ℐ, j ∈𝒥. Basically, power generation is associated with the cost and pollution for the producers related to the technology they employed. The generation cost function and the amount of pollution created by power output at bus n for producer i with technology j at output level q_tinj are represented as C_inj(q_tinj) and x_tinj(q_tinj), respectively. We assume that these two functions are static over the planning horizon, non-negative, non-decreasing, and convex with respect to q_tinj. As a result of these assumptions, these functions must be piece-wise first and second differentiable in q_tinj. They exhibit only jump discontinuities. Also, environmental pollution results in a negative externality, which is a function of the total pollution at each bus. The negative externality function at bus n is denoted by[In this paper, we adopt the convention that omitting any index implies a summation over that index e.g. x_tin=∑_j∈𝒥x_tinj.]E_n(x_tn) and is assumed to be static over the planning horizon. E_n(x_tn) is non-negative, non-decreasing and convex in x_tn. 
Since E_n(x_tn) is non-decreasing and convex, it must be piece-wise first and second differentiable in x_tn with only jump discontinuities. The power consumption in interval t at bus n is represented as d_tn, where it is expected to be positive. We denote this representation as constraint (<ref>). Correspondingly, the utility generated by a consumer in interval t at bus n through consumption d_tn is denoted by Ũ_tn(d_tn). Notably, Ũ_tn(d_tn) is non-decreasing and concave with respect to d_tn. Consequently, Ũ_tn(d_tn) can be modeled as a piece-wise first and second differentiable function in d_tn, exhibiting only jump discontinuities. 0 ≤ d_tn≤ D_tn↔ ( α_tn,β_tn ), ∀ t ∈ℕ, t ≤ T, ∀ n ∈𝒩. According to the power balance equation, the sum of power generated by producers is equal to the sum of power consumed by consumers in each time interval t. We represent this relationship as equation (<ref>). ∑_n ∈𝒩 d_tn - ∑_n ∈𝒩 q_tn=0 ↔ ( λ_t ), ∀ t ∈ℕ, t ≤ T. For transmission line l and bus n, we determine the flow of the line by considering the power generated by producers and the power consumed by consumers, using the power transfer distribution factor matrix (H_ln) with a linear expression <cit.>. The flow limitation of line l during time interval t is denoted by equation (<ref>). -F_l ≤∑_n∈𝒩 H_ln( d_tn - q_tn)≤ F_l ↔ ( τ_tl,ζ_tl ), ∀ t ∈ℕ, l ∈ℒ. In this paper, we represent vectors by square brackets with the running index as a subscript instead of using the set builder notation, e.g. [q_tinj]_tinj:=(q_tinj|t∈ℕ,t≤ T,n∈𝒩,i∈ℐ,j∈𝒥). Also, we assume that the running index runs over the entire set unless a condition is specified. By considering above constraints and parameters, and we could write social welfare maximization problem over a planning horizon of T as W ( [k_inj]_inj) = max_[ q_tinj, r_tinj]_tinj, [ d_tn]_tn∑_t=1^T ∑_n ( Ũ_tn( d_tn) - ∑_ij C_inj( q_tinj) - E_n ( ∑_ij x_tinj( q_tinj) ) ) subject to the constraints (<ref>) to (<ref>). In the following, we are trying to remove dependency of social welfare maximization problem in respect to d_tn. First, we may separate the demand d_tn out from the social welfare maximization problem by defining an aggregate consumer utility as U_t ( [q_tn]_tn) = max_[ d_tn]_tn∑_n Ũ_tn( d_tn) ∀ t ∈ℕ, t ≤ T subject to the constraints (<ref>) to (<ref>). According to constraint (<ref>), it is clear that ∂ d_t/∂ q_tn=1, Ũ_tn(d_tn) is non-decreasing and concave in d_tn, and ∂ d_tn^'/∂ q_tn≥ 0, thus, U_t ( [ q_tn]_tn) is non-decreasing and concave function in q_tn. By similar deduction, we may deduce that U_t ( [ q_tn]_tn) would be a piece-wise first and second differentiable function in q_tn with only jump discontinuities. Also, in order to overcome the jump discontinuities and to allow the derivative of ∂ U_t([q_tn]_tn)/∂ q_tn with respect to q_tn^' to exist everywhere, we have only considered its right-hand derivative at every [q_tn]_tn. Moreover, we will follow this as a convention for all piece-wise differentiable functions. By using an aggregate utility function with respect to q_tn in (<ref>), the social welfare maximization problem would be W ( [ k_inj]_inj) = max_[ q_tinj, r_tinj]_tinj∑_t=1^T ( U_t ( [ q_tn]_tn) - ∑_n ( ∑_ij C_inj( q_tinj) + E_n ( x_tn) ) ) subject to the constraints (<ref>) to (<ref>) which do not depend on d_tn anymore. According to the Karush-Kuhn-Tucker (KKT) conditions for the above social welfare maximization problem, we obtain (<ref>) and (<ref>) equation, which relate to r_tinj and q_tinj, respectively. 
In (<ref>), we use the superscript * to represent the optimal values of the optimization variables in (<ref>) and the KKT multipliers of the constraints (<ref>) to (<ref>). A_tinj k_inj ω_tinj^* = -μ_tinj^* + ν_tinj^* - ρ_tinj^* + σ_tinj^* + ρ_t+1inj^* - σ_t+1inj^*, ∀ t ∈ℕ, t ≤ T, n ∈𝒩, i ∈ℐ, j ∈𝒥. ∂ C_inj/∂ q_tinj|_[ q_tinj]_tinj = [ q_tinj^*]_tinj + ω_tinj^* = ∂ U_t/∂ q_tinj|_[ q_tinj]_tinj = [ q_tinj^*]_tinj - ∂ E_n/∂ x_tn∂ x_tinj/∂ q_tinj|_[ q_tinj]_tinj = [ q_tinj^*]_tinj, ∀ t ∈ℕ, t ≤ T, n ∈𝒩, i ∈ℐ, j ∈𝒥. It is clear that r_tinj and q_tinj depend on [k_inj]_inj, and consequently W([k_inj]_inj) also depends on [k_inj]_inj. §.§ Competitive spot market clearing The electricity market is managed by the ISO, which determines the market price at each location and time interval. According to <cit.>, the ISO faces information gaps, particularly regarding the specifics of individual producers' technologies and pollution outputs, that is, [q_tinj]_j and [x_tinj]_j, respectively. This leads to difficulties in accurately assessing the environmental impact. Competitive generation levels and prices are derived from maximizing social welfare excluding the externality term. We represent these values by q_tn^† and ω_tinj^†, which are obtained from (<ref>) by eliminating the term ∂ E_n/∂ x_tn∂ x_tinj/∂ q_tinj. The current paper consistently positions producer costs on the left side of equation (<ref>) for clarity. The generation price at a given location and time interval is determined by equation (<ref>) without the negative-externality term ∂ E_n/∂ x_tn∂ x_tinj/∂ q_tinj. Thus, we obtain the price in interval t at bus n as P_tn^†( [k_inj]_inj) = ∂ U_t ( [ q_tn]_tn)/∂ q_tn|_[q_tn]_tn = [q_tn^†]_tn, ∀ t ∈ℕ, t ≤ T, n ∈𝒩. Since U_t ( [ q_tn]_tn) is non-decreasing in q_tn, P_tn^†( [ k_inj]_inj) is also non-negative. Furthermore, the ISO depends on producers to clearly declare their capacities and cost functions at bus n. Unlike <cit.>, we assume producers cannot act strategically and must truthfully declare these values, operating as price-takers. Hence, the price in (<ref>) is not influenced by [q_tn^†]_n. The maximum profit of producer i over the planning horizon of T dispatch intervals in a Cournot equilibrium is [ Y_i ( [ k_inj]_inj) ]_i = max_[ q_tinj, r_tinj]_tnj∑_t=1^T ∑_n ( P_tn^†( [k_inj]_inj) q_tinj - ∑_j C_inj( q_tinj) ) subject to the constraints (<ref>) to (<ref>). Therefore, by the Karush-Kuhn-Tucker (KKT) conditions, the producers' profit-maximizing condition can be written as ∂ C_inj/∂ q_tinj|_ q_tinj = q_tinj^† + ω_tinj^† = P_tn^†( [ k_inj]_inj), ∀ t∈ℕ,t≤ T, n∈𝒩, i∈ℐ, j∈𝒥, which differs from the social welfare maximizing condition (<ref>) only in that it lacks the non-negative term ∂ E_n/∂ x_tn∂ x_tinj/∂ q_tinj, by the design of P_tn^†. Consequently, since ∂ C_inj/∂ q_tinj increases with q_tinj, competitive generation levels are generally higher than or equal to the optimal levels, i.e. q_tinj^†≥ q_tinj^* ∀ t ∈ℕ, t ≤ T, n ∈𝒩, i ∈ℐ, j ∈𝒥. As discussed earlier, C_inj(q_tinj) is a convex, piece-wise second differentiable function in q_tinj. Consider first the case where k_inj is sufficiently large. Under this condition, and considering constraints (<ref>) to (<ref>), the value of μ_tinj^† remains independent of k_inj and the other KKT multipliers are zero, so q_tinj^† is also independent of k_inj. On the other hand, as k_inj decreases and becomes small enough, q_tinj^† varies linearly with respect to k_inj. Hence, q_tinj^† is continuous and piece-wise differentiable in k_inj.
Notably, since the objective function in (<ref>) is continuous and piece-wise differentiable in q_tinj, the KKT multipliers of the binding constraints and ω_tinj^† from (<ref>) are continuous and piece-wise differentiable in k_inj. Thus, P_tn^†([ k_inj]_inj) is piece-wise differentiable in k_inj. §.§ Proposed tax scheme In this section, we propose a tax scheme in the spot market under which the market outcome maximizes social welfare. Note that we use the superscript * to denote the optimal values of the optimization variables in the profit maximization under the tax scheme. Similarly to (<ref>), the price of generation in interval t at bus n is modified to (<ref>) under social welfare maximization with generation levels [q_tinj^*]_tinj. P_tn^* ( [ k_inj]_inj) = ∂ U_t ( [ q_tn]_tn)/∂ q_tn|_[ q_tn]_tn = [ q_tn^* ]_tn, ∀ t ∈ℕ, t ≤ T, n ∈𝒩. It is clear that P_tn^*([k_inj]_inj) in this scheme is non-negative. We assume that the tax imposed in interval t on producer i is denoted by ϕ_ti. Consequently, the maximum profit of producer i over the planning horizon of T dispatch intervals, [ Y_i ]_i ( [ k_inj]_inj), decreases by the amount of tax charged: max_[ q_tinj, r_tinj]_tnj∑_t=1^T ∑_n ( P_tn^* ( [ k_inj]_inj) q_tin - ∑_j C_inj( q_tinj) - ϕ_ti) subject to the constraints (<ref>) to (<ref>). Similarly, the producers' profit-maximizing condition is ( ∂ C_inj/∂ q_tinj + ∂ϕ_ti/∂ q_tinj) |_[ q_tinj]_tinj = [ q_tinj^* ]_tinj + ω_tinj^* = P_tn^* ( [ k_inj]_inj), ∀ t ∈ℕ, t ≤ T, n ∈𝒩, i ∈ℐ, j ∈𝒥. By comparing (<ref>) and (<ref>), we obtain that the optimal tax must satisfy ∂ϕ_ti/∂ q_tinj = ∂ E_n/∂ x_tn∂ x_tinj/∂ q_tinj, ∀ t ∈ℕ, t ≤ T, n ∈𝒩, i ∈ℐ, j ∈𝒥. By integrating (<ref>) over the box from the origin to [q_tinj]_nj, producer i's tax is ϕ_ti - ϕ_ti|_[ q_tinj]_nj = [ 0 ]_nj = ∑_n ( E_n ( x_tn) - E_n ( x_tn - x_tin) ), ∀ t ∈ℕ, t ≤ T, i ∈ℐ. It is worth mentioning that producer i's tax in interval t is its marginal contribution to the negative externality due to pollution, i.e., it is a so-called Pigouvian tax <cit.>. Adding the tax changes the producers' profit-maximizing condition into (<ref>): ∂ C_inj/∂ q_tinj|_q_tinj=q_tinj^* + ω_tinj^* + ∂ E_n/∂ x_tn∂ x_tinj/∂ q_tinj|_q_tinj=q_tinj^* = P_tn^*([k_inj]_inj), ∀ t∈ℕ,t≤ T, n∈𝒩, i∈ℐ, j∈𝒥. Observe that the term ∂ E_n/∂ x_tn∂ x_tinj/∂ q_tinj is now on the left-hand side of the equation since, due to the tax, it is a cost to the generator. While it may appear that only producers pay the tax, part of the burden also falls on consumers: prices are higher and consumer surplus is reduced, even though it is the producers, who control the taxed pollution source, that make the payment. § GENERATION CAPACITY INVESTMENT IN WHOLESALE POWER MARKETS §.§ Socially optimal generation capacity investment In this section, we formulate the problem of socially optimal generation capacity investment. We suppose that the existing capacity at bus n of producer i's technology j is K_inj and that the incremental generation capacity is Δ k_inj. The increment must be non-negative, Δ k_inj≥ 0 ↔τ_inj ∀ n ∈𝒩, i ∈ℐ, j ∈𝒥, and the capacity at bus n of producer i's technology j is the sum of the existing capacity and the increase, k_inj=K_inj+Δ k_inj. Increasing generation capacity incurs an investment cost linked to the technology used. We denote it by ℭ_inj(Δ k_inj), the investment cost at bus n for producer i using technology j. Note that this function is non-negative, non-decreasing, and convex in Δ k_inj.
This cost function should be piece-wise first and second differentiable with jump discontinuities. Social welfare maximization over the investment timescale involves summing the social welfare generated in the spot market intervals minus the investment costs; it can be written as 𝔚 = max_[ Δ k_inj]_inj W ( [ k_inj]_inj) - ∑_nijℭ_inj( Δ k_inj) subject to the constraint defined by equation (<ref>) and the relationship provided in equation (<ref>). Accordingly, given the modified price in (<ref>), the condition for maximizing social welfare is given by (<ref>): ∂ℭ_inj/∂Δ k_inj|_Δ k_inj = Δ k_inj^* + τ_inj^* = ∂ W/∂ k_inj|_[ k_inj]_i = [ k_inj^* ]_i, which results in ( ∑_t=1^T ( ∂ C_inj/∂ q_tinj + ∂ E_n/∂ x_tn∂ x_tinj/∂ q_tinj) |_[ q_tinj]_inj = [ q_tinj^* ]_inj∂ q_tinj^*/∂ k_inj + ∂ℭ_inj/∂Δ k_inj) |_[ Δ k_inj]_i = [ Δ k_inj^* ]_i + τ_inj^* = ∑_t=1^T P_tn^* ∂ q_tinj^*/∂ k_inj|_[ Δ k_inj]_inj = [ Δ k_inj^* ]_inj, ∀ n ∈𝒩, i ∈ℐ, j ∈𝒥. §.§ Strategic generation capacity investment Producers may manipulate spot market prices by declaring higher generation costs or withholding capacity, which leads to reduced generation levels and increased prices <cit.>, <cit.>. In Section <ref>, we assumed that producers do not engage in such behavior, so that generation levels are the competitive ones. This assumption is reasonable given the policies preventing strategic behavior in critical infrastructures such as power systems <cit.>. However, while the pollution-adjusted price in (<ref>) does not depend on [q_tn^†]_tn, it can be changed through [k_inj]_inj. In a Cournot equilibrium, each producer maximizes its profit according to (<ref>): [𝒴_i]_i=[max_[Δ k_inj]_nj Y_i([k_inj]_inj)-∑_njℭ_inj(Δ k_inj)]_i subject to the constraint (<ref>) given (<ref>). Thus, from (<ref>), given the tax in (<ref>), the producers' profit-maximizing condition over the investment period in a Cournot equilibrium is ∂ℭ_inj/∂Δ k_inj|_Δ k_inj=Δ k_inj^# + τ_inj^# = ∂ Y_i/∂ k_inj|_[k_inj]_i=[k_inj^#]_i, which results in ( ∑_t=1^T ( ∂ C_inj/∂ q_tinj + ∂ E_n/∂ x_tn∂ x_tinj/∂ q_tinj) |_[ q_tinj]_inj = [ q_tinj^* ]_inj∂ q_tinj^*/∂ k_inj + ∂ℭ_inj/∂Δ k_inj) |_[ Δ k_inj]_i = [ Δ k_inj^#]_i + τ_inj^# = ∑_t=1^T ( ∑_n^'∂ P_tn^'^*/∂ k_inj q_tin^'^* + P_tn^* ∂ q_tinj^*/∂ k_inj) |_[ Δ k_inj]_inj = [ Δ k_inj^#]_inj, ∀ n ∈𝒩, i ∈ℐ, j ∈𝒥. The # superscript denotes the strategic values of the optimization variables in the producers' profit maximization (<ref>) and of the KKT multiplier of the investment constraint (<ref>). Compared with the social welfare maximization condition (<ref>), this condition includes the additional term ∑_n^'∂ P_tn^'^*/∂ k_inj q_tin^'^*. Hence, strategic capacity increases differ from optimal ones, i.e. Δ k_inj^#≠Δ k_inj^* ∀ n∈𝒩, i∈ℐ, j∈𝒥. §.§ Proposed subsidy scheme In this section, we propose a subsidy scheme aligning producers' profit maximization with social welfare over the investment period, with subsidies provided in the spot market. Denoting the subsidy to producer i in interval t by χ_ti, the maximum profits of all producers in a Cournot equilibrium increase by the subsidy amount. The producers' maximum profit over the investment period is then given by the same maximization problem as in Section <ref>, with the objective function increased by the subsidy. Consequently, the producers' profit-maximizing condition over the investment period in a Cournot equilibrium is
( ∑_t=1^T ( ∂ C_inj/∂ q_tinj + ∂ E_n/∂ x_tn∂ x_tinj/∂ q_tinj - ∂χ_ti/∂ q_tinj) |_[ q_tinj]_inj = [ q_tinj^* ]_inj∂ q_tinj^*/∂ k_inj + ∂ℭ_inj/∂Δ k_inj) |_[ Δ k_inj]_i = [ Δ k_inj^* ]_i + τ_inj^* = ∑_t=1^T ( ∑_n^'∂ P_tn^'^*/∂ k_inj q_tin^'^* + P_tn^* ∂ q_tinj^*/∂ k_inj) |_[ Δ k_inj]_inj = [ Δ k_inj^* ]_inj, ∀ n ∈𝒩, i ∈ℐ, j ∈𝒥. Here, the * superscript denotes expected profit maximization under the subsidy scheme. With the same logic as for the tax scheme, comparing (<ref>) to the social welfare maximization condition (<ref>), we design the subsidy so that the two conditions align: ∂χ_ti/∂ q_tinj|_[q_tinj]_inj = [ q_tinj^* ]_inj∂ q_tinj^*/∂ k_inj = - ∑_n^'∂ P_tn^'^*/∂ k_inj q_tin^'^*, ∀ t ∈ℕ, t ≤ T, n ∈𝒩, i ∈ℐ, j ∈𝒥. The subsidy must account for the additional term in comparison to (<ref>): ∂χ_ti/∂ q_tinj|_[ q_tinj]_inj = [ q_tinj^* ]_inj∂ q_tinj^*/∂ k_inj = ∑_n^'( ∂ U_t/∂ q_tn^'∂ q_tin^' j^*/∂ k_inj - ∂( P_tn^'^* q_tin^' j^* )/∂ k_inj), ∀ t ∈ℕ, t ≤ T, n ∈𝒩, i ∈ℐ, j ∈𝒥. Upon integration with respect to k_inj, iterating over the j values and finally over the n values, we obtain χ_ti - χ_ti|_[ q_tinj]_nj = [ 0 ]_nj = ∑_n ( U_t ( [ q_tn]_n ) - U_t ( [ q_tn - q_tin]_n ) - P_tn^* q_tin) ∀ t ∈ℕ, t ≤ T, i ∈ℐ. Notice that the subsidy in interval t equals the producer's marginal contribution to consumer surplus. From the updated [ Y_i ]_i ( [ k_inj]_inj) with the subsidy, the producers' maximum profit, including the tax and subsidy, is max_[ q_tinj, r_tinj]_tnj∑_t=1^T ( U_t ( [ q_tn]_n ) - U_t ( [ q_tn - q_tin]_n ) - ∑_n ( ∑_j C_inj( q_tinj) + E_n ( x_tn) - E_n ( x_tn - x_tin) ) - ϕ_ti|_[ q_tinj]_nj = [ 0 ]_nj + χ_ti|_[ q_tinj]_nj = [ 0 ]_nj) subject to constraints (<ref>) to (<ref>). Notice that all terms involving producer i's decisions [q_tinj]_nj are contained in the maximum social welfare problem in (<ref>). This ensures that producers maximize social welfare under this scheme. In comparison with <cit.>, we do not allow false declaration of the generation cost function; yet both our tax and subsidy schemes are equivalent to theirs. This prevents the potential for producers to exercise strategic behavior through capacity withholding.
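To make the information requirement of the scheme concrete, the sketch below (not from the paper; function and variable names are ours) evaluates the interval-t tax and subsidy from bus-level aggregates only: each producer's pollution x_tin and generation q_tin, the externality functions E_n, the aggregate utility U_t and the prices P_tn. The offsets ϕ_ti|_0 and χ_ti|_0 are set to zero, and the sum over n in the subsidy expression is read as applying to the price-revenue term.

from typing import Callable, List

def pigouvian_tax(E: List[Callable[[float], float]],
                  x_bus: List[float],
                  x_producer_bus: List[float]) -> float:
    """Tax of one producer in one interval: sum_n [E_n(x_tn) - E_n(x_tn - x_tin)]."""
    return sum(E_n(x_n) - E_n(x_n - x_in)
               for E_n, x_n, x_in in zip(E, x_bus, x_producer_bus))

def surplus_subsidy(U_t: Callable[[List[float]], float],
                    q_bus: List[float],
                    q_producer_bus: List[float],
                    price_bus: List[float]) -> float:
    """Subsidy of one producer in one interval:
    U_t(q) - U_t(q - q_i) - sum_n P_tn q_tin, i.e. its marginal contribution
    to consumer surplus."""
    q_without = [q_n - q_in for q_n, q_in in zip(q_bus, q_producer_bus)]
    revenue = sum(p_n * q_in for p_n, q_in in zip(price_bus, q_producer_bus))
    return U_t(q_bus) - U_t(q_without) - revenue

# Toy illustration (all numbers are arbitrary, not taken from the paper's examples):
E = [lambda x: 1.0 * x]                          # linear externality at a single bus
U = lambda q: 10.0 * q[0] - 0.5 * q[0] ** 2      # aggregate utility at that bus
tax = pigouvian_tax(E, x_bus=[3.0], x_producer_bus=[2.0])
sub = surplus_subsidy(U, q_bus=[6.0], q_producer_bus=[2.0], price_bus=[4.0])
print(f"tax = {tax:.2f}, subsidy = {sub:.2f}")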
§ PROPERTIES OF THE PROPOSED SCHEME

Based on ϕ_ti|_[ q_tinj]_nj = [ 0 ]_nj and χ_ti|_[ q_tinj]_nj = [ 0 ]_nj, the proposed scheme exhibits the following properties: * Individually Rational: If the social welfare in interval t does not decrease upon producer i's participation, then the proposed scheme can ensure the individual rationality of producers. Producer i's revenue is adequate if ( χ_ti - ϕ_ti) |_[ q_tinj]_nj = [ 0 ]_nj ≥ U_t ( [ q_tn - q_tin]_n ) - U_t ( [ q_tn]_n ) + ∑_n ( ∑_j C_inj( q_tinj) + E_n ( x_tn) - E_n ( x_tn - x_tin) ). Choosing ( χ_ti - ϕ_ti) |_[ q_tinj]_nj = [ 0 ]_nj ≥ 0 satisfies this condition for all possible values of [ q_tinj]_inj because U_t ( [ q_tn]_n ), E_n ( x_tn), and x_tn( [ q_tinj]_tn) are non-decreasing and C_inj( q_tinj) is non-negative in terms of their variables. * Robust to Information Asymmetry: The scheme is designed to depend only on the aggregate values q_tin and x_tin, i.e. ϕ_ti|_[ q_tinj]_nj = [ 0 ]_nj and χ_ti|_[ q_tinj]_nj = [ 0 ]_nj, which makes the proposed scheme robust to information asymmetry. * Non-discriminatory: If ϕ_ti|_[ q_tinj]_nj = [ 0 ]_nj ≡ ϕ_t|_[ q_tinj]_nj = [ 0 ]_nj and χ_ti|_[ q_tinj]_nj = [ 0 ]_nj ≡ χ_t|_[ q_tinj]_nj = [ 0 ]_nj for every producer i, the functional forms of ϕ_ti and χ_ti are the same for every producer. We may also choose ϕ_ti|_[ q_tinj]_nj = [ 0 ]_nj and χ_ti|_[ q_tinj]_nj = [ 0 ]_nj to satisfy both conditions above simultaneously. * Incentive Compatible with Respect to Direct Costs: The proposed scheme ensures that producers declare their costs and parameters accurately, e.g. generation and investment costs, generation capacity, ramping limits, and availability factor. Moreover, the scheme aligns profit maximization with social welfare maximization while relying only on data the ISO already utilizes. It is worth mentioning that the ramp limits, through ρ_tinj and σ_tinj, affect producers' cost declarations, and that the pollution externality E_n ( x_tn) enters the profit maximization problem (<ref>) via the tax. On the other hand, weakening the pollution-externality term can increase the probability of pollution levels being manipulated; thus, the ISO must measure pollution levels in this condition. Additionally, if the ISO incorporated pollution in the market clearing price (<ref>), producers could strategically increase E_n ( x_tn) and thereby manipulate the price (<ref>) and their profits (<ref>), which would conflict with the goal of maximizing social welfare. * Not Budget Balanced in General: If the scheme results in a net subsidy in any interval t, i.e., ∑_i ∈ℐ( χ_ti - ϕ_ti) ≥ 0, the ISO may face funding issues, which can be resolved by adjusting fixed fees for market participants. Additionally, for every producer i, ( ϕ_ti - χ_ti) |_[ q_tinj]_nj = [ 0 ]_nj can be treated as a fixed fee. However, according to (<ref>), it needs to be set to a reasonable value in order for the scheme to retain individual rationality. * Producers' Profits Independent of Price: From (<ref>), producers' profits under the scheme are independent of the price, which makes the scheme compatible with price caps and overcomes their drawbacks.

§ ANALYTICAL EXAMPLE

In this section, we demonstrate the developed theory through a practical example. First, we examine the spot market timescale, comparing optimal spot market generation with competitive market clearing. We consider a simplified scenario of a one-bus system, denoted as 𝒩={1}, with two producers ℐ={1,2} and one technology 𝒥={1} over a planning horizon of T=3. Since there are no transmission lines in a one-bus system, there are no constraints on power flow capacities (<ref>). Table <ref> shows the generation cost function, pollution factors, and existing generation capacities for each producer. Firstly, we neglect investment in generation capacity, which means the generation capacities are maintained at their existing levels. Additionally, we set A_ti11=1, R_i11=1, ∀ t ∈{1,2,3}, i ∈{1,2} to simplify the generation capacity constraints and to eliminate the ramping constraints. Under this assumption, the relative generation r_ti11 is eliminated by merging constraints (<ref>) and (<ref>) into 0 ≤ q_ti11≤ k_i11 ∀ t ∈{1,2,3}, i ∈{1,2}. Moreover, the negative externality function that represents pollution is expressed as E_1 ( x_t1) = x_t1. Finally, consumer utility is defined as Ũ_t1( d_t1 ) = c_t d_t1 - d_t1^2/2 ∀ t ∈{1,2,3}, where c_1=6, c_2=12, c_3=20. From the power balance constraint (<ref>), d_t1 = q_t1≥ 0, ∀ t∈{1,2,3}. Thus, the aggregate consumer utility is U_t1(q_t1)=Ũ_t1(q_t1), ∀ t∈{1,2,3}. Figure <ref> represents the optimal spot market generation for every interval. The marginal utility is c_t-q_t1 ∀ t∈{1,2,3}, shown as a solid blue line in all the figures.
Also, the marginal costs including the pollution damage for producers i=1 and i=2 are 6 + ω_t111^* and 4 + ω_t211^* (∀ t ∈{ 1,2,3 }), respectively. Considering costs within each generator's capacity limits, i.e., 0<q_ti11<k_i11, the associated KKT multipliers are ω_ti11^*=0, ∀ t ∈{ 1,2,3 }, i ∈{ 1,2 }. According to the social welfare maximization in (<ref>), which leads to a least-cost dispatch, producer i=2 is prioritized for generation due to its lower cost, with i=1 generating only if i=2 has reached maximum capacity. The marginal cost curves for i=2 and i=1 exhibit jumps at q_t1=k_211 and q_t1=k_211+k_111, respectively, which are characterized by the KKT multipliers ω_t211^* and ω_t111^*. Pollution damage can be depicted individually for each producer since the marginal pollution damage ∂ E_1/∂ x_t1 is independent of x_t1; it is represented by the green shaded region. Optimal generation, determined at the intersection of the marginal cost and utility curves, gives the maximum social welfare, depicted by the lightly shaded region. The detailed information comprising total generation, pollution damage, and maximum social welfare in each interval is summarized in Table <ref>. Figure <ref> illustrates competitive market clearing, where pollution damage costs are ignored; consequently, dispatch relies only on the marginal generation cost. For producers i=1 and i=2, the marginal costs are ∂ C_i11/∂ q_ti11 + ω_ti11^†, ∀ i ∈{ 1,2 }, ∀ t ∈{ 1,2,3 }. Thus, producer i=1 is prioritized in this case, which determines the system marginal cost curve shown in the figures: the dotted line represents producer i=1, whose jump at q_t1=k_111 from 2 to 4 is characterized by the KKT multiplier ω_t111^†, and the dashed line represents producer i=2, whose jump at q_t1=k_211 is characterized by the KKT multiplier ω_t211^†. Moreover, pollution damage is shaded darkly in each time step. As in the optimal market, the cleared generation is determined at the intersection of the marginal cost and utility curves. The social welfare corresponds to the lightly shaded region, with the price P_t1^† shown as a solid line. It is worth mentioning that social welfare is negative for t=1 and therefore cannot be represented. The total generation, prices, pollution damage, and social welfare are summarized in Table <ref>. The competitive generation and pollution damage are not less than the optimal ones: q_t11^†≥ q_t11^* and E_1 ( x_t1^†) ≥ E_1 ( x_t1^* ) ∀ t ∈{ 1,2,3 }. However, the competitive social welfare is not greater than the optimal one. The taxes levied on producers i=1 and i=2 to align their profit maximization with the social welfare optimum are obtained from (<ref>) and reported in Table <ref>; they coincide with the pollution damage in Figure <ref>. Also, the price following the tax from (<ref>) aligns with the optimal spot market generation, labeled as Optimal. The price P_t1^* is shown as a solid line in Figure <ref> and is not lower than the price under competitive market clearing. According to the information presented in Table <ref>, both producers have the same marginal investment cost, and their investment cost functions are linear. Since producer i=2 has a lower marginal cost including pollution damage, only this producer increases its generation capacity, while k_111 = K_111 = 4 remains unchanged. In order to compute the optimal generation capacity investment, we first represent the maximum social welfare from the spot market as a function of the generation capacity k_211. Social welfare is the area between the marginal utility and marginal cost curves.
Figure <ref> and Table <ref> provide the generation levels for each producer and the interval prices. With Δ k_211 small enough, the marginal utility curve intersects the marginal cost curve as in Figure <ref>. So, the total social welfare in the spot market is W ( K_111, k_211) = 68 + 14 × k_211 - 0.5 × k_211^2. The optimal increase in generation capacity from the social welfare maximization condition in (<ref>) is Δ k_211^* = 2. Figure <ref> shows the market clearing prices resulting from the optimal investment in capacity, indicating the marginal utility, marginal cost, social welfare, and price. The striped region below the price line represents producer i=2's profit from (<ref>) after tax. Table <ref> tabulates each producer's generation, the price from (<ref>), the tax for producer i=1, the maximum profit of producer i=2, and the maximum social welfare in the spot market. Now, we determine the strategic generation investment by producer i=2. Firstly, we represent producer 2's profit as a function of the generation capacity k_211, which is the area between the price line and the segments of the marginal cost curve for producer i=2. Table <ref> provides the generation levels for each producer and the interval prices. The total profit in the spot market for producer i=2 and the profit maximization condition are Y_2 ( K_111 , k_211) = 14 × k_211 - k_211^2 and 9 + τ_211^# = 14 - 2 ×( K_211 + Δ k_211). Given Δ k_211≥ 0 and the convexity of the problem, the strategic increase in generation capacity is Δ k_211^# = 0 with KKT multiplier τ_211^# = -1. The strategic capacity increase is therefore not equal to the optimal one, i.e. Δ k_211^#≠Δ k_211^*. Table <ref> lists each producer's generation, the price from (<ref>), the tax for producer i=1, the maximum profit of producer i=2, and the maximum social welfare in the spot market. The subsidy granted to producer i=2 is χ_t2( q_t211) = 2 + χ_t2|_ q_1211 = 0 if t=1, 0.5 ( k_211^*)^2 + χ_t2|_ q_2211 = 0 if t=2, 0.5 ( k_211^*)^2 + χ_t2|_ q_3211 = 0 if t=3. With the subsidy set according to the corresponding generation levels, the optimal capacity investment is attained. Table <ref> indicates producer i=2's variable subsidy portion χ_t2 - χ_t2|_q_t211=0 for each interval. § NUMERICAL EXPERIMENTS In this section, we evaluate the effectiveness of the proposed tax-subsidy scheme on the IEEE 24-bus test system, which has 33 generators and 38 lines. The system comprises ten producers utilizing six types of technologies, namely renewable, hydro-power, nuclear, coal, fuel oil, and gas, which are distinguished by their cost function values. Before investment, the total installed generation capacity and the load consumption in the first year are equal to 3405 MW and 2850 MW, respectively. Also, the cost functions of the generators and the utility functions of the loads are linearized using piece-wise linear functions consisting of ten segments each. Similarly, the externality cost function is assumed to be linear, with the externality coefficients for renewable, hydro-power, nuclear, coal, fuel oil, and gas assigned to 0, 0, 0, 90, 95, and 110 $/MWh, respectively. The reason for choosing these externality coefficients is to adjust the net cost functions of the technologies into the following increasing order: renewable, hydro-power, nuclear, coal, gas, and fuel oil. These numbers reflect the contribution of each producer in a specific area to pollution costs. These costs, imposed on each producer, can be significant because they may change the net merit order in the spot market.
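To see how such externality adders can reshuffle the merit order, the toy snippet below sorts a set of units by marginal cost with and without the adders above. Only the externality coefficients are taken from the text; the base marginal costs are hypothetical placeholders, since the generator cost data of the case study are not reproduced here.

# Hypothetical base marginal costs ($/MWh); externality adders as given in the text.
units = {
    "renewable": (5, 0), "hydro": (10, 0), "nuclear": (15, 0),
    "gas": (40, 110), "coal": (45, 90), "fuel_oil": (60, 95),
}

by_cost = sorted(units, key=lambda u: units[u][0])
by_net_cost = sorted(units, key=lambda u: units[u][0] + units[u][1])
print("merit order (cost only):      ", by_cost)
print("merit order (cost + adder):   ", by_net_cost)
# With these placeholders, gas and coal swap places once the adders are included,
# and the net order matches the target order quoted in the text.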
Additionally, the coefficient of the utility function and the power consumption are assumed to increase by 4 and 2.5 percent per time step, respectively. Furthermore, the rated power limits of the lines are reduced to 70 percent of their nominal values in order to experience line congestion in our simulations. Also, the generators are assigned to the ten producers in such a way that each producer owns different technologies and the total capacity of each producer's generators ranges from 6% to 14% of the total installed generation capacity. It is worth mentioning that the annual investment cost for each technology is obtained based on its lifetime and total investment costs, which include both fixed and variable investment costs <cit.>. We assume that the investment cost per MW is the same for each technology. Finally, the simulation is conducted over 20 time steps. As the utility functions are linear, we can transform the optimization problem related to utility function maximization with constraints (<ref>) to (<ref>) into a set of linear equations. This set of equations consists of the primal and dual constraints together with linearized complementary slackness equations. We can determine the optimal values of the load consumption and the Lagrangian multipliers corresponding to constraints (<ref>) to (<ref>) by solving this set of equations, which ultimately yields the price at each node of the power system. Similarly, the optimization problem linked to maximizing profit via capacity investment can be converted into a set of optimality conditions, which enables us to derive the amount of capacity investment for the strategic investment case study. The aforementioned formulation is described in more detail in the Appendix. Finally, we implement the proposed model in Python along with the Gurobi solver. Figure <ref>a shows the average nodal price and the social welfare for both the optimal and the competitive spot market models over the twenty time steps. As discussed in Section <ref>, the optimal spot market price surpasses that of the competitive spot market, and likewise the optimal social welfare is greater than under the competitive model. Our results reveal that, in the optimal model, total social welfare experiences a 0.23% increase compared to the competitive model. Moreover, Figure <ref>c shows the tax levied on the two producers under the proposed tax-subsidy scheme as a function of time. The figure shows that, as the load increases, the taxes levied on the two producers evolve very differently. The reason for the difference is that producer 2 operates two nuclear units and one coal unit, while producer 4 operates two renewable units and one fuel oil unit. Given the higher externality cost coefficient of the fuel oil unit compared to the coal unit, producer 4 experiences a larger change in its tax over time than producer 2. Figure <ref>a illustrates the average nodal electricity price and the social welfare for both the optimal and the strategic investment models. As predicted in Section <ref>, the electricity price obtained from the optimal investment model is higher than that under the strategic investment model. Also, the annual social welfare in the optimal investment scenario is higher than in the strategic investment one. Our analysis shows that under optimal investment social welfare is 4.36% higher than under strategic investment. The total generation investments in the optimal and strategic models are 256.57 MW and 1937.31 MW, respectively.
Also, Figure <ref>c displays the profit of producer 10 at each time step. It is clear that the profit of producer 10 increases with the increases of loads in the optimal investment scenario. However, in strategic investment, the profit does not exhibit a similar increase with load consumption due to the strategic game dynamics inherent in the strategic investment model. Additionally, our findings indicate that the total profit of all producers in the optimal investment scenario is greater than the strategic investment by 13.30%. Finally, Figure <ref> illustrates the subsidy paid to producers 2 and 4 at each time step in the optimal investment model. As depicted in the chart, the subsidy value increases with some fluctuation over time. As we observed previously, the profit of producers without our proposed subsidy does not exhibit a meaningful trend. With this subsidy, the profit of each producer might increase with the increment of loads. Thus, the profit of each producer would increase in the final optimal investment framework. § CONCLUSION We identified two main challenges about the growing penetration of renewable energy sources in electricity markets. Firstly, the current competitive market mechanisms cannot consider pollution externalities accurately. We proposed a Pigouvian tax on producers as the function of their externalities that the tax aligns producers' profit maximization with the social welfare maximization problem. This tax leads to encouraging preferences for renewable energy sources in market clearing processes. Secondly, there is a lack of motivation for producers to invest in renewable due to their lower operational costs resulting in reduced electricity prices and ultimately reducing the total profits of producers. To tackle this, we proposed subsidies to producers equivalent to their marginal contribution to consumer surplus. This approach ensures that their profits remain unaffected by electricity prices, and ultimately incentivizes them to social-welfare maximizing investment in renewable energy capacity. Crucially, these proposed tax and subsidy mechanisms can be calculated using existing data within the ISO. apalike § APPENDIX In this section, we developed our proposed model for the numerical experiments for linear utility, generation cost, and externality functions. These functions are achievable by approximating exact functions with their piece-wise linear approximations. Firstly, we develop the mathematical formulation of the competitive electricity market. The competitive electricity market problem is formulated as (<ref>)-(<ref>), where (<ref>) represents the total social welfare of participants without externalities. In this equation, P_d,t^D and P_g,t^G represent the consumption level of demand d at time step t and the generation level of generator g at time step t, respectively. Additionally, b_d,t^D and b_g,t^G are the coefficients of the demand utility function and the generation cost function, respectively. It is worth mentioning that we assume Δ P_g^Gmax is equal to zero for all g ∈𝒩_G in the competitive and optimal markets, which means the investment level is considered to be zero. This variable will be used in the optimal and strategic capacity investment, which leads to an increase in the capacity of generation units in the planning horizon. (<ref>) and (<ref>) represent the limitations of generation output level and consumption level and their associated Lagrange multipliers, respectively. 
Indeed, the maximum level of consumption for demand d at time step t and the generation capacity of generator g at time step t are represented by P_d,t^Dmax and P_g^Gmax, respectively. The power balance equation and its associated Lagrange multiplier, i.e., electricity price, is formulated as (<ref>), and finally, the line flow limits are shown by (<ref>) where H_g,l^G and H_d,l^D are the Power Transfer Distribution Factors (PTDFs) for generation and demand, respectively. Finally, F_l^max depicts the rated power of line l. W ( Δ P_g^Gmax) = max∑_t ∈ℕ( ∑_d ∈𝒩_D b_d,t^D P_d,t^D - ∑_g ∈𝒩_G b_g,t^G P_g,t^G) subject to 0 ≤ P_g,t^G≤ P_g^Gmax + Δ P_g^Gmax↔ ( μ_g,t^Gmin, μ_g,t^Gmax ), ∀ g ∈𝒩_G, ∀ t ∈ℕ, 0 ≤ P_d,t^D≤ P_d,t^Dmax↔ ( μ_d,t^Dmin, μ_d,t^Dmax ), ∀ d ∈𝒩_D, ∀ t ∈ℕ, ∑_d ∈𝒩_D P_d,t^D = ∑_g ∈𝒩_G P_g,t^G↔ ( λ_t ), ∀ t ∈ℕ, -F_l^max≤∑_g ∈𝒩_G H_g,l^G P_g,t^G - ∑_d ∈𝒩_D H_d,l^D P_d,t^D≤ F_l^max↔ ( μ_l,t^Lmin, μ_l,t^Lmax ), ∀ l ∈𝒩_L, ∀ t ∈ℕ. On the other hand, the optimal market problem is represented with the objective function formulated as (<ref>) subject to (<ref>)-(<ref>), which considers negative externalities through the piece-wise linear externality functions with coefficient e_g,t^G for the generator g at time step t. W ( Δ P_g^Gmax) = max∑_t ∈ℕ( ∑_d ∈𝒩_D b_d,t^D P_d,t^D - ∑_g ∈𝒩_G( b_g,t^G + e_g,t^G) P_g,t^G) Now, we need to develop a formulation to compute the total consumer utility, which is used for determining the electricity price by another optimization problem or a set of equations. This problem is formulated with the objective function as (<ref>) subject to (<ref>)-(<ref>). U_t^PP = max∑_d b_d,t^D P_d,t^D The dual problem of maximizing total consumer utility is as follows: U_t^DP = min∑_d,t P_d,t^Dmaxμ_d,t^Dmax + ( ∑_g ∈𝒩_G P_g,t^G) λ_t - ∑_l ∈𝒩_L( F_l^max + ∑_g ∈𝒩_G H_g,l^G P_g,t^G) μ_l,t^Lmax + ∑_l ∈𝒩_L( F_l^max - ∑_g ∈𝒩_G H_g,l^G P_g,t^G) μ_l,t^Lmin subject to μ_d,t^Dmin + μ_d,t^Dmax + λ_t - ∑_l ∈𝒩_L H_d,l^D( μ_l,t^Lmin, μ_l,t^Lmax) = b_d,t^D↔ ( P_d,t^D ), ∀ d ∈𝒩_D, ∀ t ∈ℕ, μ_d,t^Dmin≤ 0, μ_d,t^Dmax≥ 0, ∀ d ∈𝒩_D, ∀ t ∈ℕ, μ_l,t^Lmin≤ 0, μ_l,t^Lmax≥ 0, ∀ l ∈𝒩_L, ∀ t ∈ℕ. If we assume P_g,t^G is a parameter, then the dual problem is a Linear Programming (LP) problem, and U_t^DP and U_t^PP are equal at each time step. Thus, if we consider the equations of the primal problem of maximizing total consumer utility, i.e., (<ref>)-(<ref>), with the equations of the dual problem, i.e., (<ref>)-(<ref>), plus the zero duality gap equations, i.e., U_t^PP = U_t^DP, we would have a set of equations leading to a solution for maximizing total consumer utility with its Lagrange multiplier. Based on the fact that P_g,t^G is a variable in our problem, this set of equations is non-linear. This system of equations can be linearized by removing the strong-duality equations and considering the linearized complementary slackness condition with binary variables as defined by (<ref>)-(<ref>), where γ is an arbitrary positive number which is greater than one. In practice, γ equal to two has an appropriate performance with respect to getting a solution in a reasonable time. 
P_d,t^D≤γ P_d,t^Dmax( 1-z_d,t^Dmin), ∀ d ∈𝒩_D, ∀ t ∈ℕ, μ_d,t^Dmin≥ - γ b_d,t^D z_d,t^Dmin, ∀ d ∈𝒩_D, ∀ t ∈ℕ, P_d,t^D≥ P_d,t^Dmax - γ P_d,t^Dmax( 1-z_d,t^Dmax), ∀ d ∈𝒩_D, ∀ t ∈ℕ, μ_d,t^Dmax≤γ b_d,t^D z_d,t^Dmax, ∀ d ∈𝒩_D, ∀ t ∈ℕ, ∑_g ∈𝒩_G H_g,l^G P_g,t^G - ∑_d ∈𝒩_D H_d,l^D P_d,t^D≤ -F_l^max + ( γ + 1 ) F_l^max( 1-z_l,t^Lmin), ∀ l ∈𝒩_L, ∀ t ∈ℕ, μ_l,t^Lmin≥ - γmax_d ∈𝒩_D( b_d,t^D) z_l,t^Lmin, ∀ l ∈𝒩_L, ∀ t ∈ℕ, ∑_g ∈𝒩_G H_g,l^G P_g,t^G - ∑_d ∈𝒩_D H_d,l^D P_d,t^D≥ F_l^max - ( γ + 1 ) F_l^max( 1-z_l,t^Lmax), ∀ l ∈𝒩_L, ∀ t ∈ℕ, μ_l,t^Lmax≤γmax_d ∈𝒩_D( b_d,t^D) z_l,t^Lmax, ∀ l ∈𝒩_L, ∀ t ∈ℕ, z_d,t^Dmin, z_d,t^Dmax∈{ 0,1 }, ∀ d ∈𝒩_D, ∀ t ∈ℕ, z_l,t^Lmin, z_l,t^Lmax∈{ 0,1 }, ∀ l ∈𝒩_L, ∀ t ∈ℕ. By adding (<ref>)-(<ref>) to the competitive electricity markets or optimal electricity markets, we can obtain a Mixed Integer Linear Programming (MILP) problem, which enables us to directly compute the Lagrange multiplier of competitive electricity markets or optimal electricity markets. According to the value of the Lagrange multiplier, the electricity price at each node, each generator node, and each demand node is represented by (<ref>), (<ref>), and (<ref>), respectively. Moreover, H_n,l^N shows the PTDF associated with the power changes in bus n. Pr_n,t^N = λ_t - ∑_l ∈𝒩_L H_n,l^N( μ_l,t^Lmin + μ_l,t^Lmax), ∀ n ∈𝒩_N, ∀ t ∈ℕ, Pr_g,t^G = λ_t - ∑_l ∈𝒩_L H_g,l^G( μ_l,t^Lmin + μ_l,t^Lmax), ∀ g ∈𝒩_G, ∀ t ∈ℕ, Pr_d,t^D = λ_t - ∑_l ∈𝒩_L H_d,l^D( μ_l,t^Lmin + μ_l,t^Lmax), ∀ d ∈𝒩_D, ∀ t ∈ℕ. Now, the capacity investment problem, i.e., optimal investment and strategic investment, will be presented using the model we developed earlier. The optimal capacity investment problem is represented by maximizing the objective functions as shown by (<ref>) subject to (<ref>)-(<ref>), (<ref>)-(<ref>), and (<ref>). In (<ref>), a limit on the capacity investment for generator g is shown by P_g^Δ Gmax. ∑_t ∈ℕ( ∑_d ∈𝒩_D b_d,t^D P_d,t^D - ∑_g ∈𝒩_G( b_g,t^G + e_g,t^G) P_g,t^G) - ∑_g ∈𝒩_G c_g^GcapΔ P_g^Gmax 0 ≤Δ P_g^Gmax≤ P_g^Δ Gmax↔ ( μ_g^Δ Gmin, μ_g^Δ Gmax ), ∀ g ∈𝒩_G On the other hand, the strategic investment problem is a bi-level optimization problem with maximizing the objective function represented by (<ref>) subject to (<ref>), (<ref>)-(<ref>), (<ref>)-(<ref>), and (<ref>)-(<ref>). Δ P_g^Gmax = 0 ≤Δ P_g^Gmax≤ P_g^Δ Gmaxargmax[ ∑_t ∈ℕ( Pr_g,t^G - b_g,t^G - e_g,t^G) P_g,t^G - c_g^GcapΔ P_g^Gmax], ∀ g ∈𝒩_G. In order to have a single-level problem, we need to transform the optimization problem with the objective function (<ref>) and constraints (<ref>) and (<ref>) into a system of equations using a similar approach as before. Because Pr_g,t^G and P_g,t^G are not decision variables for the problem of maximizing the profit of generators, they are determined after knowing the amount of capacity investment by solving the optimal markets problem. Thus, the dual problem of maximizing the profit of generators is represented by minimizing (<ref>) subject to (<ref>)-(<ref>). min∑_t ∈ℕ( P_g^Gmax - P_g,t^G) μ_g,t^Gmax + P_g^Δ Gmaxμ_g^Δ Gmaxμ_g^Δ Gmin + μ_g^Δ Gmax - ∑_t ∈ℕμ_g,t^Gmax = -c_g^Gcap↔ ( Δ P_g^Gmax ), ∀ g ∈𝒩_G, μ_g^Δ Gmin≤ 0, μ_g^Δ Gmax≥ 0, ∀ g ∈𝒩_G, μ_g,t^Gmax≥ 0, ∀ g ∈𝒩_G, ∀ t ∈ℕ. We could similarly derive a system of equations that gives us the solution for maximizing the profit of generators in capacity investment by considering the linearized complementary slackness equations represented by (<ref>) to (<ref>). 
As a result of these linearizations, the strategic investment problem has the objective function represented by (<ref>) subject to constraints comprising (<ref>), (<ref>)-(<ref>), (<ref>)-(<ref>), (<ref>), and (<ref>)-(<ref>). Δ P_g^Gmax≤γ P_g^Δ Gmax( 1-z_g^Δ Gmin), ∀ g ∈𝒩_G, Δ P_g^Gmax≥ P_g^Δ Gmax - γ P_g^Δ Gmax( 1-z_g^Δ Gmax), ∀ g ∈𝒩_G, μ_g^Δ Gmin≥ - γ c_g^Gcap z_g^Δ Gmin, ∀ g ∈𝒩_G, μ_g^Δ Gmax≤γ c_g^Gcap z_g^Δ Gmax, ∀ g ∈𝒩_G, P_g,t^G - P_g^Gmax - Δ P_g^Gmax≥ - γ P_g^Gmax( 1-z_g,t^Gmax), ∀ g ∈𝒩_G, ∀ t ∈ℕ, μ_g,t^Gmax≤γ c_g^Gcap z_g,t^Gmax, ∀ g ∈𝒩_G, ∀ t ∈ℕ.
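As a small illustration of the γ-based linearization used above (a sketch under our own simplifications, not the paper's code), the snippet below encodes the complementarity between the investment bound and its multipliers with binary indicators and big-M constants of the form γ times the corresponding bound, with γ = 2 as suggested in the text.

```python
import gurobipy as gp
from gurobipy import GRB

# Illustrative values; names are placeholders.
dPmax, c_cap, gamma = 150.0, 80.0, 2.0

m = gp.Model("big_m_complementarity")
dP = m.addVar(lb=0.0, ub=dPmax, name="dP")                  # capacity investment
mu_min = m.addVar(lb=-GRB.INFINITY, ub=0.0, name="mu_min")  # multiplier of dP >= 0
mu_max = m.addVar(lb=0.0, name="mu_max")                    # multiplier of dP <= dPmax
z_min = m.addVar(vtype=GRB.BINARY, name="z_min")
z_max = m.addVar(vtype=GRB.BINARY, name="z_max")

# z_min = 1 forces dP to 0 and allows mu_min to be negative;
# z_max = 1 forces dP to its upper bound and allows mu_max to be positive.
m.addConstr(dP <= gamma * dPmax * (1 - z_min))
m.addConstr(mu_min >= -gamma * c_cap * z_min)
m.addConstr(dP >= dPmax - gamma * dPmax * (1 - z_max))
m.addConstr(mu_max <= gamma * c_cap * z_max)
# In the full model these constraints are added alongside the primal and dual
# feasibility conditions to obtain the single-level MILP described above.
```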
http://arxiv.org/abs/2407.03112v1
20240703135604
A Data Model and Predicate Logic for Trajectory Data (Extended Version)
[ "Johann Bornholdt", "Theodoros Chondrogiannis", "Michael Grossniklaus" ]
cs.DB
[ "cs.DB" ]
J. Bornholdt et al. University of Konstanz, 78457 Konstanz, Germany {firstname.lastname}@uni-konstanz.de A Data Model and Predicate Logic for Trajectory Data (Extended Version) Johann Bornholdt0000-0001-6183-1500 Theodoros Chondrogiannis0000-0002-9623-9133 Michael Grossniklaus0000-0003-1609-2221 Received XXX; accepted YYY =========================================================================================================================== § ABSTRACT With recent sensor and tracking technology advances, the volume of available trajectory data is steadily increasing. Consequently, managing and analyzing trajectory data has seen significant interest from the research community. The challenges presented by trajectory data arise from their spatio-temporal nature as well as the uncertainty regarding locations between sampled points. In this paper, we present a data model that treats trajectories as first-class citizens, thus fully capturing their spatio-temporal properties. We also introduce a predicate logic that enable query processing under different uncertainty assumptions. Finally, we show that our predicate logic is expressive enough to capture all spatial and temporal relations put forward by previous work. § INTRODUCTION A growing number of applications ranging from rating and publishing personal hiking trips <cit.> to studying the migration of animals require the analysis of trajectory data. Consequently, the efficient processing of trajectory data has attracted significant interest <cit.>. For example, at the Centre for the Advanced Study of Collective Behaviour[<https://www.exc.uni-konstanz.de/collective-behaviour/>] at the University of Konstanz, the excellence cluster in which the presented research is situated, we are building the so-called Imaging Hangar, which enables us to study small animal collectives in a controlled environment using trajectory data obtained from video image analysis <cit.>. Depending on the kind of object being tracked, the data recorded together with a trajectory is highly application-specific. Furthermore, the quality of the trajectory data can vary substantially based on sampling rate and sensor accuracy. Trajectory data are uncertain by nature. Specifically, at least two types of uncertainty can be distinguished. The first type comes from noise in GPS measurements and is inherent to the data source, making it impractical to address it at the system level. The second type concerns the position of an object between two consecutive trajectory points and is the focus of the work presented in this paper. Due to the discrete sampling rate with which locations are obtained, there is uncertainty as to the exact movement of an object at every point in time. For example, Figure <ref> shows the trajectory of a bird T and a query region R. Given a straight line between points p_1 and p_2, T intersects R. However, the bird could have actually moved around the corner of R, shown as a dashed line. Existing systems like SECONDO <cit.> and MobilityDB <cit.> come with two major shortcomings. First, they assume that all necessary trajectories can be collected and stored in a single location. Such a case is not always possible as exchanging trajectory data obtained from different sources has both practical and legal limitations. Second, to deal with uncertainty, existing approaches either model trajectories using cylinders <cit.> and beads <cit.>, or attempt to process queries by inferring the exact location of the moving object between two recorded locations <cit.>. 
However, existing systems do not take the uncertain nature of trajectory data into account. To address these shortcomings, our aim is to develop a query broker that enables users such as biologists and environmental scientists to query trajectory data from multiple sources through a unified interface. As a first step, this paper proposes a spatio-temporal predicate logic, including a data model and operators, to query trajectories from different sources. In particular, our predicate logic that accommodates uncertainty in trajectory data by supporting different levels of strictness. The contributions of this paper are as follows. * We introduce a data model for trajectories based on the NF2 relational data model. Our model gives equal importance to the spatial and temporal attributes of trajectories while also supporting their application-specific attributes (Section <ref>). * We define a unified spatio-temporal predicate logic to express selection operations over trajectory data. As a distinguishing feature, our predicate logic supports different levels of strictness to deal with uncertainty in interpreting trajectory data (Section <ref>). * We demonstrate that our spatio-temporal predicate logic is expressive enough to represent the spatial relations from the DE-9IM standard <cit.> and the temporal relations from Allen's Interval Algebra <cit.>, and we show how our logic handles the uncertainty of trajectory data in a query (Section <ref>). Section <ref> provides an overview of existing works. Concluding remarks and directions for future work are given in Section <ref>. § RELATED WORK In this section, we provide an overview of existing data models, algebras, and systems that have been proposed to store and query trajectory data. *Data Models and Algebras There has been a variety of contributions in the field of data models and algebras for trajectories and moving objects <cit.>. Güting et al. <cit.> provide a foundational framework for representing and querying moving objects, which serves as a cornerstone in the trajectory data management domain. Frihida et al. <cit.> introduce an algebraic spatio-semporal trajectory data type for the representation of trajectory data. Building on the approach of Frihida et al., Zheni et al. <cit.> introduce a semantic-based model and manipulation language for trajectories. A contribution by Ferreira et al. <cit.> presents an algebra for trajectories by incorporating time series and coverage. Bakli et al. <cit.> propose an algebra on operators based on the the Hadoop system. In contrast to our work, these contributions do not deal with the uncertainty between the sampled points of trajectories in the data model or the algebra. *Systems Several works have contributed to the field of moving object data management <cit.>. Notably, DEDALE <cit.> is an early system that laid the groundwork for representing and querying moving objects. DEDALE serves as a spatial extension for SQL, lacking a temporal component. SECONDO <cit.> is a research prototype that implements a subset of the foundational framework proposed by Güting et al. <cit.>. The HERMES trajectory database engine <cit.> extends the object-relational data model by introducing data types and DDL extensions for managing trajectory data. MobilityDB <cit.> is an extension to PostGIS that supports moving object data providing trajectory-specific data types and functions that implement the DE-9IM relations to a certain degree. 
UlTraMan <cit.> extends Apache Spark offering a holistic solution for the entire trajectory pipeline, including range query processing. Moreover, while most of the aforementioned systems focus primarily on the spatial dimension of trajectory data, time-series database systems <cit.> also support storing and querying trajectory data, focusing primarily on the temporal dimension. In contrast to our work, most of these systems do not come with a formally defined data model, but instead focus on the technical challenges of trajectory data management. Furthermore, existing systems do not take into account the uncertain aspects of trajectory data. § A DATA MODEL FOR TRAJECTORIES Trajectories represent the movement of a moving object. Typically, trajectories are given as a sequence of tuples, that consist of geometric or geographical coordinates accompanied by timestamps. The timestamp attributes of the tuples of trajectories are strictly ordered and monotonically increasing. Since most currently available datasets are two-dimensional, we focus on two spatial dimensions. Definition <ref> defines a trajectory in a relational context. Let trajectory relation T be a relation with schema sch(T) = (o, x, y, τ) that satisfies the following: * [t] val(T) = { tp | (tp(o), tp(x), tp(y), tp(τ)) = ⟨ (0, x_1, y_1, τ_1),…, (n-1, x_n, y_n, τ_n) ⟩}, n ∈N * o is the order of each tuple tp ∈ T. * x and y are spatial coordinates (geometrical or geographical). * τ are timestamps. * ∀ tp_i, tp_j ∈ T, i ≠ j it stands that tp_i(o)> tp_j(o) ⇔ tp_i(τ) > tp_j(τ). In the trajectory relation, we include the order column. While the timestamp could also be used to determine the correct sequence of the tuples in the relation, the order facilitates specific operations, e.g., the retrieval of line segments between consecutive points. Figure <ref> shows T in a relational table. It is helpful to store trajectories in relational tables because RDBMS offer a multitude of operators that can be used to run queries on the trajectory data. §.§ Trajectory Representation in NF2 The Non-First Normal Form (NF2) data model is an extension of the relational data model. The corresponding NF2 algebra allows subexpressions as predicates and enables the access of nested relations. Schek and Scholl <cit.> provide a detailed description of the algebra. In the context of our work, the NF2 data model enables the modeling of trajectories as nested relations, thus treating trajectories as first-class citizens. More specifically, using the NF2 data model we store single trajectories T as nested relations of a trajectories relation 𝔗, i.e, 𝔗(tid,T(order,x,y,τ)) Figure <ref> shows an example of a trajectories relation with a single nested relation representing trajectory T_0. §.§ Data Point and Trajectory Properties In order to represent additional properties of a trajectory, we can add a relation with properties referencing the trajectory relation. For properties that apply to entire trajectories, we add a column to the properties relation for each property. Trajectory properties can be added with two different scopes: * trajectory properties, for properties on entire trajectories. * point properties, for properties on trajectory points. Figure <ref> shows a trajectory property relation with one example column for each property type. A trajectory property is shown in the species column. It contains the type of animal, that was tracked for this trajectory. As a trajectory property, its value applies to the entire trajectory. 
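As an informal illustration (not part of the formal model), the nested relation 𝔗(tid, T(order, x, y, τ)) together with a trajectory property such as species can be mimicked with plain Python structures; the helper below also checks the ordering invariant of the trajectory relation, namely that order and timestamp increase together. The coordinate values are made up.

```python
from dataclasses import dataclass

@dataclass
class TrajectoryPoint:
    order: int
    x: float
    y: float
    tau: float          # timestamp

# Nested representation of 𝔗(tid, T(order, x, y, τ)) plus a trajectory property.
trajectories = {
    0: {"species": "bird",   # trajectory property (applies to all points)
        "T": [TrajectoryPoint(0, 1.0, 2.0, 110.0),
              TrajectoryPoint(1, 1.5, 2.5, 120.0),
              TrajectoryPoint(2, 2.0, 3.5, 130.0)]},
}

def is_valid_trajectory(points):
    """Check the invariant tp_i(o) > tp_j(o)  <=>  tp_i(τ) > tp_j(τ)."""
    pts = sorted(points, key=lambda p: p.order)
    return all(a.order < b.order and a.tau < b.tau for a, b in zip(pts, pts[1:]))

assert all(is_valid_trajectory(t["T"]) for t in trajectories.values())
```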
A point property enables the storage of a specific property associated with a single point of the trajectory. The column storing point properties contains a nested relation with the order of the point and the corresponding property. It is important for the consistency of the data model, to relate it to the order instead of the timestamp, to enforce that properties always correspond to specific points of the trajectory. Point properties can be helpful for properties that only apply to a few points. For example, in the properties relation shown in Figure <ref>, the movement type column contains information about how the bird is moving, e.g., flying or walking. §.§ Segment Property Uncertainty An inherent problem when representing the movement of an object using trajectories is that observations can only be captured at distinct timestamps. However, the actual movement of an object is continuous. As such, when processing a query, we cannot assume that any point properties also apply to locations on the segments between two consecutive points. Figure <ref> shows a segment interpretation of the point properties in Figure <ref>. Between points 0 and 1, i.e., between timestamps 110 and 120, the movement type property has the value “walking” and between points 2 and 4, i.e., timestamps 130 and 150, the value “flying”. Hence, whether the value “walking,” applies for timestamp 115 depends on the semantics of the point property. Even so, no assumption can be made about the value of the property, e.g., for the timestamp 125, i.e., it is uncertain at which point in time the value of the property changes. Due to this uncertainty, we do not consider this type of property in the data model. Instead, we consider such properties only in the context of a query and we propose a strictness option in Section <ref> to deal with the uncertainty. § SPATIO-TEMPORAL PREDICATE LOGIC The selection operator is an essential query operator in database systems that filters tuples based on a given predicate. In this section, we introduce the spatio-temporal selection σ^ST operator that performs a range selection over a set of trajectories on the spatial and temporal dimensions. The operator applies a spatio-temporal predicate P on a relation of trajectories 𝔗. The result is a subset of the trajectories in 𝔗 which satisfy P. Given a trajectories relation 𝔗, and a spatio-temporal predicate P, the spatio-temporal trajectory selection σ^ST returns a relation that contains every tuple tp in 𝔗 for which the trajectory relation tp.T satisfies P, i.e., sch(σ^ST_P(𝔗)) = sch(𝔗) val(σ^ST_P(𝔗)) = { tp : tp ∈𝔗∧ P holds for tp.T } To express spatio-temporal predicates, we have designed a predicate notation that works equally for the spatial and temporal dimensions. Our notation can be used to define specific conditions on the points of trajectories. When using a predicate in a selection operator, all of these conditions must be satisfied for a trajectory to be in the result set. Additionally, multiple predicates can be combined in conjunctive normal form. For example, let a query Q be: “find all trajectories in an region R during a time interval I”. Figures <ref> and <ref> show a visualization of the spatial and temporal portion of the query, respectively. In the following subsections, we introduce our predicate logic and show how it can independently solve the spatial and temporal parts of Q. §.§ Spatial Predicates The spatial part of a predicate is used to express a two-dimensional range query. 
The predicates are applied to trajectories on a point level. The following objects can be used in spatial predicates: * T: The trajectory relation that contains all points in the trajectory, * R: A geometric object, * {p_f, p_l}∈ T: The first and last point of the trajectory, * T_fl = T ∖{p_f, p_l}: All points of the trajectory except the first and the last. Without loss of generality, we consider the case of spatial range queries, where R represents a region. While on a logical level, R can be any two-dimensional shape, the nature of the physical implementation of the predicates can affect the supported shapes of R. Furthermore, the calculation of the predicates changes depending on whether R is, for example, a rectangle or a polygon. Since the most common spatial range queries are bounding boxes, we focus on rectangle-shaped regions in the upcoming examples. The relationship between points of a trajectory T and R can be expressed with the following operators: contained (⊑), properly contained (⊏), and not contained (⊏̸). The contained operator asserts that one or several points must be inside R or on the border of R. It can be used to model the spatial portion of query Q shown in Figure <ref> by asserting that at least one point in T must be contained in R with the predicate: ∃ p ∈ T: p ⊑ R In order to ensure that the entire trajectory T is contained in (or on the border of) R, a predicate can be written as: ∀ p ∈ T: p ⊑ R The properly contained operator functions similarly to contained but excludes points that lie on the border of R. In addition to expressing that points are properly contained in R, properly contained can also be used in conjunction with contained to express that points are on the border of R: ∀ p ∈ T: p ⊑ R ∧( p ⊏ R ) Lastly, the not contained operator enables the construction of predicates for points that are disjoint from R: ∀ p ∈ T : p ⊏̸R For some complex spatial relations, it is necessary to address a trajectory's first or last parts specifically. To achieve this, we introduce two unique points, p_f and p_l, denoting the trajectory's first and last points T, respectively. These special points can be used, for example, to express that the beginning of T is contained in R, but the end of T is not: p_f ⊏ R ∧ p_l ⊏̸R To enhance the conciseness of the formulas, we utilize the subset T_fl of T, which includes all points except p_f and p_l. Using T_fl, a predicate for a trajectory that starts and ends inside R, but has points outside of R is expressed as: p_f, p_l ⊏ R ∧∃ p ∈ T_fl: p ⊏̸R §.§ Temporal Predicates The operators introduced in Section <ref> (⊏, ⊑, ⊏̸) for spatial predicates can also be used to express the relationship between trajectories points and a time interval I. In contrast to the two-dimensional region R, I is one-dimensional, which makes the start and endpoints of I its border. Figure <ref> shows the temporal part of query Q, where the border points of I are at timestamps 1 and 4. The temporal part of Q can be expressed with the introduced operators: ∃ p ∈ T: p ⊑ I To effectively express temporal relations, we introduce two additional operators: p is before I (p < I), and p is after I (p > I). The operator p < I expresses that point p is earlier on the time axis than the start of the interval I, for p > I, p is after the end point of I. Therefore, both operators express that p is outside of I. 
For example, a predicate for a trajectory that completely overlaps interval I can be expressed as: p_f < I ∧ p_l > I §.§ Combining Spatial and Temporal Predicates One strength of our predicate logic is its ability to combine spatial and temporal predicates seamlessly. Users can express complex spatio-temporal queries involving the geometric characteristics of trajectories and their temporal evolution. When looking at the example query q, we can now express the spatial and temporal parts of the query. However, suppose we express both independently from each other. In that case, we can see in Figures <ref> and <ref> that T_a intersects both spatial region R, and interval I, but not in the same points. To properly express Q, we need to define a predicate where a single point p of trajectory T is both in R and I: ∃ p ∈ T: p ⊑ R ∧ p ⊑ I In our predicate logic, spatio-temporal predicates can be expressed easily because the same operators can be used on both the spatial and the temporal dimensions. §.§ Selection Uncertainty As discussed in Section <ref>, the segments between individual points are not known in trajectories. For the spatio-temporal selection, these unknown segments pose a problem because the predicate operators introduced in Sections <ref> and <ref> are applied on sets of points. However, when using the concrete points of trajectories, we are not examining the segment between points. Figure <ref> shows a bird's trajectory T and a query region R. To check whether a trajectory T intersects with a region R, assume a predicate Q_1 = ∃ p ∈ T: p ⊏ R, which checks if T has a point which is contained in R. On a point-by-point evaluation, T does not satisfy q_1 because all points of T lie outside of R, even though we can see in Figure <ref> that the interpolation of T intersects R. The same uncertainty also exists in the inverse query, when checking whether T does not intersect R with the predicate Q_2 = ∀ p ∈ T: p ⊏̸R. To tackle this uncertainty in the spatio-temporal predicates, we propose a strictness parameter, added to the predicates, to define how ambiguous trajectories will be treated for each predicate. We identify three degrees of strictness: * The strict evaluation of predicates considers only the points of the trajectory. In the example above, the strict evaluation of Q_1 does not match T, while Q_2 does match T. * The relaxed evaluation of predicates assumes that there are infinitely many intermediate points on the straight line segments between pairs of consecutive points in trajectories, thereby assuring that all intersections between T and R are considered. In the example above, the relaxed evaluation of Q_1 matches T, while Q_2 does not match T. * The approximated evaluation allows users to inject custom behavior into the predicates for cases where the strict and relaxed evaluations are not sufficient, e.g., for restricted trajectories. With the strictness parameter, the user can define in a query whether the trajectory should be evaluated as a set of points (strict), as a continuous movement (relaxed), or by using some user-define assumption about the movement of the object (approximated). As investigating multiple assumptions for the approximated evaluation is out of the scope of our paper, we focus on strict and relaxed evaluation. This strictness parameter applies for spatial predicates as well as for the segment properties described in Section <ref>. 
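To make the strictness levels concrete, the following sketch (ours; it assumes R is an axis-aligned rectangle and glosses over points lying exactly on the border) evaluates the predicate ∃ p ∈ T: p ⊏ R strictly, over the sampled points only, and in the relaxed sense, over the straight-line segments between consecutive points, reproducing the behaviour of Q_1 on the bird trajectory discussed above.

```python
def strict_exists_in(points, rect):
    """Strict evaluation of  ∃ p ∈ T: p ⊏ R  over the sampled points only."""
    xmin, ymin, xmax, ymax = rect
    return any(xmin < x < xmax and ymin < y < ymax for x, y in points)

def _segment_hits_rect(p0, p1, rect):
    # Slab test of the segment p0 -> p1 against the (closed) rectangle.
    xmin, ymin, xmax, ymax = rect
    t0, t1 = 0.0, 1.0
    for a, b, lo, hi in ((p0[0], p1[0], xmin, xmax), (p0[1], p1[1], ymin, ymax)):
        d = b - a
        if d == 0.0:
            if not (lo <= a <= hi):
                return False
        else:
            ta, tb = (lo - a) / d, (hi - a) / d
            t0, t1 = max(t0, min(ta, tb)), min(t1, max(ta, tb))
            if t0 > t1:
                return False
    return True

def relaxed_exists_in(points, rect):
    """Relaxed evaluation: straight segments between consecutive points count."""
    return any(_segment_hits_rect(p0, p1, rect)
               for p0, p1 in zip(points, points[1:]))

# All sampled points lie outside R, but the interpolated segment crosses it.
T = [(0.0, 0.0), (4.0, 4.0)]
R = (1.0, 1.0, 3.0, 3.0)
print(strict_exists_in(T, R), relaxed_exists_in(T, R))   # False True
```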
§ PROOF OF CONCEPT In this section, we demonstrate the completeness of our predicate operators, introduced in Section <ref> for spatio-temporal range queries by establishing their equivalence and compatibility with established relationship models. Regarding spatial relations, we consider the Dimensionally Extended Nine-Intersection Model (DE-9IM) <cit.>, a widely utilized topological model to define and reason about spatial relationships between geometric shapes. The model defines nine intersection patterns regarding the interior, boundary, and exterior between two geometric objects in two dimensions to characterize their spatial relation. Note that in case one of the geometric objects as a linestring, e.g., the spatial component of a trajectory, the interior is the linestring itself, and the boundary is empty. While the DE-9IM consists of 6,561 distinct relations between pairs of shapes, Zlatanova et al. <cit.> demonstrate that only 19 relationship types are necessary to model all possible relationships between polygons and linestrings. Since in our problem setting spatial range queries are relationships between polygons and linestrings, our predicate logic can be considered complete because it can express the 19 relations mentioned above. Figure <ref> demonstrates, with five examples, how DE-9IM relationships can be expressed as predicates. Regarding temporal relations, we focus on Allen's Interval Algebra <cit.>, which provides a formal framework for representing and reasoning about temporal intervals. The algebra defines thirteen possible binary relations between time intervals. With our temporal predicates it is possible to express all interval relations defined in Allen's Interval algebra. Figure <ref> shows three examples of interval relations expressed as predicates. A complete list of Allen's Interval Algebra, and DE-9IM relationships along with the corresponding predicates, can be found in Tables <ref> and <ref>. As a proof of concept, we consider the spatio-temporal range query and we show how we can express the spatial and temporal predicates in the NF2 algebra. Given a trajectories relation 𝔗(tid,T(order,x,y,τ)) a spatial region R, a time interval I, a spatial predicate P_s, and a temporal predicate P_τ, a spatio-temporal range query returns all tuples tp ∈𝔗 such that the sequence of x-y coordinates of tp.T and R satisfy P_s, and the sequence of timestamps τ of tp.T and I satisfy P_τ. For instance, the relation containing the set of x-y coordinates of a trajectory with tid=1 is π[x,y](μ_T(σ[tid=1](π[tid,π[x,y](T)](𝔗)))). §.§.§ Spatial Trajectory Selection in NF2 We begin by showing how NF2 algebra can be used to answer queries involving spatial predicates. Note that relationships which involve points or lines lying on the border of a query region are not very useful in practice. Hence, due to the limited space, we focus on the five relationships that do not involve points or lines on the border of the query region, i.e., R031, R179, R223, R247, and R255[The DE-9IM relation numbers are based on Zlatanova et al. <cit.>]. Without loss of generality, we consider the query region to be a rectangle R = (x_min,y_min, x_max, y_max). We begin with relations R179, R247, and R255. For these relations, the algebraic expressions for both strict and relaxed evaluation are the same. 
Wherever necessary in the following examples we have: P_first = σ[order = 1](T) P_last = σ[order = max(π[order](T'))](T) Given a trajectories relation 𝔗 and a query rectangle R, the relationship R179 returns all trajectories/tuples tp ∈𝔗 such that the trajectory T lies completely inside R. The NF2 algebra expression for this query is σ^ST_R179, R = σ[ x_min<min(π[x](T)) y_min<min(π[y](T)) x_max>max(π[x](T)) y_max>max(π[y](T)) ](𝔗) Given a trajectories relation 𝔗 and a query rectangle R, the relationship R247 returns all trajectories tp ∈𝔗 the starting point p_f and the ending point p_l lie completely inside R and there exists at least one point p of t that lies outside R. The NF2 algebra expression for this query is σ^ST_R247, R = σ[π[x](P_first)>x_minπ[x](P_first)<x_max π[y](P_first)>y_minπ[y](P_first)<y_max π[x](P_last)>x_minπ[x](P_last)<x_max π[y](P_last)>y_minπ[y](P_last)<y_max ( min(π[x](T))<x_minmax(π[x](T))>x_max min(π[y](T))<y_minmax(π[y](T))>y_max) ](𝔗) Given a trajectories relation 𝔗 and a query rectangle R, the relationship R255 returns all trajectories tp ∈𝔗 the starting point p_f of which lies inside R and the ending point p_l of which lies outside R. The NF2 algebra expression for this query is σ^ST_R255, R = σ[π[x](P_first)>x_minπ[x](P_first)<x_max π[y](P_first)>y_minπ[y](P_first)<y_max ( min(π[x](P_last))<x_minmax(π[x](P_last))>x_max min(π[y](P_last))<y_minmax(π[y](P_last))>y_max) ](𝔗) We now focus on relations R031, and R223. As shown in Figure <ref>, in order to apply the relaxed evaluation of these predicates, one must check every line segment form by consecutive trajectory points. In order to enable the examination of trajectory line segments, we define the following relation that we use in the subsequence examples: T_sgmt = π[T.order, T.x, T.y, T'.x, T'.y] (T ⋈_T.order+1 = T'.order T') Given a trajectories relation 𝔗 and a query rectangle R, the relationship R031 returns all trajectories tp ∈𝔗 that lie completely outside R. For the strict evaluation, it is sufficient to check that all points of each T lie outside R. As such, the NF2 algebra expression for this query is σ^ST_R031, R = σ[count(σ[x_min<x y_min<y x_max>x y_max>y](T)) = 0 ](𝔗) For the relaxed evaluation, we also need to check that none of the segments of T in T_sgmt intersect R. Given a trajectories relation 𝔗 and a query rectangle R, the relationship R223 returns all trajectories tp ∈𝔗 the starting point p_f and the ending point p_l of the associated trajectory T that lie outside R and T intersects R. For the strict evaluation, it is sufficient to check that p_f and p_l lie outside R and there is at least one point of T that lies inside R. As such, the NF2 algebra expression for this query is σ^ST_R223, R = σ[( π[x](P_first)<x_minπ[y](P_first)<y_min π[x](P_first)>x_maxπ[y](P_first)>y_max) ( π[x](P_last)<x_minπ[y](P_last)<y_min π[x](P_last)>x_maxπ[y](P_last)>y_max) count(σ[x>x_min y>y_min x<x_max y<y_max](T))>0 ](𝔗) For the relaxed evaluation, in the case where count = 0 we need to check whether at least one of the segments in T_sgmt intersects R. §.§.§ Temporal Trajectory Selection in NF2 We now show how the NF2 algebra can be used to answer queries involving temporal predicates. Due to the limited space, similar to the spatial trajectory selection, we focus on relationships that do not consider points lying at the start or the end of a given interval. Hence, we focus on the relationships precedes, overlaps with, and is during. Without loss of generality, we consider the query interval I = (τ_s,τ_e). 
Given a trajectories relation 𝔗 and a query interval I, the precedes relationship returns all trajectories tp ∈𝔗 whose last point has a timestamp before the starting timestamp τ_s of the interval I. The NF2 algebra expression for this query is σ^ST_precedes, I = σ[ max(π[τ](T))<τ_s](𝔗). Given a trajectories relation 𝔗 and a query interval I, the overlaps with relationship returns all trajectories tp ∈𝔗 for which the timestamp of the first point is before the start τ_s of the interval I, and the timestamp of the last point is after τ_s but before the end τ_e of the interval I. The NF2 algebra expression for this query is σ^ST_overlaps with, I = σ[ min(π[τ](T))<τ_s ∧ max(π[τ](T))>τ_s ∧ max(π[τ](T))<τ_e](𝔗). Given a trajectories relation 𝔗 and a query interval I, the is during relationship returns all trajectories tp ∈𝔗 for which both the timestamp of the first point and the timestamp of the last point are after the start τ_s of the interval I and before the end τ_e of I. The NF2 algebra expression for this query is σ^ST_is during, I = σ[ min(π[τ](T))>τ_s ∧ max(π[τ](T))<τ_e ](𝔗). In a similar fashion, we can define the mirrored relationships is preceded by, is overlapped by, and contains. Note that for the temporal dimension, there is no difference between the strict and the relaxed evaluation, since we only need to consider the start and end timestamps of a trajectory. § CONCLUSION This paper proposes a formal data model and predicate logic for unified trajectory data management. Introducing a novel data model rooted in the NF2 relational data model, we merge the spatial, temporal, and application-specific attributes of trajectories. Our unified spatio-temporal predicate logic handles uncertainty in sampled trajectory data by accommodating varying levels of strictness. Regarding future work, this paper lays the formal foundation for a query broker for trajectory data. Additionally, we aim to design a specialized query optimization framework to improve the broker's efficiency and scalability in handling complex trajectory queries, paving the way for enhanced trajectory data management and analysis capabilities. § ACKNOWLEDGMENT This work is funded by the Deutsche Forschungsgemeinschaft (DFG) under Germany's Excellence Strategy – EXC 2117 – 422037984. § APPENDIX Table <ref> illustrates all relationships defined in Allen's interval algebra, while Table <ref> illustrates the full list of DE-9IM line-to-area relationships.
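A minimal sketch of the three temporal selections above, written against a toy in-memory representation rather than the NF2 algebra; each trajectory is reduced to its list of timestamps and the checks mirror the min/max conditions of the corresponding expressions.

```python
def precedes(taus, interval):
    ts, te = interval
    return max(taus) < ts

def overlaps_with(taus, interval):
    ts, te = interval
    return min(taus) < ts and ts < max(taus) < te

def is_during(taus, interval):
    ts, te = interval
    return ts < min(taus) and max(taus) < te

# Toy data: trajectory id -> sorted timestamps; query interval I = (5, 10).
trajectories = {0: [1.0, 2.0, 3.0], 1: [4.0, 5.5, 6.0], 2: [5.5, 6.5, 9.0]}
I = (5.0, 10.0)
print([tid for tid, taus in trajectories.items() if precedes(taus, I)])       # [0]
print([tid for tid, taus in trajectories.items() if overlaps_with(taus, I)])  # [1]
print([tid for tid, taus in trajectories.items() if is_during(taus, I)])      # [2]
```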
http://arxiv.org/abs/2407.02578v1
20240702180145
A proof of Onsager's Conjecture for the SQG equation
[ "Shi-Zhuo Looi", "Philip Isett" ]
math.AP
[ "math.AP", "math-ph", "math.MP" ]
Properties of core-EP matrices and binary relationships Dedicated to Professor Chi-Kwong Li on His 65th Birthday Received: date / Accepted: date =========================================================================================================================== § ABSTRACT We construct solutions to the SQG equation that fail to conserve the Hamiltonian while having the maximal allowable regularity for this property to hold. This result solves the generalized Onsager conjecture on the threshold regularity for Hamiltonian conservation for SQG. § INTRODUCTION In this paper we are concerned with the surface quasi-geostrophic equation (SQG equation), which arises as an important model equation in geophysical fluid dynamics that has applications to both oceanic and meteorological flows <cit.>. The SQG equation for an unknown scalar field þ on a two-dimensional spatial domain has the form _t þ+ ·( þu ) = 0, u = T[þ] = ||^-1 ^⊥þ, where || = √(-). SQG is an active scalar equation, so called since the velocity field advecting the scalar field depends at every time on the values of the scalar field. The field þ can represent either the temperature or surface buoyancy in a certain regime of stratisfied flow. The equation has been studied extensively in the mathematical literature due to its close analogy with the 3D incompressible Euler equations and the problem of blowup for initially classical solutions, which remains open as it does for the Euler equations. A survey of mathematical developments is given in the introduction to <cit.>. For more recent mathematical works on SQG we refer to <cit.> and the references therein. Fundamental to the study of the SQG equation are the following basic conservation laws: * For all sufficiently smooth solutions, the Hamiltonian 12∫_^2 (||^-1/2þ(t,x))^2 dx remains constant. * For all sufficiently smooth solutions, the L^p norms þ(t) _L^p(^2) remain constant 1 ≤ p ≤∞, as do the integrals ∫ F(þ(t,x)) dx for any smooth function F. * For all weak solutions to SQG, the mean, impulse, and angular momentum defined respectively by M = ∫þ(t,x) dx, I⃗ = ∫_^2 x þ(t,x) dx, A = ∫_^2 |x|^2 þ(t,x) dx are conserved quantities. On the torus ^2, the mean is well-defined and conserved. (To prove (i), multiply the equation by ||^-1þ and integrate by parts. To prove (ii), use · u = 0 to check that F(þ) satisfies _t F(þ) + ·( F(þ) u ) = 0. See <cit.> for a proof of (iii).) Note that in contrast to (iii), the nonlinear laws (i) and (ii) require that the solution is “sufficiently smooth”. If one expects that turbulent SQG solutions have a dual energy cascade as in the Batchelor- Kraichnan predictions of 2D turbulence <cit.>, then one has motivation to consider weak solutions that are not smooth. A basic question for the SQG equation is then: What is the minimal amount of smoothness required for the conservation laws to hold? This question is exactly the concern of the (generalized) Onsager conjectures for the SQG equation. A closely related open problem is to find the minimal regularity required to imply uniqueness of solutions. Using Hölder spaces to measure regularity, the Onsager conjectures can be stated as follows * If þ∈ C^0, then conservation of the Hamiltonian holds. However, for any < 1/2, there exist solutions with ||^-1/2þ∈ L_t^∞ C^ that do not conserve the Hamiltonian. * If > 1/3 then the integral ∫ F(þ(t,x)) dx is conserved for any smooth function F. If < 1/3, there exist solutions in þ∈ L_t^∞ C^ that violate this conservation law. 
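For orientation, the constitutive law u = |∇|^{-1}∇^⊥θ and the Hamiltonian above are straightforward to realize spectrally on the torus; the numpy sketch below uses our own sign and normalization conventions (with ∇^⊥ = (-∂_2, ∂_1)) and a made-up sample scalar, and is only meant to make the operators concrete.

```python
import numpy as np

n = 128
k = np.fft.fftfreq(n, d=1.0 / n)                     # integer wavenumbers on the torus
kx, ky = np.meshgrid(k, k, indexing="ij")
absk = np.hypot(kx, ky)
absk[0, 0] = 1.0                                     # zero mode irrelevant for mean-zero data

x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
theta = np.cos(X + 2.0 * Y) + 0.5 * np.sin(3.0 * X - Y)   # sample mean-zero scalar

th = np.fft.fft2(theta)
u = np.fft.ifft2(-1j * ky / absk * th).real          # u = |∇|^{-1} (-∂_y θ)
v = np.fft.ifft2(+1j * kx / absk * th).real          # v = |∇|^{-1} ( ∂_x θ)

# The velocity is automatically divergence-free.
div = np.fft.ifft2(1j * kx * np.fft.fft2(u) + 1j * ky * np.fft.fft2(v)).real
assert np.max(np.abs(div)) < 1e-10

# Hamiltonian (1/2) ∫ (|∇|^{-1/2} θ)^2 dx, discretized on the grid.
half = np.fft.ifft2(th / np.sqrt(absk)).real
H = 0.5 * np.mean(half**2) * (2.0 * np.pi) ** 2
print("Hamiltonian of the sample field:", H)
```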
The contribution of this paper is to fully answer the first conjecture in the affirmative. Some remarks about these problems are in order: * These problems generalize the original Onsager conjecture <cit.>, which concerned turbulent dissipation in the incompressible Euler equations and stated that the Hölder exponent 1/3 should mark the threshold regularity for conservation of energy for solutions to the incompressible Euler equations. See <cit.> for discussions of the significance of Onsager’s conjecture in turbulence theory. * The threshold exponents are derived from the fact that the conservation law for sufficiently regular solutions has been proven in both cases (i) and (ii). Namely, <cit.> proves conservation of the Hamiltonian for solutions with þ∈ L^3(I ×^2), while <cit.> proves the conservation law (ii) for > 1/3. The proofs are variants of the kinematic argument of <cit.>, which proved energy conservation for the Euler equations above Onsager’s conjectured threshold. For Hamiltonian conservation in the nonperiodic case, see <cit.>. * Following the seminal work <cit.>, advances in the method of convex integration have made possible the pursuit of Onsager's conjecture both for the Euler equations and more general fluid equations. In particular, Onsager's conjecture for the 3D Euler equations has been proven in <cit.> (see also <cit.>), while the first progress towards the Onsager conjecture (i) for SQG was made in <cit.>, with an alternative approach given in <cit.>. See <cit.> for surveys and <cit.> for a discussion of generalized Onsager conjectures. * To make sense of the Onsager problem for the Hamiltonian, it must be noted that the SQG equation is well-defined for þ having negative regularity. Namely, for any smooth vector field ϕ(x) on ^2, the quadratic form Q(ϕ, þ) = ∫ϕT[þ] ·þdx dt , initially defined for Schwartz þ with compact frequency support away from the origin, has a unique bounded extension to þ∈Ḣ^-1/2. This fact, which relies on the anti-symmetry of the operator T, allows the SQG nonlinearity to be well-defined in ' for þ of class þ∈ L_t^2 Ḣ^-1/2. In fact, one has the following estimate, which is optimal: |Q(ϕ, þ)| ≲ ^2_j,ℓ ϕ_L^∞ þ_Ḣ^-1/2^2 where ^2_jℓϕ = _j _ℓϕ - 12_jℓϕ is the trace-free part of the Hessian of ϕ. See <cit.> for a proof of this bound and its optimality, and <cit.> for earlier definitions of the nonlinearity with weaker estimates. The main theorems of our work are the following, which settle the Onsager conjecture on the threshold for Hamiltonian conservation for SQG. For any < 1/2, there exist weak solutions þ to SQG that do not conserve the Hamiltonian such that ||^-1/2þ∈ C_t C^. For any < 1/2 and for any C_c^∞((0,T)×^2) function f that conserves the mean, i.e. ∫_^2 f(t,x) dx = 0 for all t, there exists a sequence of SQG solutions þ_n of class ||^-1/2þ_n ∈ C_t C^ with compact support in time, such that ||^-1/2þ_n ⇀ ||^-1/2 fin L_t,x^∞ weak-*. Our h-principle result, which implies the first theorem, is inspired by the original h-Principle of Nash <cit.> on the C^0 density of C^1 isometric immersions in the space of short maps. The connection between h-principles and conservation laws was originally noted in <cit.>. See also <cit.> for a recent discussion of h-principle results in fluids. An additional reason for the interest in the h-principle theorem is that this theorem shows that the nonlinearity for SQG is not bounded in any space less regular than L_t^2 Ḣ^-1/2, even when restricted to SQG flows. 
Indeed, if the nonlinearity can be bounded in a space X into which the class W^-1/2, ∞≡{ f  :  ||^-1/2 f ∈ L^∞} embeds compactly, it would contradict the h-Principle result since one could show using an Aubin-Lions-Simon compactness argument and X-boundedness that weak-* limits of solutions in L_t^∞ W^-1/2,∞ would also be weak solutions to SQG. The previous best known result on this problem, due to <cit.>, achieved regularity ||^-1/2þ∈ C^3/10-, with an alternative approach given in <cit.>. Nonuniqueness of SQG steady states was proven in <cit.>. We note also the works <cit.>, which prove nonuniqueness for forced SQG up to the Onsager threshold ||^-1/2þ∈ C^1/2-. Our improvement of the exponent relies on the following ideas: * We build on the recent breakthrough solving the 2D Onsager conjecture in <cit.>, which introduces a “Newton iteration,” which takes an arbitrary Euler-Reynolds flow and perturbs the velocity field so that the error is a sum of one-dimensional pieces with disjoint temporal support plus other error terms of acceptable size. This idea builds on work of <cit.>. * The main difficulty in implementing the Newton iteration in the SQG context is to prove good estimates for a trace-free second-order[We require a second order anti-divergence since we base our approach on that of <cit.>.] anti-divergence tensor for the Newton correction. That is, a trace-free solution ρ to ρ = w, where w is the Newton correction. The straightforward estimate for the solution to this equation is ρw, which turns out to be far from adequate. We tackle this difficulty with two main ideas that take advantage of the structure of SQG: * We define a system of “transport elliptic” equations that couples the equation for w with an equation for a first-order anti-divergence z, which is then coupled to the equation for a second-order anti-divergence r (that may not be trace-free). * We use a Littlewood-Paley analysis to prove suitable estimates for r, which then are shown to imply suitable estimates for ρ. * The second main difficulty that separates the SQG scheme from 2D Euler is that certain bilinear or quadratic terms that occur naturally in both the Newton iteration and the convex integration steps need to be written in divergence form with an anti-divergence that satisfies good (dimensionally correct) estimates. Here we build on ideas of <cit.> and provide a more direct approach to achieving such divergence forms. The main idea, which we call the “divergence form principle,” traces back to an important calculation in <cit.> that was generalized and streamlined in <cit.>. See Section <ref>. * Within the Newton iteration, we use analytical ideas that we believe to be of independent interest. For example, our methods can be used to give an alternative approach to some results of <cit.>, and our commutator estimates (e.g. Lemma <ref>) can be used to give an alternative approach to the improved endpoint regularity result discussed in <cit.>. The sharp estimates we prove should be useful for obtaining an endpoint type result for SQG, similar to that of <cit.>, but currently we do not know how to remove the reliance on double-exponential frequency growth in the Newton step. While the above are the main ideas that are new to this paper, we note that they are not the only ones needed to surpass the exponent ||^-1/2þ∈ C^3/10-. 
In particular, we rely on some nonperturbative techniques that were already used in <cit.>, including the use of nonlinear phase functions as in <cit.>, the microlocal Lemma of <cit.> and the bilinear microlocal Lemma of <cit.>. In <cit.> it was shown that certain perturbative techniques could be used in place of the above methods, but to get the sharp exponent we require techniques that remain effective on a nonperturbative timescale. We also take advantage of an observation in <cit.>, which is that estimates on pure time derivatives for SQG can be used in place of advective derivative bounds. While this point is probably not essential to the proof, it allows for a simpler argument where one does not need to commute advective derivatives with nonlocal operators many times. Finally, we comment that during the writing of this paper we learned that <cit.> have independently and concurrently obtained another proof of Theorem <ref>. We now begin the proof with some notation. §.§ Notation In this paper, the dimension d = 2. We use vectors to indicate multi-indices and use || to indicate the order of the multi-index. For instance, if = (a_1, a_2, a_3), 1 ≤ a_i ≤ d, then _ = _a_1_a_2_a_3 is a partial derivative of order || = 3. We will use many times the following elementary counting inequality with parameters (x_1, x_2, y): (x_1 - y)_+ + (x_2 - y)_+ ≤(x_1 + x_2 - y)_+ x_1, x_2, y ≥0. We use the symbol to indicate a sum with combinatorial coefficients that we have omitted to simplify notation. For example, the product rule implies, _(fg) = __1 f __2 g where the sum runs over some but not all multi-indices with |_1| + |_2| = ||. Meanwhile, the chain rule and product rule give _G(F(x)) = ∑_m=0^|| ^m G(F(x)) ∏_j=1^m __j F, where the empty product is 1 and the sum is over certain multi-indices with |_1| + ⋯ + |_m| = ||. (To be more precise the multi-indices should be of the form _m,j, but we omit the m subscript to simplify notation.) We define Littlewood-Paley projections with the following conventions. Suppose η̂_≤ 0(ξ) is 1 on |ξ| ≤ 1/2 and 0 on 1 ≤ |ξ|, η̂_≤ 0∈ C_c^∞ (^d). For q ∈ we define P_≤q f(ξ) = η̂(ξ/2^q) f̂(ξ). Thus in physical space one has P_≤ q f = η_≤ q∗ f for η_≤ q(h) = 2^dqη_≤ 0(2^q h). We define Littlewood-Paley projections P_q f = P_≤ q+1 f - P_≤ q f so that P_q f has frequency support in { 2^q-1≤ |ξ| ≤ 2^q+1}. We use P_≈ q to indicate a Fourier multiplier that is a bump function adapted to frequencies of size ξ∼ 2^q. So for example, P_q = P_q P_≈ q. We will use the summation convention to sum over repeated indices. For example, _i u^i is the divergence of a vector field u. We will make use of two different anti-divergence operators. The first is the order -1 operator _a^jℓ, which solves _j _a^jℓ[f^a] = f^ℓ, _jℓ_a^jℓ = 0, ^jℓ_a = ^ℓj_a whenever f^ℓ is a vector field of mean zero on the torus. The second operator is the order -2 operator ^jℓ, which solves _j_ℓ^jℓ[f] = f, _jℓ^jℓ = 0, ^jℓ = ^ℓj whenever f is a scalar field of mean zero on the torus. Explicit formulas for these operators can be given in terms of the Helmholtz projection to divergence-free vector fields ^ℓ_a ≡_a^ℓ- ^-1 ^ℓ_a ^jℓ_a = ^-1(^j _a^ℓ+ ^ℓ_a^j ) - ^-1 ^jℓ _a + 2 ^-2 ^j ^ℓ_a ^jℓ = - ^-1 ^jℓ + 2 ^-2^j ^ℓ See Section <ref> for a glossary of the various symbols introduced in the proof. § THE MAIN LEMMA A scalar-valued þ : ×^2 → and a symmetric traceless tensor field R^jℓ : ×^2 →^2×2 solve the SQG Reynolds equations if _t þ+ u^ℓ_ℓþ = _j _ℓR^jℓ u^ℓ = T^ℓþ= ^ℓa _a ||^-1 þ where || = √(-). 
The tensor R^jℓ is called the error since one has a solution when R = 0. Let (þ, u, R) be an SQG-Reynolds flow, Ξ≥ 1 and _u ≥_R ≥ 0 be non-negative numbers. Define the advective derivative D_t := _t + T^ℓþ_ℓ. We say that (þ, u, R) has frequency energy levels below (Ξ, _u, _R) to order L in C^0 if (þ, u, R) are of class C_t^0 C_x^L and the following statements hold _ þ_C^0 , _ u _C^0 ≤Ξ^|| e_u^1/2, || = 0, …, L _ R _C^0 ≤Ξ^|| _R, || = 0, …, L _ D_t þ_C^0, _ D_t u _C^0 ≤Ξ^|| (Ξe_u^1/2) e_u^1/2 || = 0, …, L-1 _ D_t R _C^0 ≤Ξ^|| (Ξe_u^1/2) _R, || = 0, …, L - 1 with e_u^1/2 = Ξ^1/2_u^1/2 and e_R^1/2 = Ξ^1/2_R^1/2. We note that, in contrast to other equations such as Euler, e_u and e_R will be large parameters. For L≥ 7, M_0 ≥ 1 η > 0 there is a constant = _L,η, M_0 > 1 such that the following holds: Given an SQG-Reynolds flow (þ, u,R) with frequency energy levels below (Ξ, _u, _R) to order L and a non-empty J_0 ⊆ with R ⊆ J_0 ⊆. Let N ≥N^6L N^4η Ξ^4η (_u/_R). Then there exists an SQG-Reynolds flow (þ, u, R) of the form þ = þ + W, u = u + T[W] with frequency energy levels below (Ξ, _u, _R) = (N Ξ, _R, N^-1/2 (D_R/D_u)^1/2 D_R) to order L in C^0. Furthermore the new stress R and the correction W are supported in the set R ∪W ⊆N(J_0) := { t + h  :  t ∈J_0, |h| ≤5 (Ξe_u^1/2)^-1 } Additionally, ||^-1/2 W satisfies the estimate _ ||^-1/2 W _C^0 ≤(NΞ)^|| _R^1/2 , || = 0, 1. It will be convenient to introduce the notation = N^1/L. We have Ξ=C N Ξ and e_u^1/2= Ξ^1/2 D_R^1/2. §.§ Summary Section The purpose of this section is to record where all the estimates of the Main Lemma are proven. The new frequency-energy levels for þ and u are verified in Proposition <ref>. The stress R on the other hand has many different components, and each one is estimated either by N^-1 D_R or N^-1/2 (D_u/D_R)^-1/2 D_R. The bounds for the mollification and quadratic errors in the Newton Step are obtained in Proposition <ref>. After iterations of the Newton step, the acceptable bound for the error R_()^jℓ follows from Proposition <ref>. The error terms in the convex integration step are defined in line (<ref>). The bounds for the transport error R_T and the high frequency interference terms R_H are obtained Section <ref>. The bounds for the mollification error R_M are obtained in Section <ref>. The bounds for R_S, which contains the stress erorr and flow error, are obtained in Section <ref>. The bound (<ref>) is a consequence of (<ref>) and (<ref>). Meanwhile, the bound (<ref>) is a consequence of (<ref>) and the construction of e_n^1/2(t) in line (<ref>), since the support of the convex integration preturbation and error are bounded by the support of e_n^1/2(t). § OVERALL GAMEPLAN Consider a given SQG-Reynolds flow (þ, u, R) with frequency energy levels below (Ξ, _u, _R) to order L and time support interval J_0 and let η > 0 be given. Our goal is to perturb the scalar field in such a way that the error will become smaller. This goal will be achieved in two steps, the first called the Newton step and the second called the convex integration step. Our new scalar field þ will have the form þ + w +, where w is called the Newton perturbation and is called the oscillatory perturbation, which arises in the convex integration step. The goal of the Newton perturbation is to perturb the scalar field so that the original stress R is replaced by a new R̃ that is supported on disjoint intervals, where in each interval R̃ can be canceled out by a one-dimensional convex integration perturbation. 
Doing so overcomes the difficulty in the convex integration step that waves oscillating in distinct directions are not allowed to interfere with each other. Constructing the Newton perturbation that achieves this localization will be achieved in a number = ⌈η^-1⌉ iterative steps indexed by n ∈{ 0, …, }. After the Newton perturbation we will add a high frequency perturbation that will be the sum of waves of the form = ∑_I _I ≈∑_I þ_I e^i ξ_I that will cancel out the “low frequency part” of what remains of the error, leaving behind an error that is small enough for the whole procedure to be repeated until the error is reduced to zero in the limit. Each wave has a conjugate wave _I̅ = _I, ξ_I̅ = - ξ_I, making real-valued. We define the sets F = {± (1,2), ± (2,1) } and = { (1,2), (2,1) }, which will be the directions in which the oscillatory waves of the convex integration stage oscillate. That is, ξ_I is reasonably (O(1)) close to an element of F. During the convex integration step, each wave _I + _I̅ is individually able to cancel out a “one-dimensional” component of the error that takes on the form - ^2 B^jℓ(ξ_I), where B^jℓ(p) = -i(^j m^ℓ(p) + ^ℓm^j)(p), where m^ℓ(p) = i ^ℓ a p_a |p|^-1 is the multiplier for SQG and where ^2 is a slowly varying smooth function that remains to be chosen. (Here we are implicitly using the Bilinear Microlocal Lemma of <cit.>.) Thus one of the first tasks that must be done is to decompose the (low frequency part of the) error into a linear combination of terms of this form. Before we perform this decomposition, we must define what we mean by the low frequency part of the error, which is the part that will be canceled out by the oscillatory perturbation . §.§ Regularizing the scalar field and error tensor Define the length scale = N^-1/LΞ^-1 = ^-1Ξ^-1, where L ≥ 2 is as given in the main lemma. We define an integer q_ such that q_ is close to log_2(^-1), i.e., we choose an integer q_ such that ^-1∼ 2^q_ and define the coarse scale scalar field þ_ and the coarse scale velocity field u_ to be þ_= P_≤q_ þ, u_^ℓ= T^ℓþ_, where the P_≤ q_ is a Littlewood-Paley projection operator in the spatial variables. In terms of the coarse scale velocity field we define the coarse scale advective derivative according to = _t + u_·. The estimates we obtain from this mollification are _ þ_ + _ u_ ≲_ ^(|| - L)_+ Ξ^|| e_u^1/2 _ þ_ + _ u_ ≲_ ^(|| + 1 - L)_+ Ξ^|| + 1 e_u These estimates follow from Definition <ref> and are proven in <cit.>. The error tensor R must be regularized before we attempt to cancel it out. We define R_ by mollifying η__x∗_x η__x∗_x R(t,x) only in the spatial variables at a length scale _x = N^-1/LΞ^-1, and using a mollifying kernel such that ∫ h^η(h) dh = 0 for all multi-indices 1 ≤ || ≤ L. Using the bounds in Definition <ref>, the estimates that we obtain from this construction are (see <cit.>) R - R_ ≲N^-1 D_R _ R_ ≲_ N^(|| - L)_+ Ξ^|| D_R _ R_ ≲_ (Ξe_u^1/2) N^(|| + 1 - L)_+ Ξ^|| D_R. The implicit constants in these estimates depend on L. §.§ Setting up the Newton iteration Define the cutoff frequency ≡ N^1/LΞ. Define = N^1/L so that = Ξ. The natural timescale is defined to be τ≡b (log)^-1 (Ξe_u^1/2)^-1 = b (log)^-1 (Ξ^3/2 _u^1/2)^-1, with b a small dimensionless constant that will be chosen later in this section. Consider a partition of unity 1 = ∑_k ∈χ_k^2, χ_k = χ(τ^-1(t - k τ)) for an appropriately chosen χ with compact support in [-4/5, 4/5] that is equal to 1 in [-1/3,1/3]. 
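One standard way to manufacture such a χ is to interpolate between 1 and 0 using the cosine of a transition function, which makes the identity χ^2(s) + χ^2(s-1) = 1 automatic on the overlap. The Python sketch below is an illustrative construction of this kind (a C^1 smoothstep transition is used for brevity; a C^∞ transition function works identically) and verifies the required properties for an arbitrary value of τ.

import numpy as np

# A chi with sum_k chi^2(t/tau - k) = 1, chi = 1 on [-1/3, 1/3], supp chi inside [-4/5, 4/5].

def sigma(u):                          # smoothstep: sigma(u) + sigma(1 - u) = 1
    u = np.clip(u, 0.0, 1.0)
    return 3 * u ** 2 - 2 * u ** 3

def chi(s):
    s = np.abs(s)
    transition = np.cos(0.5 * np.pi * sigma(3.0 * (s - 1.0 / 3.0)))
    return np.where(s <= 1.0 / 3.0, 1.0, np.where(s >= 2.0 / 3.0, 0.0, transition))

tau = 0.25                             # arbitrary value of the time scale for the check
t = np.linspace(-2.0, 2.0, 4001)
total = sum(chi(t / tau - k) ** 2 for k in range(-12, 13))
print("sum_k chi_k^2 = 1     :", np.max(np.abs(total - 1.0)) < 1e-12)
print("chi = 1 on [-1/3, 1/3]:", np.all(chi(np.linspace(-1/3, 1/3, 100)) == 1.0))
print("supp chi in [-4/5, 4/5]:", np.all(chi(np.linspace(0.8, 2.0, 100)) == 0.0))

Here the support is in fact contained in [-2/3, 2/3], which is more than enough; on the overlap the two neighboring cutoffs contribute cos^2 + sin^2 = 1 by the antisymmetry of the transition function.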
Consider a function e_0(t) with support in e_0(t) ⊆{ t + h  :  t ∈J_0, |h| ≤2(Ξe_u^1/2)^-1 } We re-write the SQG-Reynolds equation as _t þ+ _ℓ[ þT^ℓ[þ]] = _j _ℓ(R_^jℓ - e_0(t) M^jℓ) + _j_ℓ(R^jℓ - R_^jℓ) where M^jℓ is a constant matrix, which implies _j _ℓ M^jℓ = 0. The function e_0(t) will be just large enough so that e_0(t) M^jℓ dominates the term R_^jℓ. The cancellation we hope to achieve with the convex integration correction on each time interval [kτ - τ, kτ + τ] has roughly the form ∑_f ∈ _(k, f)^2 B^jℓ(ξ̌_(k,f)) = χ_k^2( e_0(t) M^jℓ - R_^jℓ ) M^jℓ ≡B^jℓ((1,2)) + B^jℓ((2,1)). (Note that M^jℓ is a 2-tensor in contrast to the positive number M_e.) We note that the main term in the right hand side of (<ref>) is the term e_0(t) M^jℓ. This fact is true for M_e sufficiently large depending on L because e_0(t) = M _e _R on the support of R_ (in view of the inequality _t < τ/4) whereas R__0 ≤ A _R for a constant A depending on L. The reason we can only solve (<ref>) on a short time interval is that we require ξ̌_(k,f) to be in a small O(1) neighborhood of the finite set F. At the same time, however, the functions ξ̌_(k,f) solve the transport equation: (_t + u_^j _j) ξ̌_(k,f) = 0 ξ̌_(k,f)(kτ, x) = f ·x. (We note that ξ̌ is well-defined on the torus thanks to the condition f ∈^2.) Although the equation (<ref>) will not be solved exactly until the convex integration step, it is necessary to outline how to solve (<ref>) for the purpose of setting up the Newton step. If it were true that R_ = 0 and the phase function gradients were replaced by the initial conditions ξ̌_(k,f) = f, then the solution to (<ref>) would simply be _(k,f)^2 = χ_k^2 e_0(t). We regard the full equation (<ref>) as a perturbation of this case. It is not hard to check that B^jℓ((1,2)) and B^jℓ((2,1)) form a basis for the two-dimensional space of trace-free symmetric tensor fields in which R_^jℓ takes values. The computation is done in <cit.>. Since B^jℓ(p) is a smooth function function of p, since the map taking a matrix to its inverse is smooth on its domain, which is open, and since by definition M^jℓ = B^jℓ((1,2)) + B^jℓ((2,1)), we can solve (<ref>) by factoring out the functions e_0(t) and χ_k^2 from both sides of (<ref>), inverting the linear system and taking square roots of the coefficients. The upshot is that we have _(k,f) = χ_k e_0^1/2(t) _f(M^jℓ - R_^jℓM_e D_R, ξ̌_k ) for a smooth function _f whose arguments are a symmetric trace-free tensor in a small O(1) neighborhood of M^jℓ and an array of vectors in a small O(1) neighborhood of the initial conditions (1,2), (2,1). Specifically ξ̌_k = [ ξ̌_(k, (1,2)), ξ̌_(k,(2,1)) ] is the array of phase gradients that solve (<ref>). By definition the implicitly defined functions _f(X, p) have a natural domain in which they are well-defined and smooth. This domain, being open, compactly contains a neighborhood of (M^jℓ, (1,2), (2,1)) that has the form X^jℓ - M^jℓ + p_1 - (1,2) + p_2 - (2,1) ≤c_1. As long as the constant M_e in the definition of e_0(t) is sufficiently large, the matrix in the argument of (<ref>), namely X^jℓ = M^jℓ - R_^jℓ(t,x) / (M_e _R), satisfies X^jℓ - M^jℓ≤ A M_e^-1≤ c_1/6. At this point we fix once and for all such a constant M_e depending on L to satisfy this constraint, so that e_0(t) is well-defined. 
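The basis claim recalled above is also easy to confirm numerically. The short Python sketch below assumes, consistently with m^ℓ(p) = i ε^{ℓa} p_a |p|^{-1} and the convention ε^{12} = +1, that the factors multiplying m in the definition of B^{jℓ}(p) are the components p^j and p^ℓ, so that B^{jℓ}(p) = -i(p^j m^ℓ(p) + p^ℓ m^j(p)); with that reading the factors of i cancel and B(p) is the real matrix computed below. These conventions are assumptions of the sketch.

import numpy as np

eps = np.array([[0.0, 1.0], [-1.0, 0.0]])        # eps^{la}, with eps^{12} = +1 assumed

def m(p):
    """Direction of the SQG multiplier: eps^{la} p_a / |p| (the factor i is handled below)."""
    return eps @ p / np.linalg.norm(p)

def B(p):
    """B^{jl}(p) = -i (p^j m^l(p) + p^l m^j(p)); since m carries a factor i, B is real."""
    mp = m(p)
    return np.outer(p, mp) + np.outer(mp, p)

B1, B2 = B(np.array([1.0, 2.0])), B(np.array([2.0, 1.0]))
print("trace free :", abs(np.trace(B1)) < 1e-14, abs(np.trace(B2)) < 1e-14)
print("symmetric  :", np.allclose(B1, B1.T), np.allclose(B2, B2.T))

# Coordinates in the basis {diag(1,-1), offdiag(1,1)} of trace-free symmetric matrices:
coords = np.array([[B1[0, 0], B1[0, 1]], [B2[0, 0], B2[0, 1]]])
print("independent:", abs(np.linalg.det(coords)) > 1e-12)
print("M = B((1,2)) + B((2,1)):\n", B1 + B2)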
Next, a by-now standard estimate (see <cit.>) for the difference between the phase gradient and its initial condition shows that when the constant b in the definition of the natural timescale τ is chosen small enough depending on c_1, the inequality ξ_(k,(1,2)) - (1,2) + ξ_(k,(2,1)) - (2,1) ≤ c_1 / 4 is satisfied. (Recall that (1,2) is the initial datum of ξ_(k,(1,2)) and similarly for (2,1).) We now fix b to have such a sufficiently small value. We are now in a position to begin explaining the Newton step. Initially we have an SQG Reynolds flow that solves the equation (<ref>). Our aim is to add a Newton correction w to þ that will replace the term (R_^jℓ - e_0(t) M^jℓ) with a sum of error terms that are “one-dimensional” with disjoint supports that can be canceled out by a convex integration argument, modulo other acceptable errors. Following <cit.>, we will need time cutoffs χ̃_k for the Newton correction that are a bit wider than the cutoffs χ_k defined previously. We require that * χ̃_k ⊆ (kτ - τ, k τ + τ) and χ̃_k = 1 on (kτ - 7τ/8, kτ + 7τ/8) so that χ̃_k χ_k = χ_k k ∈ * The estimates |_t^r χ̃_k| ≲_r τ^-r hold. The Newton correction w will have the form w = ∑_n ∑_kχ̃_k w_(k, n), (k, n) ∈×{0, …, } where the time index k ∈ refers to the correction being active on the interval (τ k - τ, τ k + τ), and n refers to the n'th iteration of the Newton step. Let þ_n and u_n^ℓ = T^ℓþ_n refer to the scalar field and velocity field after n Newton iteration steps. Thus, þ_n+1 = þ+ ∑_0 ≤j ≤n w_j, w_n = ∑_k ∈ χ̃_k w_(k, n). (We have θ_1 = θ + w_0. Note that in the notation þ = þ + w + Θ, we have w=∑_j=0^Γ w_j.) In the course of the iteration, the velocity field is updated as follows: θ_n+1 = θ_n + w_n u_n+1^ℓ = T^ℓθ_n+1 = T^ℓ (θ_n + w_n) = u_n^ℓ + T^ℓ w_n = u_n^ℓ + ∑_k χ_k u_J^ℓ, where J = (k,n) corresponds to the n'th step of the Newton iteration. The cutoffs embedded in w_n give rise to an error term called the gluing error for which we must solve _j _ℓR_(n+1)^jℓ = ∑_k _t χ̃_k(t) w_(k,n) with good estimates. One of the main novelties in our work lies in how this term is controlled. There are of course other error terms, which we now list in analogy with <cit.>. After n Newton steps, we have a system of the form _t þ_n + T^ℓþ_n _ℓþ_n = _j _ℓR_(n)^jℓ + _j _ℓS_(n)^jℓ + _j _ℓP_(n)^jℓ where * R_(n) is the gluing error was obtained by solving (<ref>) in the previous stage if n ≥ 1, while R_(0)^jℓ = R_^jℓ. * S_(n) is the error that will be canceled out by one-dimensional oscillations during the convex integration step. * P_(n) is the error that is small enough to be included in R in the next stage of the iteration, where P_(0) = R^jℓ - R_^jℓ. To be more specific we now explain how the Newton corrections w_n accomplish the goal of replacing R_(n) with “one-dimensional” errors with disjoint supports modulo acceptable terms. Obtaining disjoint supports will be done with the help of a family of periodic cutoff functions. We recall the following Lemma from <cit.>: For any ∈, there exist a family of smooth 1-periodic functions indexed by × (/2) ×{1, …, } with the property that ∫_0^1 g_(f, [k], n)^2 = 1 ∀  (f, [k], n) ∈×(/2) ×{0, …, } and g_(f, [k], n) ∩g_(f', [k'], n') = ∅ whenever (f,[k],n) ≠ (f', [k'], n') ∈× (/2) ×{1, …, }. For each index J ∈×{1,…, }, J = (k,n) we set [f,J] = (f,[J]) = (f, [k], n) with [k] the residue class of [k] ∈/2. 
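For concreteness, one way to realize the conclusion of this Lemma (an illustration in Python, not the construction of the cited reference) is to assign each index its own subinterval of the period, place a smooth bump there, and normalize so that the square of each function has unit integral; the value Γ = 3 and the grid resolution below are arbitrary.

import numpy as np

# Illustrative family of 1-periodic cutoffs g_{(f,[k],n)} with
#   int_0^1 g^2 ds = 1   and pairwise disjoint supports,
# indexed by F-hat x (Z/2Z) x {1,...,Gamma}.

F_hat, residues, Gamma = [(1, 2), (2, 1)], [0, 1], 3
labels = [(f, r, n) for f in F_hat for r in residues for n in range(1, Gamma + 1)]
M = len(labels)                              # number of disjoint slots per period

s = np.linspace(0.0, 1.0, 4000, endpoint=False)
ds = s[1] - s[0]

def bump(t):
    """Smooth bump supported in (0, 1): exp(-1/(t(1-t))), extended by zero."""
    out = np.zeros_like(t)
    inside = (t > 0.0) & (t < 1.0)
    out[inside] = np.exp(-1.0 / (t[inside] * (1.0 - t[inside])))
    return out

g = {}
for i, lab in enumerate(labels):
    raw = bump(s * M - i)                                 # supported strictly inside the i-th slot
    g[lab] = raw / np.sqrt(np.sum(raw ** 2) * ds)         # enforce int_0^1 g^2 = 1

print("normalized:", all(abs(np.sum(v ** 2) * ds - 1.0) < 1e-10 for v in g.values()))
print("disjoint  :", all(float(np.sum(g[a] * g[b])) == 0.0
                          for a in labels for b in labels if a != b))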
The equation we solve at the n'th Newton step has the form _t w_J + T^ℓþ__ℓw_J + T^ℓw_J _ℓþ_ = ∑_f ∈ (1 - g_[f,J]^2(μt) ) _j _ℓA_(f,J)^jℓ =: _j _ℓO_J^jℓ w_(k,n)(kτ, x) = w_0,(k,n) where μ is an inverse time scale to be chosen slightly faster than the natural time scale τ, w_0,(k,n) is a scalar field to be specified shortly in line (<ref>), and where A_(f,J) has the following “one-dimensional” form A_(f,k,n)^jℓ = χ_k^2(t) e_n(t) _f^2( M^jℓ - R_(n)^jℓM_e D_R,n, ξ̌_k ) B^jℓ(ξ̌_k,f) similar to (<ref>). Here D_R,n = (N^-η Ξ^-η)^n D_R,0 is a bound on the size of the nth gluing error. Meanwhile e_n(t), similar to e_0(t), is a function of time equal to the constant M_0 D_R,n on the interval J_n = { t + h  :  t ∈ J, |h| ≤ 3(n+1) τ} that has support in { t + h  :  t ∈ J_n, |h| ≤ 2 τ} while satisfying the estimates d^rdt^r e_n ≲τ^-r D_R,n. At this point we will specify that μ= N^1/2 Ξe_R^1/2 = N^1/2 Ξ^3/2 _R^1/2. Note that μ is an inverse time scale with this choice. We have τ > 1/μ. With such a choice of Newton correction, the errors after the n+1'th step solve the following system of equations þ_n+1 = þ_n + w_n = þ+ ∑_j=0^n w_j _j _ℓR_(n+1)^jℓ = ∑_k _t χ̃_k w_(k,n) S_(n+1)^jℓ = S_(n)^jℓ - ∑_k ∈ ∑_f ∈ g_(f, [k], n)^2(μt) A_(f,k,n)^jℓ _j _ℓP_(n+1) = _j _ℓP_(n) + T^ℓ(þ- þ_) _ℓw_n + T^ℓw_n _ℓ(þ- þ_) + T^ℓw_n _ℓw_n + ∑_j = 0^n-1 (T^ℓw_n _ℓw_j + T^ℓw_j _ℓw_n) Notice that the terms in (<ref>) and (<ref>) are not in the form of a second-order divergence of a trace-free tensor field, in contrast to the analogous terms for Euler, which are readily of the correct form. Handling this new issue and getting good estimates for the solutions to (<ref>)-(<ref>) is another of the main contributions of this paper. §.§ Newton Step It is clear from (<ref>) that in order to bound the gluing error we must find a solution to the second order divergence equation _j _ℓr_J^jℓ = w_J with good estimates. While it is important that we find a solution that is symmetric and trace-free, we have the freedom to first find a solution r_J that lacks these properties and then use the estimates on r_J to bound the potential-theoretic solution to (<ref>). Following <cit.> and <cit.> we derive a transport-elliptic equation to get a solution with good bounds. We start by finding a first-order antidivergence z_J^i, which solves _i z_J^i = w_J. Consider a solution to the equation (_t + T^ℓþ__ℓ) z_J^i = _a T^i þ_z_J^a - T^i w_J þ_- _a O_J^ia z_(k,n)^i(tk, x) = z_0,(k,n)(tk) with smooth initial data to be specified below in line (<ref>) such that _i z_0,(k,n)^i(tk) = w_0,(k,n)(tk). The existence of a solution to (<ref>) follows from standard existence theory for transport equations by the method of characteristics. It is not difficult to check that if z_J^i solves (<ref>), then _i z_J^i, the divergence of z_J, satisfies _i z_J^i = w_J and thus equals w_J as long as it does so initially. Thus z_J^j is an anti-divergence for w_J. We now wish to find an anti-divergence for z_J^j. Using the fact that the divergence of z_J^j is w_J, we can rewrite equation (<ref>) as (_t + T^ℓþ__ℓ) z_J^i = _a T^i þ_z_J^a - _a T^i z_J^a þ__special term - _a O_J^ia The special term has a structure that makes it possible to be put in divergence form. Ultimately the most important point is that the operator _a T^i has a symbol that is even (and degree 1 homogeneous) and the fact that a minus sign appears (which together imply that the term has integral zero). 
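To spell out why these two features give integral zero (a sketch of the standard computation): the symbol of the operator ∂_a T^i is (iξ_a)(i ε^{ib} ξ_b |ξ|^{-1}) = -ξ_a ε^{ib} ξ_b |ξ|^{-1}, which is real-valued, even in ξ, and homogeneous of degree 1. For real scalar fields f, g on the torus, Parseval's identity together with the evenness of the symbol gives
∫ ∂_a T^i[f] g dx = ∫ f ∂_a T^i[g] dx,
and hence, applying this with f = θ_ε and g = z_J^a and summing over a,
∫ ( ∂_a T^i[θ_ε] z_J^a - ∂_a T^i[z_J^a] θ_ε ) dx = 0.
Zero spatial mean is the compatibility condition for solving a divergence equation on the torus; the divergence form principle of Section <ref> upgrades this to the explicit bilinear anti-divergence, with estimates, that is constructed next.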
Thus we claim _a T^i[þ_]z_J^a - _a T^i[z_J^a] þ_ = _j ^ij_a[ z_J^a, þ_] where _a^ij is a bilinear form that we will be able to estimate. In terms of this anti-divergence, define r_J^ij to be the unique solution to (_t + T^ℓþ__ℓ) r_J^ij = ^ij_a _ℓ[ _b T^ℓþ_r_J^ab ] + ^ij_b[z_J^b, þ_] - O_J^ij r_(k,n)^ij(tk, x) = r_0,(k,n)^ij, where the initial data specified in line (<ref>) satisfies _i r_0,(k,n)^ij(tk) = z_0,(k,n)^i(tk). Here ^ij_a is as defined in Section <ref>. The existence and uniqueness of a smooth solution r_J to (<ref>) follow from a contraction mapping argument (see the Appendix to <cit.>). Note that the divergence of r_J^ij solves the PDE _i r_J^ij = z_J^j with the same initial conditions as z_J. Thus r_J^ij is a second order anti-divergence for w_J. In the remainder of this section we show that the structure of the transport equations satisfied by w_J, z_J and r_J imply good estimates on these quantities and all the error terms they generate. Having good estimates for r_J then implies good estimates for a trace-free second order anti-divergence ρ_J. The estimate for w_J will take advantage of the oscillations in time of the forcing term in the equations. These oscillations are ultimately the source of the gain in performing the Newton step. To capture the gain, let h_f,[J](T) = ∫_0^T (1 - g_f,[J]^2(s) ) ds and decompose w_J = w̅_J + w̃_J, where w̃_J = ∑_f ∈ μ^-1 h_f,[J](μt) _j _ℓA_(f,J)^jℓ w_0,(k,n) = w̃_J(tk) _t w̅_J + T^ℓþ__ℓw̅_J = - T^ℓw_J _ℓþ_- Õ_J Õ_J = ∑_f ∈ μ^-1 h_f,[J](μt) _j _ℓA_(f,J)^jℓ We also decompose z_J = z̅_J + z̃_J, where z̃_J^i = ∑_f ∈ μ^-1 h_f,[J](μt) _j A_(f,J)^ij z_0,(k,n)^i = z̃_(k,n)^i(k t) ∂_t z̅_J^i + T^ℓθ_∇_ℓz̅_J^i = ∇_a T^i θ_z_J^a - T^i w_J θ_- Õ_J^i Õ_J^i = ∑_f ∈ μ^-1 h_f,[J](μt) ∇_a A_(f,J)^ia We similarly decompose r_J = r̅_J + r̃_J, where r̃_J^ij = ∑_f ∈ μ^-1 h_f,[J](μt) A_(f,J)^ij r_0,(k,n)^ij(tk,x) = r̃_J^ij(tk, x) ∂_t r̅_J^ij + T^ℓθ_∇_ℓr̅_J^ij = ^ij_a ∇_ℓ[ ∇_b T^ℓθ_r_J^ab ] + ^ij_b[z_J^b, θ_] - Õ_J^ij Õ_J^ij = μ^-1 ∑_f ∈ h_f,[J](μt) A^i j_(f,J). The terms we need to estimate include not only the scalar fields w̅_J, w̃_J and the fields z_J and r_J, but also the fields u_J^ℓ = T^ℓ w_J and a trace-free symmetric tensor field ρ_J^iℓ that is defined by ρ_J^iℓ = ^iℓ w_J = ^iℓ_a _b r_J^ab whose second order divergence is w_J (∇_i_ℓρ^iℓ_J=w_J). The operator is the order -2 operator defined in Section <ref>. We will associate to each of these tensor fields F in our problem a positive number S_F that is the “size” of F. The following table summarizes the sizes of the fields c|ccccc F w̅_J, w_J z̅_J^i, z_J^i r̅_J^ij, r_J^ij u_J^ℓ ρ_J^iℓ S_F Ξ^2 μ^-1 D_R,n Ξμ^-1 D_R,n μ^-1 D_R,n Ξ^2 μ^-1 D_R,n μ^-1 D_R,n , Thus S_w = Ξ^2 μ^-1 D_R,n, S_z = Ξμ^-1 D_R,n, etc. For convenience we remind the reader of the choice of μ = N^1/2Ξ e_R^1/2 = N^1/2Ξ^3/2 D_R^1/2 from (<ref>). We use the notation F_J = { w̅_J, z̅_J, r̅_J } to denote the list of tensor fields involved in the main estimate that solve transport type equations for which we require a sharp bound. We are now ready to estimate the terms in the Newton step. Define L:= L-3. The following is the main result of this section. 
For all F ∈ F _J ∪{ w_J, z_J, r_J }, we have the estimates _ F ≲_ N^(|| -)_+ Ξ^|| S_F, Moreover, we have the following bounds for w_n, u_n^ℓ = T^ℓ w_n and ρ_n^jℓ = ^jℓ w_n _ ||^-1/2 w_n Ξ^|| D_R,n^1/2, 0 ≤|| ≤1 _ u_n ≲_ N^(|| -)_+ Ξ^|| S_u ∇_a⃗ D_t^r u_n _C^0 ≲_ N^(r+|a⃗| -L)_+ Ξ^|a⃗|(Ξe_u^1/2)^r e_u^1/2, 0 ≤r ≤1 _ ^r ρ_J ≲_ N^(|| +r -)_+ Ξ^|| τ^-r S_ρ χ̃_k'(t) Furthermore there exists a symmetric, trace-free tensor field R_(n+1) with support in { t + h  :  t ∈ J_n, |h| ≤ 2τ} that solves (<ref>) and satisfies the bounds for R_(n+1) ≤D_R,n+1 _ R_(n+1) ≲_ N^(|| - L)_+ Ξ^|| D_R,n+1 _ R_(n+1) ≲_ N^(|| + 1 - L)_+ Ξ^|| τ^-1 D_R,n+1 (w_n,R_(n+1)) ⊆{ t + t'  :  t ∈R_(n), |t'| ≤3 τ} Note that (<ref>) has implicit constant 1. We will need the following bounds on the phase functions. The phase function gradients satisfy _ ξ̌_J ≲_ ^(|| + 1 - L)_+ Ξ^|| _ ξ̌_J ≲_ ^(|| + 1 - L)_+ Ξ^||+1 e_u^1/2 These bounds can be found in <cit.>. They require only (<ref>) and (<ref>). The following weighted norm will be handy The start-weighted norm of a function F is H_ζ,M^(R)[F] = max_0 ≤r ≤R max_0 ≤|| + r ≤M _ ^r F _C^0^(|| + 1 - L)_+Ξ^|| ζ^r Note that R ∈{ 0, 1} is a number, not to be confused with the stress tensor. When we run into terms that involve a mix of spatial and advective derivatives, the following Lemma is useful. This lemma will be applied to Õ. For any multi-indices , such that || + || ≤ M and for ζ≥Ξ e_u^1/2 we have _ _ F ^(|| + || + 1 - L)_+ Ξ^|| + || ζH_ζ,M^(1)[F] Let M be given. We proceed by induction on || ≤ M. The case || = 0 follows directly from the definition of H_ζ,M^0[F]. Now assume the bound holds for || - 1, and write _ = _b_1_b̌ where |b̌| = || - 1. We have _ _F = _ _b_1 _b̌F - _[_b_1 u_^i _i _b̌ F ] _ _F ≤^(|| + || + 1 - L)_+ Ξ^|| + || ζH_ζ,M^(1)[F] + ∼∑ __1 _b_1 u_^i __2 _i _b̌ F ^(|| + || + 1 - L)_+ Ξ^|| + || ζH_ζ,M^(1)[F] + ^(|_1| + 1 - L)_+ ^( |_2| + 1 + |b̌| - L)_+ Ξ^|| + || Ξe_u^1/2 H_ζ,M^(1)[F] ^(|| + || + 1 - L)_+ Ξ^|| + || ζH_ζ,M^(1)[F] We also have a chain rule for the weighted norm. K be a compact neighborhood of the image of (Ř = R_(n) /D_R, ξ_k) and let G be C^∞ on a neighborhood of K. Then H_Ξe_u^1/2,M^(R)[G(Ř, ξ_k)] _M,K, G 1 By the chain and product rules we have _ ^r G(Ř, ξ_k) = ∑_m=0^|| + R ∼∑ ^m G ∏_i=1^m_1 __i ^r_i Ř ·∏_j=1^m_2 __j ^r_j ξ_k where the sum ranges over indices such that ∑ |_i| + ∑ |_j| = || and ∑_i r_i + ∑_j r_j = r and the empty product is 1. Hence _ ^r G(Ř, ξ_k) ∑_m=0^|| + R∼∑ ∏_i=1^m_1 ^(|_i| + r_i- L)_+ Ξ^|_i| (Ξe_u^1/2)^r_i · ·∏_j=1^m_2 ^(|_j| +1 - L)_+ Ξ^|_j| (Ξe_u^1/2)^r_j ∑_m=0^|| + R Ξ^|| (Ξe_u^1/2)^r ∼∑ ^(∑_i(|_i| + r_i)- L)_+ ^(∑_j |_j| + 1 - L)_+ where the last line we used the counting inequality with z = L and z = L -1. The proof now follows from (∑_i (|_i| + r_i)- L)_+ + (∑_j |_j| + 1 - L)_+ ≤ ≤(∑_i|_i| + 1 - L)_+ + (∑_j |_j| + 1 - L)_+ ≤(|| + 1 - L)_+ The following proposition summarizes the bounds on terms that do not solve a transport equation For all , we have the bounds _ w̃_J + ^- _ w̃_J _ ^(|| - )_+ Ξ^|| S_w _ z̃_J + ^- _ z̃_J _ ^(|| - )_+ Ξ^|| S_z _ r̃_J + ^- _ r̃_J _ ^(|| - )_+ Ξ^|| S_r and _ Õ_J + ^- _ Õ_J ^(|| - )_+ Ξ^|| (Ξe_u^1/2) S_w _ Õ_J^i + + ^- _ Õ_J^i ^(|| - )_+ Ξ^|| (Ξe_u^1/2) S_z _ Õ_J^ij + ^- _ Õ_J^ij ^(|| - )_+ Ξ^|| (Ξe_u^1/2) S_r It suffices to prove the bounds for the C^0 norms since they imply the bounds on the Ċ^ norms by interpolation. We only prove the bounds for w_J and O_J since the other bounds will then be similar. (These bounds are not sharp for the other quantities, but this is not important.) 
The bound for _ w_J_0 follows from (<ref>). The bound for _ O_J_0 follows by taking _b⃗ = divdiv (thus |b⃗|=2) and F=A_J in Lemma <ref>. This choice yields _ O_J_0 μ^-1^δΞ^||+2 (Ξ e_u^1/2) H_Ξ e_u^1/2,M^(1)[A_J] , δ := (N - )_+ μ^-1^δΞ^||+2 (Ξ e_u^1/2) D_R,n = S_w ^δΞ^|| (Ξ e_u^1/2) which is the desired bound. In addition to the proof, we provide a heuristic argument. Recall that Õ_J = ∑_f ∈μ^-1 h_f,[J](μ t) _j _ℓ A_(f,J)^jℓ, where A_(f,J)^jℓ has size D_R and frequency τ^-1. Thus acting on A_(f,J)^jℓ costs a factor of τ^-1. Also, h_f,[J](μ t) ≲ 1. Therefore, _Õ_J ≲μ^-1τ^-1^(|| -)_+Ξ^||+2 D_R = ^(|| -)_+Ξ^||+1 e_u^1/2 S_w. The proof of Proposition <ref> relies on the following weighted norm. (t) = ∑_F ∈F_J ∑_|| ≤L' (S_F ^(|| -)_+ Ξ^||)^-1 ( _ F + ^- _ F) Here and in what follows we suppress the dependence of (t) on the index J and on L'. We write _L' to emphasize dependence on L'. Notice that (t) vanishes at the initial time t_J = t_(k,n) = kτ. In the following analysis, we simplify notation by assuming the initial time is t_J = 0. We have the estimate (t) ≤C Ξe_u^1/2 ∫_[0,t] (1 + (s) ) ds. for some C>0 independent of the frequency energy levels (Ξ, D_u, D_R) and independent of N, but C is allowed to depend on the step n of the Newton iteration and on L', the order of differentiation that controls. In particular, by Gronwall, (t) ≲ 1 for |t| ≤τ. Recall the notation for the integer satisfying ∼ 2^. The following criterion will be useful for bounding (t) For any function f ∈ L^∞(^d) and any multi-index we have that _ f + ^- _ f ≲_ f + ^-sup_q > 2^q P_q _ f (In fact, the two sides are equivalent up to constants.) Indeed, this lemma follows quickly from the following standard Littlewood-Paley characterization of Ċ^ seminorm, f∼sup_q 2^ q P_q f _C^0, which is valid for f ∈ L^∞. (A proof can be found in the appendix to <cit.>, for example.) We will also use the following Lemma about commuting spatial derivatives and Littlewood-Paley projections with the advective derivative. We define f(t) := sup_x |f(t,x)|. For any F ∈ F_J and any multi-index of order || ≤ L' we have _ F(t) ≤∫_0^t _ F(s) ds _ F(s) = _ F(s) + O(^(|| -)_+ Ξ^||+1 e_u^1/2 S_F (s)), _ P_q F(s) = P_q _ F(s) + min{ 1,2^-q ^ } O(^(|| -)_+ Ξ^||+1 e_u^1/2 S_F(s)) Inequality (<ref>) is a consequence of the method of characteristics. The other bounds in this lemma are special cases of Lemma <ref> below where we take Q to be the identity map. Based on Lemmas <ref> and <ref>, the proof of the Main Proposition (Proposition <ref>) reduces to the following For any F ∈ F_J we have for all || ≤ L' and all q > the bounds _ F(s) ≲ ^(|| -)_+ Ξ^|| τ^-1 S_F (1 + (t)) P_q _ F(s) ≲2^-q ^ ^(|| -)_+ Ξ^|| τ^-1 S_F (1 + (t)) The following standard spatial derivative bounds will be used. If is a convolution operator whose symbol is degree 0 homogeneous and smooth away from 0 then _ P_q F ≲2^-q _ F _ F _ F _ F ≲ _ F + ^- _ F Let η_q(h) = 2^d qη_0(2^q h) be the convolution kernel representing P_q. Then η_q has integral 0 and (<ref>) follows from _ P_q F = ∫(_F(x + h) - _F(x)) η_q(h) dh |_P_q F| ≤_F ∫|h|^|η_q(h)| dh. The second bound follows from the first one and the Littlewood Paley characterization of Hölder spaces. The third estimate is obtained by summing _ F ∑_q=0^ P_q _ F + ∑_q = ^∞2^-q_ F For || ≤ L' we have _ u_J _C^0 + ^-_ u_J _Ċ^̇α̇^(||-L)_+Ξ^||S_u (1 + ). We use Proposition <ref> with = T and F = w_J. This result gives for || ≤ L': _ u_J = _ T w_J≲_ w_J + ^-α_ w_J. 
This is in turn bounded by ≲^(|| -)_+Ξ^|| S_w (1+) + ^(|| -)_+Ξ^|| S_w (1+) ≲^(|| -)_+Ξ^|| S_u (1+) where we have used the definition of =_L'. For the Ċ^ bound, we have _u_J _w_J ^S_w ^(||-)_+Ξ^||(+1) where the first inequality follows by Proposition <ref> and the second inequality follows from the definition of . For the C^0 bound on _w̅_J, we use the equation (<ref>): _w̅_J ≲_(T^ℓ w_J _ℓþ_) + _Õ_J ≲__1 T^ℓ w_J__2_ℓþ_ + _Õ_J. For the first term, we use the bounds of Lemma <ref> for u_J = T^ℓ w_J and the bounds in the Main Lemma for þ_: __1 T^ℓ w_J__2_ℓþ_ ≲^(|_1| -)_+Ξ^|_1| S_u(1 + (t)) ·^(|_2| -)_+Ξ^|_2|+1 e_u^1/2 ≲^(|| -)_+Ξ^||+1 e_u^1/2 S_u (1 + (t)). For the _ O_J_0 term, a sufficient bound was already obtained in Proposition <ref>. For the high frequency bound, we recall the equation for w̅_J (<ref>): _t w̅_J + T^ℓþ__ℓw̅_J = - T^ℓ w_J _ℓþ_ - Õ_J We apply P_q _ to both sides of the equation: P_q _w̅_J = -P_q _(T^ℓ w_J _ℓþ_) - P_q _Õ_J By the product rule, P_q _w̅_J = - P_q (__1 T^ℓ w_J __2_ℓþ_) - P_q _Õ_J Then by the triangle inequality and the Holder inequality for the C^0 norm, P_q _w̅_J ≤P_q (__1 T^ℓ w_J __2_ℓþ_) + P_q _Õ_J By the C^0 bounds of Lemma <ref> for u_J = T^ℓ w_J, the bounds in (<ref>) for þ_, and the C^0 bound for _ O_J_0 from Proposition <ref>: P_q _w̅_J ≲ 2^-α q( __1 u^ℓ_J__2_ℓþ_ + __1 u^ℓ_J__2_ℓþ_) + P_q_Õ_J ≲ 2^-α q^(|_1| -)_+Ξ^|_1| S_u(1+(t)) ·^α^(|_2|-(L-1))_+Ξ^|_2|+1e_u^1/2 + 2^-α q^^(|_1| -)_+Ξ^|_1| S_w(1+(t)) ·^(|_2|-(L-1))_+Ξ^|_2|+1e_u^1/2 + 2^- q_Õ_J (2^- q^(||-)_+ Ξ^||+1e_u^1/2S_u(1+)^) + 2^- q^(||-)_+ Ξ^||+1e_u^1/2S_w ^ 2^- q^(||-)_+ Ξ^||+1e_u^1/2S_u(1+)^ where the second line follows from Lemma <ref>. We have bounded the Õ_J term using Proposition <ref>. The bound we used on __2_ℓþ_ above follows by the interpolation F_Ċ^αF^1-∇ F^ with F = __2_ℓþ_, which yields __2_ℓþ_^(|_2|+1-L)_+^αΞ^|_2|+1+e_u^1/2 Thus P_q _w̅_J ≲ 2^-α q^α^(|| -)_+Ξ^||+1 e_u^1/2 S_u (1+(t)) ≤ 2^-α q^α^(|| -)_+τ^-1 S_w(1 + (t) ). For the purpose of estimating the velocity increment u_J^ℓ = T^ℓ w_J we will use the following estimates, with being either T or divdiv: Suppose is a Fourier-multiplier with a degree zero homogeneous symbol that is smooth away from the origin. Then for any F ∈{ w_J }∪{ r_J } and || ≤ L' D_t _ F(t) = _ D_t F(t) + O(^(|| -)_+ Ξ^||+1 e_u^1/2 S_F (1 + (t))) D_t _ P_q F(t) = P_q _ D_t F(t) + min{ 1, 2^-q ^ } O(^(|| -)_+ Ξ^||+1 e_u^1/2 S_F (1 + (t))) (Note that P_q commutes with _ and with but not with .) Start with _ F = _ F + [_, ] F [_, ] F = ∼∑__1 u_^i __2 _i F 1_|_1| ≥1 where the sum is over certain a⃗_1, a⃗_2 with |_1| + |_2| = ||. Now apply the operator _q = P_q to obtain _q _ F = _q _ F + [_q, ] _ F + _q [_, ] F We start with the third term. Since _q localizes an order 0 operator to frequency 2^q, we have Q_q[f]≲min{f, 2^- qf}, hence _q [_, ] F ≲∼∑ __1 u_^i __2 _i F 1_|_2| ≤L' - 1 ≲∼∑ [^((|_1| - 1) + 1 - L')_+ Ξ^|_1| e_u^1/2] 1_|_1| - 1 ≥0 [^(|_2| + 1 -)_+ Ξ^|_2| S_F (1 + (t))] ≲^(|| -)_+ Ξ^|| S_F (1 + (t)) where in the last line we applied the counting inequality with (|_1| - 1, |_2| + 3, L'-1) all ≥ 0. 
We also have _q [_, ] F ≲ 2^-q ∼∑ ( __1 u_^i __2 _i F + __1 u_^i __2 _i F )1_|_2| ≤L' - 1 ≲∼∑ 2^-q ^ [^((|_1| - 1) + 1 - L)_+ Ξ^|_1| e_u^1/2] 1_|_1| - 1 ≥0 [^(|_2| + 1 -')_+ Ξ^|_2| S_F (1 + (t))] ≲2^-q ^ ^(|| -')_+ Ξ^ S_F (1 + (t)) We conclude by estimating [, _q] _ F = ∫(u_ϵ^j(x-h) - u^j_ϵ(x)) _j _ F(x-h) Q_q(h) dh = - ∫(u_ϵ^j(x-h) - u^j_ϵ(x)) _j^(h) _F(x-h) Q_q(h) dh = - ∫(u_ϵ^j(x-h) - u^j_ϵ(x)) _j^(h) (_F(x-h)-_F(x)) Q_q(h) dh = ∫(u_ϵ^j(x-h) - u^j_ϵ(x)) (_F(x-h)-_F(x)) _j Q_q(h) dh [, _q] _ F ∇u_ϵ_C^0 _F_Ċ^α ∫|h|^1+α |∇Q_q(h)| dh [Ξe_u^1/2] [^(|| -')_+ Ξ^|| S_F ((t)+1) ^] [2^-αq] and a similar integration by parts yields [, _q] _ F = ∫(u_ϵ^j(x-h) - u^j_ϵ(x)) _j _ F(x-h) Q_q(h) dh = - ∫(u_ϵ^j(x-h) - u^j_ϵ(x)) _j^(h) _F(x-h) Q_q(h) dh [, _q] _ F = ∇u_ϵ_C^0 F_C^0 ∫|h|^1 |∇Q_q(h)| dh [Ξe_u^1/2] [^(|| -')_+ Ξ^|| S_F ((t)+1) ] [1]. Combining these bounds concludes the proof. For || ≤ L' write [_, ] F = ∑_q = 0^∞ [_ _q, ] F [_, ] F ≤∑_q = 0^- 1 Ξe_u^1/2 [^(|| - )_+ Ξ^|| S_F ( 1 + (t) ) ] + ∑_q = ^∞2^-q ^ Ξe_u^1/2 [^(|| - )_+ Ξ^|| S_F ( 1 + (t)) ] Since log and 2^-^ 1, we obtain the desired bound. We need to show that for all || ≤ L' and all q >, we have the bounds _ z̅_J(s) + 2^q ^- P_q _ z̅_J(s) ≲^(|| -)_+ Ξ^||+1 e_u^1/2 S_z (1 + (t)) For the C^0 bound on _z̅_J, we use the equation (<ref>): _z̅_J^i ≲_(∇_a T^i θ_ z_J^a - T^i w_J θ_ - Õ_J^i) ≲__1∇_a T^i θ___2 z_J^a + __1 T^i w_J__2θ_ + _Õ_J^i. For the first term, we use the bounds for θ_ from Lemma <ref> and the inductive hypothesis for z_J: __1∇_a T^i θ___2 z_J^a ^(|_1|+1-L)_+Ξ^|_1|+1e_u^1/2·^(|_2| -)_+Ξ^|_2| S_z (1 + (t)) ^(|| -)_+Ξ^||+1 e_u^1/2 S_z (1 + (t)). For the second term, we use the bounds (<ref>) for u_J^i = T^i w_J: __1 T^i w_J__2θ_ ≲^(|_1| -)_+Ξ^|_1| S_u (1 + (t)) ·^(|_2|-L)_+Ξ^|_2|e_u^1/2 ^(|| -)_+Ξ^||+1 e_u^1/2 S_z (1 + (t)) logΞ, The _ O_J^i_0 term was already bounded in Proposition <ref>. We have _ O^i_J_0^(|| -)_+Ξ^||+1 e_u^1/2 S_z (1 + (t)) The bound on the high frequency projection P_q is proved very similarly to the high frequency bound for w̅_J. Recall that r̅_J satisfies the equation (<ref>): r̅_J^ij = ∂_t r̅_J^ij + T^ℓθ_ϵ∇_ℓr̅_J^ij = ^ij_a ∇_ℓ [ ∇_b T^ℓθ_ϵ r_J^ab ] + ^ij_b[z_J^b, θ_ϵ] - Õ_J^ij We need to show that for all || ≤ L', we have the bounds _D_t r̅_J(s) ≲^(|| -L)_+Ξ^||τ^-1 S_r (1 + (t)) For the C^0 bound (<ref>), we need to bound _ (^ij_a ∇_ℓ [ ∇_b T^ℓθ_ϵ r_J^ab ]) + _ (^ij_b[z_J^b, θ_ϵ]) - _Õ_J^ij. For the first term, we use Proposition <ref> to obtain the bound _ (^ij_a ∇_ℓ [ ∇_b T^ℓθ_ϵ r_J^ab ])(log)_( u_ r_J) + ^-_( u_ r_J). Then _( u_ r_J) ^(|_1|+1-L)_+Ξ^|_1|+1e_u^1/2· S_r(1+)^(|_2|-)_+Ξ^|_2| ^(||-)_+Ξ^||(Ξ e_u^1/2) S_r(1+) and thus (log)_( u_ r_J)^(||-)_+Ξ^||τ^-1 S_r(1+). Using the product rule for Ċ^α norms, we have _( u_ r_J) ≲__1 u___2 r_J ≲ (__1 u___2 r_J + __1 u___2 r_J). For the first term in the sum, we use the interpolation inequality for Hölder norms to get __1 u_ ≲__1 u_^1-α__1 u_^α≲^(|_1|+2-L)_+Ξ^|_1|+1 e_u^1/2^α __2 r_J ≲^(|_2|-)_+Ξ^|_2| S_r (1+). Thus, the first term is bounded by __1 u___2 r_J^(||-)_+Ξ^||(Ξ e_u^1/2) ^α S_r(1+) For the second term, we have __1 u_≲^(|_1|+1-L)_+Ξ^|_1|+1 e_u^1/2, and by the definition of , __2 r_J≲^(|_2|-)_+Ξ^|_2| S_r (1+) ^α. Thus, the second term is bounded by __1 u___2 r_J ≲^(|_1|+1-L)_+ + (|_2|-)_+Ξ^||+1 e_u^1/2 S_r (1+) ^α ≲^(||-)_+Ξ^||+1 e_u^1/2 S_r (1+) ^α. Combining these estimates, we get _( u_ r_J) ≲^(||-)_+Ξ^||+1 e_u^1/2 S_r (1+) ^α. 
Therefore, using Proposition <ref>, we have ^-α_( u_ r_J) ≲^(||-)_+Ξ^||+1 e_u^1/2 S_r (1+) ≲^(||-)_+Ξ^||τ^-1 S_r (1+), where the last inequality uses Ξ e_u^1/2≲τ^-1. We conclude _ (^ij_a ∇_ℓ [ ∇_b T^ℓθ_ϵ r_J^ab ])^(||-)_+Ξ^||τ^-1 S_r (1+). For q ≥ we must also bound P_q of this term. To do so, we recall the estimate on the Ċ^ norm that we just proved to obtain P_q _ (^ij_a ∇_ℓ[ ∇_b T^ℓθ_ϵr_J^ab ]) 2^-q [ ∇_b T^ℓθ_ϵr_J^ab ] 2^- q ^ ^(||-)_+ Ξ^|| τ^-1 S_r (1+), which is our desired bound. It now remains to estimate the other two terms. For the forcing term Õ^ij the desired estimates follow directly from Proposition (<ref>) and the Littlewood Paley characterization of the Ċ^ norm. A more involved analysis is necessary for the term. The term This section is one of the main novelties of our analysis. We now define and estimate the term, which is required to satistfy _j ^jℓ_a[z_J^a, þ] = _a T^ℓ[z_J^a] þ- z_J^a _a T^ℓ[þ] We first decompose the right hand side as a paraproduct _a T^ℓ[z_J^a] þ- z_J^a _a T^ℓ[þ] = LH + HL + HH LH = ∑_q P_≤q-1 _a T^ℓ[z_J^a] P_q+1 þ- P_≤q-1 _a T^ℓ[þ] P_q+1 z_J^a HL = ∑_q P_q+1 _a T^ℓ[z_J^a] P_≤q-1 þ- P_q+1 _a T^ℓ[þ] P_≤q-1 z_J^a HH = ∑_q _a T^ℓ[P_q+1 z_J^a] P_q+1 þ- _a T^ℓ[P_q+1 þ] P_q+1 z_J^a + ∑_q _a T^ℓ[P_q+1 z_J^a] P_q þ- _a T^ℓ[P_q þ] P_q+1 z_J^a + ∑_q _a T^ℓ[P_q z_J^a] P_q+1 þ- _a T^ℓ[P_q+1 þ] P_q z_J^a Note that the HL and LH terms both live at frequency 2^q. For these we apply an order -1 operator _a^jℓ that solves the divergence equation. For the high-high terms, we invoke the divergence form principle of Section <ref> (in particular the fact that the multiplier for T is even and the fact that a minus sign appears) to write them as the divergence of a bilinear convolution. Hence, ^jℓ_a[z_J^a, þ] = ^jℓ_H + ^jℓ_LH + ^jℓ_HL ^jℓ_LH = ∑_q P_≈q ^jℓ_b[ P_≤q-1 _a T^b [z_J^a] P_q+1 þ_- P_≤q-1 _a T^b[þ_] P_q+1 z_J^a ] ^jℓ_HL = ∑_q P_≈q _b^jℓ [ P_q+1 _a T^b[z_J^a] P_≤q-1 þ_- P_q+1 _a T^b[þ_] P_≤q-1 z_J^a] _j ^jℓ_H = HH ^jℓ_H = ∑_q K_qa^jℓ ∗[ z_J^a, þ_] = ∑_q K_qa^jℓ ∗[ P_≈q z_J^a, P_≈q þ_] = ∑_q ∫_^2 ×^2 z_J^a(x - h_1) þ_(x - h_2) K_qa^jℓ(h_1,h_2) dh_1dh_2 where K_qa^jℓ(h_1,h_2) is a Schwartz function on ^2 ×^2 and K_qa^jℓ(h_1, h_2) = 2^4q K_0a^jℓ(2^q h_1, 2^q h_2). We begin by estimating the high-high term. We decompose into high and low frequencies, observing that the spatial derivatives commute with the bilinear convolution kernel _H^jℓ = ∑_q = 0^-1 K_qa^jℓ∗[z_J^a, þ_] _ _H^jℓ = ∑_q=0^-1 K_qa^jℓ∗[__1z_J^a, __2 þ_] _ _H^jℓ ∑_q=0^-1 K_qa^jℓ _L^1 __1z_J^a __2 þ_ ∑_q=0^ ^(|_1| - )_+ Ξ^|_1| S_z ( 1 + ) [^(|_2| - )_+ Ξ^|_2| e_u^1/2 ] (log) ^(|| - )_+ Ξ^|| (Ξe_u^1/2) S_r(1+ ) For the high frequencies we bound _H^jℓ = ∑_q = ^∞K_qa^jℓ∗[P_≈q z_J^a, P_≈q þ] _ _H^jℓ ∑_q = ^∞ K_qa^jℓ_L^1 P_≈q __1 z_J __2þ_ ∑_q=^∞2^-q __1 z_J __2þ_ ^- ^(^(|_1| - )_+ Ξ^|_1| S_z(1+)) (^(|_2| - )_+ Ξ^|_2| e_u^1/2) (Ξe_u^1/2) ^(|| - )_+ Ξ^|| S_r (1+) Finally, for q' >, we bound P_q'__H^jℓ by observing that, due to frequency truncation, only terms with q > q' - 2 can contribute. That is, from the formula ∫_^2 ×^2 P_≈q z_J^a(x - h_1) P_≈qþ_(x - h_2) K_qa^jℓ(h_1,h_2) dh_1dh_2 we see that the biconvolution only translates each factor in physical space and therefore modulates in frequency space. The integral above will still be localized to frequencies below 2^q+2 since the Fourier transform maps products to convolutions. 
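This localization is elementary (multiplication in x becomes convolution in frequency), and it can also be confirmed numerically. The short Python sketch below, purely illustrative and one-dimensional, builds two random fields with Fourier support in the annulus 2^{q-1} ≤ |ξ| ≤ 2^{q+1} and checks that their product has essentially no Fourier mass outside |ξ| ≤ 2^{q+2}; the grid size and the value q = 5 are arbitrary choices for the sketch.

import numpy as np

# Product of two factors with Fourier support in {2^{q-1} <= |xi| <= 2^{q+1}} has
# Fourier support in {|xi| <= 2^{q+2}}: multiplication in x is convolution in frequency.

N, q = 2048, 5
xi = np.fft.fftfreq(N, d=1.0 / N)
band = (np.abs(xi) >= 2 ** (q - 1)) & (np.abs(xi) <= 2 ** (q + 1))
rng = np.random.default_rng(0)

def random_band_limited():
    f = rng.standard_normal(N)
    return np.fft.ifft(np.where(band, np.fft.fft(f), 0.0)).real

f, g = random_band_limited(), random_band_limited()
prod_hat = np.fft.fft(f * g)
outside = np.abs(xi) > 2 ** (q + 2)
print("relative Fourier mass of f*g outside |xi| <= 2^{q+2}:",
      np.abs(prod_hat[outside]).max() / np.abs(prod_hat).max())   # ~ machine precision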
Therefore, we are able to bound P_q' _ _H^jℓ = ∑_q = q' - 3^∞P_q' K_qa^jℓ∗[ P_≈q __1 z_J^a, P_≈q __2 þ_] P_q' _ _H^jℓ ∑_q'-3^∞P_≈q __1 z_J __2 þ_ ∑_q'-3^∞2^-q __1 z_J __2 þ_ 2^-q' ^^(|| - )_+ Ξ^|| S_z e_u^1/2 (1+) 2^-q' ^^(|| - )_+ Ξ^|| (Ξe_u^1/2) S_r (1+). Proof for the Term, Part 2 (High-Low terms): Recall that the High-Low terms are defined as ^jℓ_HL = ∑_q P_≈ q_b^jℓ [ P_q+1_a T^b[z_J^a] P_≤ q-1þ_ - P_q+1_a T^b[þ_] P_≤ q-1 z_J^a]. Taking _ derivatives, we get _^jℓ_HL = ∑_q P_≈ q_b^jℓ [_ (P_q+1_a T^b[z_J^a] P_≤ q-1þ_ - P_q+1_a T^b[þ_] P_≤ q-1 z_J^a)]. Since P_≈ q_b_op 2^-q, we have _^jℓ_HL ≲∑_q 2^-q_ (P_q+1_a T^b[z_J^a] P_≤ q-1þ_ - P_q+1_a T^b[þ_] P_≤ q-1 z_J^a) ≲∑_q 2^-q(__1 P_q+1_a T^b[z_J^a]__2 P_≤ q-1þ_ + ∑_q 2^-q__1 P_q+1_a T^b[þ_]__2 P_≤ q-1 z_J^a). We can bound __2 P_≤ q-1þ_ by ^(|_2|-L)_+Ξ^|_2|e_u^1/2 using (<ref>). For the other terms, we split the sum into q< and q≥. For q<, we have __1 P_q+1_a T^b[z_J^a] ≲P_q+1_a T^b_op__1 z_J^a ≲ 2^q ^(|_1|-)_+Ξ^|_1|S_z(1+), Summing over q< yields a bound of ^(||-)_+Ξ^||+1e_u^1/2S_r(1+)log. For q≥, we use the Ċ^α norm in the definition of to get __1 P_q+1_a T^b[z_J^a] ≲ P_≈ q_a P_q+1 T^b [__1 z^a_J] _C^0 ≲ 2^q 2^-α q__1 z^a_J _Ċ^α ≲ 2^q 2^-α q^αΞ^|_1|^(|_1|-)_+S_z(1+(t)) Summing over q≥ yields ^(||-)_+Ξ^||+1e_u^1/2S_r(1+). Combining the two cases, we obtain the desired bound (<ref>)^(||-)_+Ξ^||+1e_u^1/2S_r(1+) ^(||-)_+Ξ^||τ^-1S_r(1+). For the term with þ_, __1 P_q+1_a T^b[þ_] Ξ^|_1|+1e_u^1/2^(|_1|+1-L)_+. Thus the bound on this term is ∑_q [2^-q]__1 P_q+1_a T^b[þ_][__2P_≤ q-1 z_0] ∑_q[2^-q] [Ξ^|_1|+1e_u^1/2^(|_1|+1-L)_+] [__2P_≤ q-1 z_0] ∑_q[2^-q] [Ξ^|_1|+1e_u^1/2^(|_1|+1-L)_+] [Ξ^|_2|^(|_2|-)_+ S_z (1+(t)) ] [Ξ^|_1|+1e_u^1/2^(|_1|+1-L)_+] [Ξ^|_2|^(|_2|-)_+ S_z (1+(t)) ] τ^-1Ξ^||^(||-)_+ S_z(1+(t)) This completes the proof of the C^0 bound on __HL. The bound for _LH and __LH follows similarly. We now prove the frequency-localized bounds. Applying P_q'_ for q' > q̂, we get P_q'_^jℓ_HL = ∑_q P_q'_ (P_≈ q_b^jℓ [ P_q+1_a T^b[z_J^a] P_≤ q-1þ_ - P_q+1_a T^b[þ_] P_≤ q-1 z_J^a]) = ∑_|q'-q|≤ 5 P_q' P_≈ q_b^jℓ [_ (P_q+1_a T^b[z_J^a] P_≤ q-1þ_ - P_q+1_a T^b[þ_] P_≤ q-1 z_J^a)]. We obtain P_q'_^jℓ_HL ≲∑_|q'-q|≤ 5 2^-q_ (P_q+1_a T^b[z_J^a] P_≤ q-1þ_ - P_q+1_a T^b[þ_] P_≤ q-1 z_J^a) ≲∑_|q'-q|≤ 5 2^-q(__1 P_q+1_a T^b[z_J^a]__2 P_≤ q-1þ_ +__1 P_q+1_a T^b[þ_]__2 P_≤ q-1 z_J^a). We have __1 P_q+1_a T^b[z_J^a] =P_q+1_a T^b[__1 z_J^a] P_q+1_a [__1 z_J^a] ≲ 2^-α q 2^q__1[z_J^a] ≲ 2^-α q2^q ^α^(|_1|-)_+Ξ^|_1|S_z(1+(t)), A similar calculation is done for __2P_≤ q-1 z^a_J. Thus P_q'_^jℓ_HL 2^-α q^ατ^-1^(||-)_+Ξ^||S_r(1+(t)) Again the LH term follows along similar lines. Now that we have estimated all of F_J = {w̅_J, z̅_J, r̅_J }, Proposition <ref> guarantees that (t) 1 is bounded. We may now use this bound in the estimates that follow. Using (<ref>) from Lemma <ref> we have _ u_J = _ T w_J ≤[_,]u_J + _ T w_J The first term is bounded by [_,]u_J __1 u_ __2 _i u_J 1_|_2| ≤|| - 1 1_|_1| ≥1 [^(|_1| - 1 + 1 - )_+ Ξ^|_1| e_u^1/2] [^(|_2| + 1 - )_+ Ξ^|_2| S_u (1+)] 1_|_1| ≥1 ^(|| - )_+ Ξ^|| S_u (1+) For the second term, we decompose w_J = w̅_J + w̃_J. 
By Lemma <ref> for w̅_J, the w̅_J part is bounded by T _w̅_J + O(^(|| -)_+Ξ^||+1 e_u^1/2 S_w (t)) ≲_w̅_J + ^-α_w̅_J + O(^(|| -)_+Ξ^||+1 e_u^1/2 S_w (t)) ≲^(|| -)_+Ξ^||τ^-1 S_w (1 + (t)) + ^-αsup_q > 2^α qP_q _w̅_J + O(^(|| -)_+Ξ^||+1 e_u^1/2 S_w (t)) ≲^(|| -)_+Ξ^||τ^-1 S_w (1 + (t)) + ^-α^(α-1)^(||+1 -)_+Ξ^||+1μ S_w (1 + (t)) ≲^(|| -)_+Ξ^||μ S_w (1 + (t)) = ^(|| -)_+Ξ^||τ^-1 S_u (1 + (t)) where in the third line we used Lemma <ref> again to bound T _w̅_J, in the fourth line we used our proof of Proposition <ref> for w_J to bound _w̅_J and Lemma <ref> to control _w̅_J. In the fifth line we used the proof of Proposition <ref> again to bound P_q ∇_w̅_J. Recall now that w̃_J = μ^-1 h_f[J](μ t) _j _ℓ A_J^jℓ. We have _ T^ℓw̃_J = I + II I = h_f[J]'(μt) T^ℓ_i _j _ A_J^ij II = μ^-1 h_f[J](μt) _ [T^ℓ_i _j A_J^ij ] The term I is the main term since here the advective derivative costs a factor of μ. We bound it by I T^ℓ_i _j P_≤q̅ _ A_J^ij + ∑_q ≥q̅ 2^-q _j _i _ A_J^ij Ξ^2 [^(|| - L)_+ Ξ^|| D_R,n] + Ξ^- ^(|| + 3 + - L)_+ Ξ^|| + D_R,n Ξ^2 ^(|| + 1 - )_+ Ξ^|| D_R,n ≤^(|| + 1 - )_+ Ξ^|| (Ξe_u^1/2) e_u^1/2 The term II can be bounded by II μ^-1 _ [, T^ℓ] _j _i A_J^ji + μ^-1 _ T^ℓ_i _j A_J^ij The second term is bounded by μ^-1 _ T^ℓ_i _j A_J^ij μ^-1 log _ _i _j A_J^ij + μ^-1 ^- _ _i _j A_J^ij μ^-1 log^(|| + 3 - L)_+ Ξ^|| + 2 D_R,n ^(|| - )_+ Ξ^|| (Ξe_u^1/2) e_u^1/2 where we recall μ = N^1/2Ξ^3/2 D_R^1/2 and (<ref>) to get the last estimate. The first term with the commutator can be bounded by the same quantity by the argument of Lemma <ref>. We omit the details. Recall the following bounds on r_J from Proposition <ref>: _ r_J ≲_ N^(|| -)_+ Ξ^|| S_r, For ρ_J^jℓ = ^jℓ_ab r_J^ab, we thus obtain _ ρ_J (log)_r_J_0+^-_r_J_Ċ^ (log)^(||-)_+ Ξ^||S_r+^- ^ S_r ^(||-)_+ Ξ^|| ^(||-)_+Ξ^||S_ρ(1+) ^(||-)_+Ξ^||S_ρ. To bound the advective derivative of ρ_J with a cost of τ^-1 rather than μ we must examine the evolution equation for r_J. The crucial point is that the forcing term A_J^jℓ vanishes on the support of χ̃'. Lemma <ref> gives _ ρ_J^jℓ = ^jℓ _a _b r_J^ab + (log) O(^(|| - )_+ Ξ^||+1 e_u^1/2 S_r) The term in the O(·) is acceptable since τ^-1∼logΞ e_u^1/2 and S_ρ = (log ) S_r. For the first term, let us define the order zero operators ^jℓ_ab = ^jℓ_a _b and also ^jℓ_cd = ^jℓ_ab^ab_c _d. We return to the equation for r_J to obain, for t ∈∂_tχ_k, _ ^jℓ_ab r_J^ab = _ ^jℓ_ab ^ab_c _d[ _e T^d þ_r_J^c e ] + _^jℓ_ab ^ab_c[ z_J^c, þ_] = ^jℓ_cd _ [_e T^d þ_r_J^c e ] + ^jℓ_ab _^ab_c[ z_J^c, þ_] Note that the latter equation has exactly the same form as the equation (<ref>) for r̅_J except for the additional zeroth order operator _ab^jℓ appearing in front of ^ab_c. Thus we can repeat the analysis that was done for r̅_J and use the inequality ^jℓ_ab _^ab_c[ z_J^c, þ_] log _^ab_c[ z_J^c, þ_] + ^- sup_q ≥ 2^q P_q _^ab_c[ z_J^c, þ_] . Doing so we conclude that the estimate for _ρ_J on χ̃'_k is the same as the estimate for _r̅_J, but with a loss of one power of log that comes from the presence of the additional zeroth order operator in front of _c^ab in (<ref>). We omit the remaining details. Let us now conclude the proof of Proposition <ref>. Let 0 ≤ || ≤ 1. Then _||^-1/2 w_J ∑_q ≤q̅ ||^-1/2 P_q _i _ z_J^i + ∑_q ≥q̅ ||^-1/2 P_q _ w_J ∑_q ≤q̅ 2^q/2 _ z_J + ∑_q ≥q̅ 2^-q/2 _ w_J Ξ^1/2 Ξ^|| [Ξ^-1 S_w] + Ξ^-1/2 Ξ^|| S_w ∼Ξ^-1/2 Ξ^|| S_w Ξ^|| D_R,n^1/2. The desired bound for w_n = ∑_k,nχ̃ w_k,n now follows. We first prove (<ref>). We have R_(n+1) = ∑_k χ̃_k'(t) ρ_(k,n) where ρ is a trace-free double anti-divergence of w_(k,n). 
We have R_(n+1)_0 ≤C/τmax_k ρ_(k,n)_0 ≤C/τ S_ρ = C/τ (log) μ^-1 D_R,n. This is ≤ D_R,n+1 if and only if (C/b)^2 N^4ηΞ^4ηe_u/e_R≤ N, which holds by the hypothesis (<ref>) in the Main Lemma. This proves (<ref>). The proof of (<ref>) follows from (<ref>) with r=0. The proof of (<ref>) follows from (<ref>) with r=1. §.§ Errors after the Newton step Upon completing the Newton step we have new errors described by (<ref>)-(<ref>). The error term in (<ref>) has already been estimated. Let us now estimate the terms in (<ref>)-(<ref>). We introduce the notation R_M,n and R_Q,n to denote special solutions to _j_ℓR_M,n^jℓ = T^ℓ(þ- þ_) _ℓw_n + T^ℓw_n _ℓ(þ- þ_) _j _ℓR_Q,n^jℓ = T^ℓw_n _ℓw_n + ∑_j = 1^n-1 (T^ℓw_n _ℓw_j + T^ℓw_j _ℓw_n) For 0 ≤ || + r ≤ L and 0≤ r≤ 1 _ _t^r ( þ- þ_) + _ _t^r ( u - u_) ≲(N Ξ)^|| (Ξ e_u^1/2)^r N^-1 e_u^1/2, Furthermore, for all 0 ≤ r ≤ 1 and all one has _ _t^r w_n + Ξ_ _t^r z_n (N Ξ)^|| μ^r S_w. Recall that þ_ = P_≤ q_þ and u_^ℓ = T^ℓþ_, where q_ is chosen such that 2^q_∼^-1 = N^1/LΞ. We begin by estimating the difference þ - þ_: þ - þ_ = þ - P_≤ q_þ = P_> q_þ. Using the Littlewood-Paley characterization of Hölder norms and the frequency energy level estimates, When we bound θ - θ_ in C^0, we need to be very precise and use the fact that the moments ∫ h^a⃗η_(h) dh = 0 all vanish for 0 < |a| ≤ L. This implies θ - θ_ϵ_0 ≲ϵ^L ∇^L θ_0 ≲e_u^1/2N. Now we move on to the |a| ≥ 1 case. Now we consider 1 ≤ || + r ≤ L. We use a trivial bound of _ _t^r (þ- þ_) _0 ≤_ _t^r þ + _ _t^r þ_ ≲Ξ^|| (Ξe_u^1/2)^r e_u^1/2 = Ξ^|| + 3r/2 D_u^(r+1)/2 Our goal is to bound this expression by (N Ξ)^|| (Ξ e_u^1/2)^r D_u^1/2/N = (N Ξ)^|| (N Ξ)^3r/2 D_R^r/2 D_u^1/2 / N Thus we must check that for 1 ≤ || + r ≤ L we have (D_u / D_R )^r/2 N^3 r / 2 + || - 1 = N^r/2 + ( r + || - 1) This lower bound follows from (<ref>), which implies N ≥ D_u / D_R. The same proof applies to u since we have assumed the same bounds on u as for þ. To prove (<ref>), recall that w_n = ∑_k χ̃_k w_k,n, with w_k,n = w̃_k,n + w̅_k,n, each of size bounded by S_w. In order to estimate __t^r w_k,n, it suffices to observe that: * Taking a spatial derivative never costs more than Ξ≤ N Ξ. * Taking an advective derivative of w̅_k,n costs at most τ^-1. * Taking a pure time derivative _t = - u_· of either w̃_k,n or w̅_k,n costs at most μ. Similar considerations hold for z_k,n, which has size S_z = Ξ^-1 S_w. For appropriately chosen R_M,n and R_Q,n, we have the estimates ∇_a⃗ R_M,n_0 ≲(N Ξ)^|| N^-1 D_R _ R_Q,n ≲(NΞ)^|| N^-1 D_R = (NΞ)^|| S_w^2 Ξ^-1 for 0 ≤ || ≤ L and _ _t R_M,n ≲(N Ξ)^|| τ^-1 N^-1 D_R _ _t R_Q,n ≲(NΞ)^|| τ^-1 N^-1 D_R = (NΞ)^|| τ^-1 S_w^2 Ξ^-1 for 0 ≤ || ≤ L-1. Here τ^-1 = (N Ξ)^3/2 D_R^1/2, and S_w is as documented in the table (<ref>), S_w = Ξ^2 μ^-1 D_R, μ = N^1/2Ξ^3/2 D_R^1/2. §.§ The quadratic terms R_Q,n. We begin by estimating an inverse double divergence of T^ℓ w_n ∇_ℓ w_n = ∇_ℓ(w_n T^ℓ w_n). It suffices to only estimate this term since the other terms in the equation for R_Q,n are similar. We must estimate a solution to ∇_j R^jℓ_Q,n = T^ℓ w_n w_n = ∑_q P_≤ q-1 w_J T^ℓ P_q+1 w_J + P_q+1 w_J P_≤ q-1 T^ℓ w_J + P_q+1 w_J T^ℓ P_q w_J + P_q w_J T^ℓ P_q+1 w_J + P_q+1 w_J T^ℓ P_q+1 w_J. Specifically, we achieve bounds for __t^rR_Q,n^ℓ = __t^r div^-1(w_n T^ℓ w_n). We decompose this as LH + HL + HH in the manner of (<ref>). For brevity, we omit n in the subscript. Terms R_QHL and R_QLH. The low-high terms are analogous to the high-low terms; thus, we concentrate our analysis on the latter. 
Its q'th frequency component is __t^r R_QHLq^jℓ = __t^r ℛ^jℓ_a P_≈q[P_q+1w_J T^a P_≤q-1w_J]. We select q̅ such that 2^q̅∼Ξ. Consider the case q ≤q̅. In this case we express w_J = ∇_i ∇_b r_J^ib in the rightmost copy of w_J and bound the operator norm of T P_≤ q-1∇∇. By doing so, we obtain: __t^r R_QHLq^jℓ = ℛ^jℓ_a P_≈q[ __1 _t^r_1 P_q+1 w_J __2 _t^r_2 T^a P_≤q-1 _i _b r_J^ib] (<ref>) ℛ^jℓ_a P_≈q __1 _t^r_1 P_q+1 w_J T^a P_≤q-1 _i _b __2 _t^r_2 r_J^ib 2^-q 2^2 q (N Ξ)^|_1| μ^r_1 S_w (N Ξ)^|_2| μ^r_2 S_r 2^q Ξ^-2 (N Ξ)^|| μ^r S_w^2 For high frequencies q ≥q̅, we first prove the preliminary bound _ _t^r P_≤q T^a w_J _ _t^r P_≤q̅ T^a w_J + ∑_q = q̅^∞ T^a P_q _ _t^r w_J P_≤q̅ T^a _i _b _ _t^r r_J^ib + ∑_q = q̅^∞2^-q _ _t^r w_J 2^2 q̅ Ξ^-2 (N Ξ)^|| μ^r S_w + 2^-q̅ Ξ(N Ξ)^|| μ^r S_w _ _t^r P_≤q T^a w_J (N Ξ)^|| μ^r S_w We now apply this estimate to (<ref>) ℛ^jℓ_a P_≈q __1 _t^r_1 P_q+1 w_J __2 _t^r_2 P_≤q-1 T^a w_J 2^-q [(N Ξ)^|_1| μ^r_1 S_w][(N Ξ)^|_2| μ^r_2 S_w] 2^-q (N Ξ)^|| μ^r S_w^2. Summing (<ref>) over q < q̅ and (<ref>) over q ≥q̅ yields (<ref>) for R_QHL. Term R_QHH. We decompose the high-high frequency interactions into three parts: those with the operators applied in the order P_q+1, P_q; those with the order reversed; and those involving both P_q+1. We begin with the third group of terms. We can consider the other two terms similarly, as a single group. Note that we don't consider them separately because we need to consider those two together in order to get an anti-divergence. For brevity, we only demonstrate the part with both operators being P_q+1. We need to bound __t^r K_q1^jℓ∗ [w_J,w_J] = __t^r ∫ P_q+1w_J(x-h_1) P_q+1 w_J(x-h_2) K_q1^jℓ(h_1,h_2) dh_1 dh_2. Note that we can distribute the derivatives inside the integral using the product rule. We first consider the case where q > q̅. (<ref>) K_q _L^1 __1_t^r_1 w_J __2_t^r_2 w_J 2^-q (N Ξ)^|| μ^r S_w^2 For q ≤q̅, we write w_J = _i z_J^i and integrate by parts to find K_q1^jℓ ∗[w_J,w_J] = ∫z_J^a(x - h_1) z_J^b(x - h_2) _a _b K_q^jℓ(h_1, h_2) dh_1 dh_2 _ _t^r (<ref>) ^2 K_q _L^1 __1 _t^r_1 z_J __2 _t^r_2 z_J 2^q (N Ξ)^|| μ^r S_z^2 ∼2^q Ξ^-2 (N Ξ)^|| μ^r S_w^2 Now we sum (<ref>) over q > q̅ and (<ref>) over q ≤q̅ to obtain (<ref>) for R_QHH. §.§ The mollification terms R_M,n. Recall that R_M,n solves ∇_j∇_ℓ R_M,n^jℓ = T^ℓ(þ - þ_) ∇_ℓ w_n + T^ℓ w_n ∇_ℓ(þ - þ_). Here, by definition, θ_ϵ := P_≤q̂θ. Thus θ-θ_ϵ only has frequencies above 2^q̂. The idea is to expand these terms and observe that every single one of the θ-θ_ϵ terms is of high frequency >2^q̂. Thus θ-θ_ϵ = P_>q̂θ. We have ∇_j R_M,n^jℓ = ∑_J(n)( (θ-θ_ϵ)T^ℓ w_J + w_J T^ℓ (θ-θ_ϵ) ) χ̃_k(t). For simplicity we write ∇_j R_M,n^jℓ≃(θ-θ_ϵ)T^ℓ w_J + w_J T^ℓ (θ-θ_ϵ) . From now on, we will suppress the χ̃_k(t) and summation notation. We have R^jℓ_M,n = R_MLH,n^jℓ + R_MHL,n^jℓ + R_MHH,n^jℓ. Taking spatial and time derivatives of the LH term, we have: ∂_t ∇_a⃗ R^jℓ_MLHq,n = ∼∑∂_t ^jℓ_a P_≈ q [∇_a⃗_1 P_≤ q-1 T^a P_>q̂θ∇_a⃗_2 P_q+1 w_J] = ∼∑^jℓ_a P_≈ q [∂_t ∇_a⃗_1 P_≤ q-1 T^a P_>q̂θ∇_a⃗_2 P_q+1 w_J] + ∼∑^jℓ_a P_≈ q [∇_a⃗_1 P_≤ q-1 T^a P_>q̂θ∇_a⃗_2 P_q+1∂_t w_J] Taking spatial and time derivatives of the HL term, we have: ∂_t ∇_a⃗ R^jℓ_MHLq,n = ∼∑∂_t ^jℓ_a P_≈ q [∇_a⃗_1P_q+1P_>q̂θ T^a P_≤ q-1∇_a⃗_2w_J] = ∼∑^jℓ_a P_≈ q [∂_t ∇_a⃗_1P_q+1P_>q̂θ T^a P_≤ q-1∇_a⃗_2(w_J)] + ∼∑^jℓ_a P_≈ q [∇_a⃗_1P_q+1P_>q̂θ T^a P_≤ q-1∂_t ∇_a⃗_2 w_J]. We can obtain a similar expression for the derivatives of R_MHHq,n, which for conciseness we omit. The term R_MHH. 
We have ∇_j R^jℓ_MHHq,n = P_q+1(θ - θ_ϵ) T^ℓ P_q w_J + P_q w_J T^ℓ P_q+1 (θ - θ_ϵ). We must treat both terms together (rather than only one of the two terms at a time), since there is no anti-divergence if these two terms are separated from each other. We have _t^r_R^jℓ_MHHq,n = ∼∑∫_t^r_1__1(θ(x-h_1) - θ_ϵ(x-h_1))_t^r_2__2 w_J(x-h_2) K_q^jℓ(h_1,h_2) dh_1 dh_2 =∼∑∫_t^r_1__1(P_≈ q[θ-θ_ϵ](x-h_1)) _t^r_2__2w_J(x-h_2) K_q^jℓ(h_1,h_2) dh_1 dh_2 We can bound each term as follows: 1. For the first term, we have: ∫∂_t ∇_a⃗_1(P_≈ q[θ-θ_ϵ]) ∇_a⃗_2 w_J K_q^jℓ dh_1 dh_2_0 ≲K_q^jℓ_1 ∂_t ∇_a⃗_1(P_≈ q[θ-θ_ϵ])_0 ∇_a⃗_2 w_J_0 ≲ 2^-q [(NΞ)^|_1| (Ξe_u^1/2) N^-1 e_u^1/2 ] Ξ^|a⃗_2| S_w 2. For the second term, we have: ∫∇_a⃗_1(P_≈ q[θ-θ_ϵ]) ∂_t ∇_a⃗_2 w_J K_q^jℓ dh_1 dh_2_0 ≲K_q^jℓ_1 ∇_a⃗_1(P_≈ q[θ-θ_ϵ])_0 ∂_t ∇_a⃗_2 w_J_0 ≲ [2^-q] [(NΞ)^|a⃗_1|N^-1 e_u^1/2] [Ξ^|a⃗_2|μ S_w] The sum of these terms is bounded by [2^-q] [(NΞ)^|a⃗_1|N^-1 e_u^1/2] [Ξ^|a⃗_2| S_w] where is the inverse timescale := (N Ξ)^3/2 D_R^1/2. Now, summing over q > q̂-1, we get ∑ 2^-q∼^-1 and: ∂_t ∇_a⃗ R^jℓ_MHH,n∑_q ∂_t ∇_a⃗ R^jℓ_MHHq,n_0 ≲^-1 N^|_1|-1Ξ^|| e_u^1/2 S_w More generally, ∂_t^r ∇_a⃗ R^jℓ_MHH,n^-1τ^-r N^|_1|-1Ξ^|| e_u^1/2 S_w The terms R_MHL and R_MLH. Our first group of terms is R_MHL1,n^jℓ = ∑_q ≥q̂ - 1^jℓ_a P_≈ q[ P_≤ q-1 T^a(θ-θ_ϵ) P_q+1 w_J ]. As usual, we add a subscript q to label each term in the sum. So, for R_MHL1,n, we'll call the individual pieces R_MHL1q,n. R_MHL1q,n := ^jℓ_a P_≈ q[ P_≤ q-1 T^a(θ-θ_ϵ) P_q+1 w_J ] For 0 ≤ r + || ≤ L, we have ∂_t^r_R_MHL1q,n^jℓ_0 P_≈ q_op_t^r_1__1 P_≤ q-1 T (θ-θ_ϵ)_0 _t^r_2__2 P_q+1 w_J_0 [2^-q] [τ^-r_1(NΞ)^|_1|N^-1e_u^1/2] [μ^r_2Ξ^|_2|S_w] Thus ∂_t^r_R_MHL1,n^jℓ_0 ^-1τ^-r_1N^|_1|-1Ξ^|| e_u^1/2μ^r_2 S_w We would like this to be bounded by CD_R/N, which is indeed the case. One can check this by recalling that S_w = μ^-1Ξ^2 D_R = N^-1/2Ξ e_R^-1/2 D_R. The bounds for _t^r _ R^jℓ_MLHq,n are similar to the bounds for _t^r _ R^jℓ_MHLq,n. § CONVEX INTEGRATION Define the index set ℐ := F ××{1, …, }. Each I ∈ has the form I = (f, k, n). Set = ⌈ N Ξ⌉. The oscillatory wave has the form = ∑_I _I, _I = g_[f,k,n](μt) P_I [ e^i ξ_I þ_I ] þ_I = ^1/2 _I, _(f,k,n) = χ_k e_n^1/2(t) _f(p_I) p̌_I = (M^jℓ - R_(n)^jℓM_e D_R, ξ̌_k) p_I = (M^jℓ - R̃_(n)^jℓM_e D_R, ξ_k) where P_I is a frequency localization operator whose symbol is a bump function adapted to the region {ξ :  | ξ - f | ≤/100 }. Each wave has a conjugate wave I̅ with _I̅ = _I and ξ_I̅ = - ξ_I. We will use mollification to define R_(n). We postpone for now the necessary estimates on R_(n) and ξ_I that ensure the construction is well-defined. In particular, we will have to show that R_(n) and ξ_I do not escape the domains of _f and B^jℓ. Notice that, by construction and the disjointness of supports of the functions g_[f,k,n], we have the crucial disjointness property _I ∩_J = ∅ I ∉{J, J̅ } Now let þ̃_ = þ_ + w = þ_ + ∑_n=1^ w_n and ũ_^ℓ = T^ℓþ̃_ = u_^ℓ + T^ℓ w. We obtain the following estimates for ũ_. _ ũ_ ≲_ ^(|| - )_+ Ξ^|| e_u^1/2, _ ũ_ ≲_ ^(|| + 1 - )_+ Ξ^|| (Ξe_u^1/2) e_u^1/2, Notice that these are the same estimates that hold for u_ except that the losses of powers of occur earlier. These bounds follow from (<ref>). (More precisely, the correction to the velocity field also involves the time cutoffs χ̃_k.) We will also need a bound on the advective derivative of ũ_ along its own flow. Setting = _t + ũ_·, the following bound suffices: _ ũ_ ≲_ (Ξe_u^1/2) N^(|| + 1 - L)_+/L Ξ^|| e_u^1/2. 
This bound is a corollary of (<ref>)-(<ref>) and the following Lemma, which is generally useful when converting bounds between different time and advective derivatives. Let D_t be one of the operators D_t ∈{_t, , }. Consider any inverse timescale ζ≥Ξ e_u^1/2. Define the weighted norm of a smooth tensor field F by H_ζ[F] = max_0 ≤r ≤1 max_ 0 ≤|| + r ≤L' _ D_t^r F^(|| + r - )_+ Ξ^|| ζ^r If ζ is omitted in the notation, set H[F] = H_Ξ e_u^1/2. Then there exist constants depending only on L' such that H̃_ζ[F] ≲H̅_ζ[F] ≲ H^_t_ζ[F] ≲H̃_ζ[F] . Also, there is a product rule H_ζ[F G] ≲_L'H_ζ[F] H_ζ[G]. We show only that H^_t_ζ[F] ≲H̃_ζ[F] as the other directions are similar _ _t F = _ F - _[ ũ_^i _i F ] _ _t F ≲^(|| + 1 - )_+ Ξ^|| ζH̃[F] + ∑_|_1| + |_2| = || __1 ũ_ __2 _i F ≲^(|| + 1 - )_+ Ξ^|| ζH̃[F] + ∑_|_1| + |_2| = || ^(|_1| - )_+ Ξ^|_1| e_u^1/2 ^(|_2| + 1 - )_+ Ξ^|_2| + 1 H̃[F] We now apply the counting inequality (x - z)_+ + (y-z)_+ ≤ (x + y - z)_+, x, y, z ≥ 0 with x = |_1|, y = |_2| + 1, z = ≥ 0, and recall ζ≥Ξ e_u^1/2, to obtain _ _t F ≲^(|| + 1 - )_+ Ξ^|| ζH̃[F], which is the desired estimate after dividing through by the prefactor of H̃[F]. We will also use the following chain rule and product rule Consider the operators D_t ∈{_t, , } and let F be C^∞. Let G be a C^∞ function defined on a compact neighborhood of the image of F. Then H_ζ[G(F)] (1 + H_ζ[F])^L' H_ζ[F_1 F_2] H_ζ[F_1]H_ζ[F_2] with implicit constants depending on L'. We compute for 0 ≤ r ≤ 1, 0 ≤ r + || ≤ L' _ D_t^r G(F) = ∑_k = 0^|| + r ∼∑ ^k G(F) ∏_i=0^k __i D_t^r_i F where the sum is over appropriate indices such that ∑_i |_i| = || and ∑_i r_i = r. Then _ D_t^r G(F) ∑_k = 0^|| + r ^k G ∏_i=0^k __i D_t^r_i F ∑_k = 0^|| + r ∏_i=0^k [^(|_i| + r_i - )_+ Ξ^|_i| ζ^r_i H_ζ[F] ] ^(|| + r - )_+ Ξ^|| ζ^r (1 + H_ζ[F])^L', which is the desired bound. The product rule can be proven by direct computation, but it can also be deduced from the Chain Rule as follows. The vector-valued function (F_1H_ζ[F_1], F_2H_ζ[F_2]) takes values in { (u,v) : max{u , v }≤ 1 } and G(u,v) = uv is smooth in a compact neighborhood of this set. We then have by the chain rule H_ζ[F_1 F_2] = H_ζ[F_1] H_ζ[F_2] H_ζ[F_1H_ζ[F_1] F_2H_ζ[F_2] ] H_ζ[F_1] H_ζ[F_2] ( 1 + H_ζ[F_1H_ζ[F_1]] + H_ζ[F_2H_ζ[F_2]] )^L' H_ζ[F_1] H_ζ[F_2]. For D_t ∈{_t, , } define the prime weighted norm H'_ζ[F] = max_0 ≤r ≤1 max_0 ≤|| + r ≤L' _D_t F^(|| + r- (- 1))_+ Ξ^||ζ^r Then the natural analogues of Lemma <ref> and Proposition <ref> hold for the prime weighted norms as well. We omit the proof, which is essentially the same as that of Lemma <ref> and Proposition <ref>. We are now ready to define R̃_(n). We choose to do this by mollification along the flow rather than a standard mollification in time so that we will be able to borrow estimates that have already been established. (The other benefit of mollifying along the flow is that it would apply to 2D Euler and to the mSQG equation.) Choose the time scale _t = (Ξe_u^1/2)^-1 (D_u/D_R)^-1/2 N^-1/2 and set R̃_(n) = η__t ∗_ΦR_= ∫R_(Φ_s(t,x)) η__t(s) ds where η__t(s) = _t^-1η(s/_t) is a standard mollifying kernel supported in |s| < _t and where Φ_s(t) is the flow map of _t + ũ_·, which is the unique solution to Φ_s(t,x) = (t+s, Φ_s^i(t,x)), i = 1, 2 dds Φ_s^i = ũ_^i(Φ_s(t,x)) i = 1, 2 Φ_0(t,x) = (t,x). 
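Mollification along the flow is also easy to realize numerically, which may help demystify the definition: one simply averages the field over short forward and backward flow times of the drift. The Python sketch below does this for a toy steady drift u and a toy scalar R(t,x); both of these, and the mollification scale eps_t = 0.1, are ad-hoc stand-ins chosen only to illustrate the mechanism (the actual construction uses the velocity ũ_ε and the regularized stress, mollified at the time scale ε_t fixed above).

import numpy as np

# Toy illustration of mollification along the flow,
#   R_tilde(t,x) = int R(Phi_s(t,x)) eta_{eps_t}(s) ds,
# where Phi_s is the flow of d_t + u . grad.

def u(x):
    return np.array([np.sin(x[1]), -np.sin(x[0])])        # divergence-free toy drift

def R(t, x):
    return np.cos(x[0] - t) * np.sin(x[1] + 0.5 * t)       # toy scalar "stress"

def flow(t, x, s, steps=50):
    """Phi_s(t,x): follow the characteristic of d_t + u . grad for time s (RK4)."""
    h, y = s / steps, np.array(x, dtype=float)
    for _ in range(steps):
        k1 = u(y); k2 = u(y + 0.5 * h * k1)
        k3 = u(y + 0.5 * h * k2); k4 = u(y + h * k3)
        y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return t + s, y

def R_mollified_along_flow(t, x, eps_t=0.1, n=200):
    s = np.linspace(-eps_t, eps_t, n, endpoint=False) + eps_t / n     # midpoints, |s| < eps_t
    w = np.exp(-1.0 / (1.0 - (s / eps_t) ** 2))                        # even bump weights
    w /= w.sum()                                                       # unit total mass
    return sum(wi * R(*flow(t, x, si)) for wi, si in zip(w, s))

t0, x0 = 0.3, np.array([1.0, 2.0])
print("R               :", R(t0, x0))
print("R~ (along flow) :", R_mollified_along_flow(t0, x0))
# The two values differ by roughly eps_t times the advective derivative of R, which is
# the trade-off this regularization is designed to exploit.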
The estimates we inherit from this construction are (see <cit.>) R_(n)-R̃_n ≲_t R_(n) ≲N^-1/2 (D_u/D_R)^-1/2 D_R _R̃_n ≲_ ^(|| -L)_+ Ξ^|| D_R _ R̃_n ≲_ (Ξe_u^1/2) ^(|| + 1 - L)_+ Ξ^|| D_R _ ^2 R̃_n ≲_ _t^-1 (Ξe_u^1/2) ^(|| + 1 - L)_+ Ξ^|| D_R We define the phase functions ξ_I to solve (_t + ũ_^j _j ) ξ_I = 0 ξ_(f,k,n)(kτ, x) = ξ̌_(f,k)(kτ, x) = f ·x Notice that ξ_I and ξ̌_I have the same initial data but differ in terms of which vector field transports them. We obtain the following estimates for ξ_I: The phase functions satisfy the following bounds on the interval [t(I) - τ, t(I) + τ] _ ξ_I ≲_ ^(|| + 1 -)_+ Ξ^||, _ ξ_I ≲_ ^(|| + 1 -)_+ Ξ^|| (Ξe_u^1/2), _ ^2 ξ_I ≲_ ^(|| + 2 -)_+ Ξ^|| (Ξe_u^1/2)^2 A proof (based on Gronwall's inequality for a weighted norm) can be found in <cit.>. We will need a good estimate on how close the phase gradients are to those that were used in the Newton step. The equation we need to analyze is (_t + u_^j _j) (ξ̌_J - ξ_J) = T^j w _j ξ_J (_t + u_^j _j) (_a ξ̌_J - _a ξ_J) = - _a(T^j w _j ξ_J) - _a u_^j _j(ξ̌_J - ξ_J) Again, the initial data for ξ̌_J and ξ_J are equal at time t(I). From this equation, we use the fact that the time scale τ≤ (log)^-1 (Ξ e_u^1/2)^-1, and apply the method of characteristics and Gronwall to obtain ξ̌(t) - ξ_I(t) _0 ≤(Ξe_u^1/2) ∫_t(I)^t ξ̌(s) - ξ_I(s) _0 ds + τ( (T^j w _j ξ_J) ) ξ̌(t) - ξ_I(t) _0 ≲e^C Ξe_u^1/2 τ τΞS_u ≲N^-1/2 (_u/_R)^-1/2. In particular, if N ≥Ĉ is large enough we have that ξ_I take values in the domain of the functions _f and B^jℓ, so that the construction of the convex integration wave will be well-defined. We obtain the following bounds on the amplitudes _I defined in (<ref>) stated in the following Proposition _ ^r _I ≲^(|| + 1 - )_+ Ξ^|| τ^-r D_R^1/2 0 ≤r ≤1 _ ^2 _I ≲^(|| + 2 - )_+ Ξ^|| _t^-1 (Ξe_u^1/2 ) D_R^1/2 We only sketch the main idea in the proof since the full proof is a by now standard exercise in the chain rule and product rule using Propositions <ref> and <ref>. Consider the case || = 0. Define Ř_n = R̃_n/(M_e D_R) so that Ř_n has size 1. By abuse of notation, we think of _f as a function of Ř_n and ξ_k. _(f,k,n) = χ_k e_n^1/2(t) _f( Ř_n, ξ_k) ^r _(f,k,n) = ∼∑ _t^r_1χ_k _t^r_2 e_n^1/2 [ _f ^r_3 Ř_n + _f ^r_3 ξ_I + ] To estimate _(f,k,n) note that the cutoff has size 1, e_n^1/2(t) has size D_R,n^1/2≤ D_R^1/2, and _f(Ř, ξ) has size 1. Upon taking || spatial derivatives, the factor of ^(|| + 1 - )_+ appears when all derivatives hit ξ. Now consider the case of r = 1 advective derivatives. The first advective derivative costs Ξ e_u^1/2 when it hits Ř or ξ_I, but carries a larger cost of τ^-1 when it hits χ_k(t) or e_n^1/2(t) from the Newton step, hence the estimate (<ref>). On the other hand, upon taking r = 2 advective derivatives, the largest term in (<ref>) comes from ^2 Ř_t^-1Ξ e_u^1/2. Indeed, for the other terms the advective derivatives cost at most τ^-1 each and τ^-2 = b^-2 (log)^2 (Ξe_u^1/2)^2 (Ξe_u^1/2)^2 (D_u/D_R)^1/2 N^1/2 = _t^-1 (Ξe_u^1/2) As for the spatial derivatives, note that factors of appear only after Ř or ξ_I or ξ_I have been differentiated - 1 times, or after ^2 ξ_I has been differentiated - 2 times. Having estimated the phase functions we can expand out the wave _I using the Microlocal Lemma from <cit.>, which shows via a Taylor expansion that the high frequency convolution operator P_I in the definition of _I and the convolution operator T^ℓ P_I in the definition of T^ℓ_I both act to leading order like multiplication operators. 
_I = g_[I](μt) e^i ξ_I ( þ_I + þ_I ) T^ℓ_I = g_[I](μt) e^i ξ_I ( u_I^ℓ+ u_I^ℓ) u_I^ℓ = m^ℓ(ξ_I) þ_I The estimates we inherit for the lower order term þ_I and u_I mimic those of þ_I ∼^-1 [þ_I + þ_I ^2 ξ_I ] u_I ∼^-1 [u_I + u_I ^2 ξ_I ]. In particular, they gain a smallness factor of N^-1. The calculation in <cit.> gives _ ^r þ_I + _ ^r u_I ≲^1/2 N^-1 ^(|| + 1 + r - )_+ Ξ^|| (Ξe_u^1/2)^r D_R^1/2, for 0 ≤ r ≤ 1, and _ ^2 þ_I + _ ^2 u_I ≲^1/2 N^-1 ^(|| + 2 - )_+ Ξ^|| (Ξe_u^1/2) _t^-1 D_R^1/2. § ESTIMATING THE CORRECTIONS Here we gather estimates for the corrections _I = g_[I](μt) P_I[ e^i ξ_I ^1/2 _I ] T^ℓ_I = g_[I](μt) T^ℓP_I[ e^i ξ_I ^1/2 _I ] Since g_[I] has size 1 and both P_I 1 and T^ℓ P_I 1, we immediately obtain from (<ref>) that _I + T^ℓ_I ^1/2 _I ^1/2 D_R^1/2 Since P_I = P_≈ P_I and T^ℓ P_I = P_≈ T^ℓ P_I both localize to frequency , this bound implies __I + _ T^ℓ_I _ ^||+1/2 D_R^1/2 Writing ||^-1/2 P_I = [||^-1/2 P_≈] P_I, where ||^-1/2 P_≈^-1/2, we also obtain _ ||^-1/2 _I _ ^|| D_R^1/2, which finishes the verification of the claim (<ref>). Now define the vector field u^ℓ = ũ^ℓ + (u^ℓ - u_^ℓ) + T^ℓ and the associated advective derivative D_t = _t + u · For D_t ∈{_t, , , D_t } define the final weighted norm H^*[F] = max_0 ≤r ≤1 max_0 ≤|| + r ≤L _ D_t^r F (NΞ)^|| τ^-r where we recall τ^-1 = (N Ξ)^3/2 D_R. The final weighted norms are comparable up to implicit constants H^*[F] H^*[F] H̅^*[F] H̃^*[F] H^*[F] Furthermore there is a product rule H^*[F G] H^*[F] H^*[G]. Since all the inequalities are proven similarly, we only give the proof of H^*[F] H^*[F], which contains all the needed ideas. We have, for 0 ≤ || ≤ L - 1, _ D_t F = _ _t F + _ ( ũ^i _i F ) + _[( u^i - u_^i) _i F] + _[T^i _i F] _ D_t F_ _t F + ∼∑ ( __1 ũ + __1 (u - u_) + __1 T ) __2 F 1_|_2| < |_1| (N Ξ)^|| τ^-1 H^*[F] + ∼∑ (N Ξ)^|_1| ( e_u^1/2 + e_u^1/2N + (N Ξ)^1/2 D_R^1/2 ) (N Ξ) ^|_2| + 1 H^*[F] (N Ξ)^|| τ^-1 H^*[F] In the last line we used (<ref>), Lemma <ref>, Lemma <ref> and (<ref>). We obtain the following estimate for the new velocity field H^*[u] (N Ξ)^1/2 D_R^1/2 H^*[þ] (N Ξ)^1/2 D_R^1/2 We have H^*[u] ≤H^*[ũ] + + H^*[u - u_] + H^*[T[]]. By Proposition <ref> it suffices to bound _ ^r u_ Ξ^|| (Ξe_u^1/2)^r e_u^1/2 _ _t^r ( u - u_) (N Ξ)^|| τ^-r (e_u^1/2 / N ) _ _t^r T[] (N Ξ)^|| τ^-r ( N Ξ)^1/2 D_R^1/2 since the right hand side of each of these inequalities is bounded by the right hand side of (<ref>), which is our goal estimate. The first of these bounds follows from (<ref>), the second from Lemma <ref>, and the third from the following calculation, which establishes the case || = 0: _t T^ℓ[þ] = _t T^ℓP_I[ g_[I](μt) e^i ξ_I þ_I ] _t T[þ] ∼∑ μ^r_1 [_tξ_I]^r_2 _t^r_3 þ_I (<ref>) ∼∑ μ^r_1 [ ũ ξ_I ]^r_2 τ^-r_3 H^*[þ_I] ∼∑ μ^r_1 [ ũ ξ_I ]^r_2 τ^-r_3 H^*[þ_I] (<ref>),(<ref>),(<ref>) ∼∑ μ^r_1 [ (N Ξ) (ΞD_u)^1/2 ]^r_2 τ^-r_3 H̃^*[þ_I] (<ref>) ∼∑ τ^-r_1 τ^-r_2 τ^-r_3 H̃^*[þ_I] τ^-1 H̃^*[þ_I] (<ref>) τ^-1 ^1/2 D_R^1/2 Our desired bound on __t T[] follows from the fact that the operator T P_I = P_≈ T P_I localizes to frequency . The bounds for þ follow the same argument, but are easier as the operator T is not involved. § THE ERROR TERMS IN THE CONVEX INTEGRATION STEP Recall that prior to the convex integration we have _t þ_+ T^j þ__j þ_ = _j _ℓ[S_()^jℓ + P_()^jℓ + R_()^jℓ] S_() = - ∑_I = (f,k,n) g_[I]^2(μt) e_n(t) χ_k^2 _f^2( p̌_I) B^jℓ(ξ̌_k) The term P_() is the “acceptable” error from the Newton steps. 
When we construct þ = þ_ +, we get the following error terms _t þ + u^j _j þ = _j _ℓR^jℓ R^jℓ = R_T^jℓ + R_H^jℓ + R_M^jℓ + R_S^jℓ + P_()^jℓ + R_()^jℓ _j _ℓR_T^jℓ = _t + ũ_^a _a + T^a _a þ̃_ _j _ℓR_H^jℓ = ∑_I T^a _I _a _I _j _ℓR_M^jℓ = T^j[(þ- þ_)] _j + T^j _j (þ- þ_) _j _ℓR_S^jℓ = ∑_I T^j _I _j _I̅ + T^j _I̅ _j _I - _j _ℓ[ g_[I]^2 e_n(t) χ_k^2 _f^2(p̌_I) B^jℓ(ξ̌_I)/2 ] Note that there are no terms where _I interacts with _J for J ∉{ I, I̅}. This is the case thanks to (<ref>). The fact that self-interaction terms such as (<ref>) are well-controlled was first observed in <cit.>. The term R_S is the “flow error”. Using the divergence form principle of Section <ref>, we can write T^j _I _j _I̅ + T^j _I̅ _j _I = _j [ T^j _I _I̅ + T^j _I̅ _I] = _j _ℓK_^jℓ∗[_I, _I̅] where K_^jℓ is a specific trace free kernel. According to the bilinear microlocal lemma of <cit.>, we can express the action of a frequency-localized bilinear convolution kernel on two high frequency inputs as being K_^jℓ ∗[_I, _I̅] = K_^jℓ(ξ_I, -ξ_I) |þ_I|^2 + B_I^jℓ where B_I^jℓ is an explicit error term. From the derivation of K_^jℓ in frequency space (see Appendix Section <ref>), we have that K_^jℓ( p, - p) = ^-1 B^jℓ(p) for p in an O(1) neighborhood of the initial data for ξ_I, where B^jℓ(p) = -i(^j m^ℓ(p) + ^ℓm^j(p)), and m^ℓ(p) = i ^ℓ a p_a |p|^-1 is the SQG multiplier. Putting these together, we arrive at the following expression for the conjugate interactions: T^j _I _j _I̅ + T^j _I̅ _j _I = _j _ℓ[ g_[I]^2(μt)[ e_n(t) χ_k^2 _f^2(p_I) B^jℓ(ξ_I) + B_I^jℓ ] ], where B_I^jℓ has already been estimated in <cit.> (in particular, it has size D_R/N). We can then write the R_S term as R_S^jℓ = ∑_I g_[I]^2(μt)[ e_n(t) χ_k^2 (_f^2(p_I) B^jℓ(ξ_I)- _f^2(p̌_I) B^jℓ(ξ̌_I)) ]/2 + B_I^jℓ ] = ∑_I (R_SI^jℓ + g_[I]^2(μt) B_I^jℓ) We bound this error using (<ref>) and our other estimates for the construction components. § ESTIMATING R_S We now begin our estimates on the stress errors. We rely the following propositions: Let D_t ∈{_t, , , D_t } and F be smooth. Let G be a C^∞ function defined on a compact neighborhood of the image of F. Then H^*[G(F)] (1 + H^*[F])^L We compute for 0 ≤ r ≤ 1, 0 ≤ r + || ≤ L _ D_t^r G(F) = ∑_k = 0^|| + r ∼∑ ^k G(F) ∏_i=0^k __i D_t^r_i F where the sum is over appropriate indices such that ∑_i |_i| = || and ∑_i r_i = r. Then _ D_t^r G(F) ∑_k = 0^|| + r ^k G ∏_i=0^k __i D_t^r_i F ∑_k = 0^|| + r ∏_i=0^k[ (N Ξ)^|_i| τ^-r_i H[F] ] ( N Ξ)^|| τ^-r (1 + H[F])^|| + r, which is the desired bound. For the following proposition, recall that there exists a ball of radius K about (0, (2,1), (1,2)) such that the range of (R_(n)/(M_e D_R,n), ξ̌_I) and also the range of (R̃_(n)/(M_e D_R,n), ξ̌_I) are guaranteed to lie in this ball. Let G be a C^∞ function defined on the closed ball of radius K about (0, (2,1), (1,2)). 
Then H̅^*[G(R_(n)/(M_e D_R,n), ξ̌_I) - G(R̃_(n)/(M_e D_R,n), ξ̌_I) ] (D_u/D_R)^-1/2 N^-1/2 The C^0 bound is given by G(R_(n)/D_R, ξ̌_I) - G(R̃_(n)/D_R, ξ̌_I) G [R_(n) - R̃_(n) / D_R,n + ξ_I - ξ̌_I ] (<ref>),(<ref>) 1 ·[ (D_u/D_R)^-1/2 N^-1/2 ] For 1 ≤ || + r ≤ L we apply the triangle inequality, the comparability of weighted norms, and the chain rule for weighted norms __t^r [ G(R_(n)/D_R, ξ̌_I) - G(R̃_(n)/D_R, ξ̌_I) ] ^(|| + r +1 - )_+Ξ^|| (Ξe_u^1/2)^r ( H'[ G(R_(n)/D_R, ξ̌_I)] + H'[G(R̃_(n)/D_R, ξ̌_I) ] ) ^(|| + r + 1- )_+Ξ^|| (Ξe_u^1/2)^r ( H̅'[ G(R_(n)/D_R, ξ̌_I)] + H̃'[G(R̃_(n)/D_R, ξ̌_I) ] ) ^(|| + r + 1 - )_+Ξ^|| (Ξe_u^1/2)^r ( 1 + H̅'[ R_(n)/D_R, ξ̌_I] + H̃'[R̃_(n)/D_R, ξ̌_I] )^L ^(|| + r + 1 - )_+Ξ^|| (Ξe_u^1/2)^r ·1 = ^(|| + r +1- )_+ Ξ^|| (Ξ^3/2 D_u^1/2)^r To confirm (<ref>), the right hand side must be bounded by (N Ξ)^|| [ (N Ξ)^3/2 D_R^1/2]^r (D_u / D_R)^-1/2 N^-1/2 = N^(|| + r - 1) + r2 + 12 (D_u / D_R)^-r2 - 12 Ξ^|| (Ξ^3/2 D_u^1/2)^r. This bound now follows from N ≥ D_u / D_R and (|| + r - 1) ≥ (|| + r + 1 - )_+ (since = L -3 ≥ 4). We now estimate R_SI with the product rule and Proposition (<ref>) to obtain H^*[R_SI] H^*[g_[I]^2(μt)] H^*[e_n(t)] · H^*[_f^2(R̃_(n)/D_R, ξ_I) B^jℓ(ξ_I) - _f^2(R_(n)/D_R, ξ̌_I) B^jℓ(ξ̌_I) ] 1 ·D_R ·(D_u/D_R)^-1/2 N^-1/2. The bounds proved in <cit.> give for 0 ≤ r ≤ 1 imply that _ ^r B_I _ (N Ξ)^|| N^-1 ( Ξe_u^1/2)^r D_R. (Note that in our context we choose B_ = 1 and the τ defined in <cit.> is (Ξ e_u^1/2)^-1 up to a constant.) Hence we conclude H̃^*[ g_[I]^2(μt) B_I ]H̃^*[ g_[I]^2(μt) ]H̃^*[ B_I ] 1 ·N^-1D_R. Thus our final bound on the stress error is H^*[R_S] (D_u/D_R)^-1/2 N^-1/2 D_R + N^-1D_R (D_u/D_R)^-1/2 N^-1/2 D_R, since N ≥ D_u/D_R. § NONSTATIONARY PHASE The transport term and the high-frequency interference terms are both high frequency and our treatment involves nonstationary phase, which is a by now a standard tool in convex integration arguments. Interestingly, this application of nonstationary phase and the power loss it gives rise to can be avoided (see Section <ref>). We first introduce a weighted norm. The nonstationary phase weighted norm of F is H̃_M”[F] = max_r ≤1 max_0 ≤|| + r ≤M _ ^r F(N^1/2 Ξ)^|| τ^-r The H̃_M” norm satisfies the usual product rule and chain rules. Also, one has H̃_M-1”[^-1 _i F] N^-1/2 H̃_M[F] We omit the proof of the product and chain rules, since they are almost identical to the proof of Propositions <ref> and <ref>. As for (<ref>), the bound on spatial derivatives is immediate from the definition, so we need only bound _ _i F = _ _i F + __1 _i ũ_^b __2 _b F where the sum ranges over |_1| + |_2| = || ≤ M - 1. We bound this sum by _ _i F (N^12 Ξ)^|| + 1 τ^-1 H̃”_M[F] + (N^12Ξ)^|_1| + |_2| + 1 (Ξe_u^1/2) H̃”_M[F] (N^12 Ξ)^|| + 1 τ^-1 H̃”_M[F] Dividing by ∼ N Ξ yields the result. For any D>0 there is a constant C_D so that the following holds. Whenever G = e^i ξ_I g has integral 0 there is a traceless symmetric tensor field Q^jℓ that satisfies _j _ℓ Q^jℓ = G and the bound H̃^*[Q] ≤C_D ((N Ξ)^-2 + N^-D/2) H̃”_2D + L[g] Consider the function q^jℓ(p) = A |p|^-4p^j p^ℓ + B |p|^-2^jℓ. Then if A and B solve the equations A + B = 1 and A + d B = 0, d =2, we have that q^jℓ(p) is trace-free and satisfies p_j p_ℓ q^jℓ(p) = 1. 
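For the record, with d = 2 the two linear conditions determine A and B uniquely, and the stated properties of q^{jℓ} can be verified in one line:
\[
A + B = 1, \qquad A + 2B = 0 \;\;\Longrightarrow\;\; A = 2,\; B = -1,
\qquad q^{j\ell}(p) = \frac{2\, p^{j} p^{\ell}}{|p|^{4}} - \frac{\delta^{j\ell}}{|p|^{2}} ,
\]
\[
\delta_{j\ell}\, q^{j\ell}(p) = \frac{2}{|p|^{2}} - \frac{2}{|p|^{2}} = 0 ,
\qquad
p_{j} p_{\ell}\, q^{j\ell}(p) = \frac{2\,|p|^{4}}{|p|^{4}} - \frac{|p|^{2}}{|p|^{2}} = 1 .
\]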
We write Q^jℓ = Q_(D)^jℓ + Q̃_(D)^jℓ where Q_(D)^jℓ = ^-2 ∑_k = 0^D e^i ξ_I q_(k)^jℓ Q̃_(D)^jℓ = ^jℓ[G - _j _ℓQ_(D)^jℓ] We define the q_(k)^jℓ recursively by g_(0) = g, q_(k)^jℓ = q^jℓ(ξ_I) g_(k) g_(k+1) = - ^-1 [ _j ξ_I _ℓq_(k)^jℓ + _ℓξ_I _j q_(k)^jℓ]] - ^-2 _j _ℓq_(k)^jℓ These inductive rules are defined so that _j _ℓQ_(D) - e^i ξ_I g = e^i ξ_I g_(D+1)^jℓ We claim the following estimates inductively on k. H̃”_2D + L - 2 k [g_(k)] N^-k/2H̃_2 D + L”[g] H̃”_2D + L - 2 k [q_(k)] N^-k/2H̃_2 D + L”[g] Indeed (<ref>) holds for k = 0 trivially. Then (<ref>) holds for k by H̃”_2D + L - 2 k [q_(k)] H̃”_2D + L - 2 k [q^jℓ(ξ_I)] H̃”_2D + L - 2 k [g_(k)] (1 + H̃”_2D + L - 2 k[ξ_I])^2D H̃”_2D + L [g] 1 ·H̃”_2D + L [g] where we applied the product rule and chain rule for the weighted norm and the inductive hypothesis for g_(k). Now we estimate (<ref>) by the product rule, (<ref>) for k, and (<ref>) H̃”_2D + L - 2 (k + 1)[ g_(k+1)] H̃”_2D + L - 2 (k + 1)[ ξ_I] H̃”_2D + L - 2 (k + 1)[ ^-1q_(k)] + ^-2 H̃”_2D + L - 2 (k + 1)[ q_(k)] 1 ·H̃”_2D + L - 2 k - 1[ ^-1q_(k)] + ^-1 H̃”_2D + L - 2 k - 1[ q_(k)] N^-1/2 H̃”_2D + L - 2 k[ ^-1q_(k)] N^-(k+1)/2H̃”_2D + L [g], which concludes the induction. We can now estimate the Q_(D) defined in (<ref>) by first observing H^*[e^i ξ_I] 1 H^*[F] H̃”_L[F] We postpone the proof of (<ref>). The second of these bounds is directly from the definition. We now estimate H^*[Q_(D)] ^-2 ∑_k=0^D H^*[e^i ξ_I] H^*[q_(k)] (<ref>) ^-2 ∑_k=0^D H̃”_L[q_(k)] ^-2 ∑_k=0^D H̃”_2D +L - 2k[q_(k)] ^-2 H̃_2D+L”[g]. This bound suffices to prove (<ref>) for the parametrix. To bound the error, we need a trivial bound for the operator ^jℓ. Specifically H̃^*[^jℓ[F]] H̃^*[F], and we also will use H̃^*[e^i ξ_I ] 1. Taking these two estimates as given, we now have H̃^*[Q̃_(D)] = H̃^*[ ^jℓ[e^i ξ_I g_(D+1) ]] H̃^*[ e^i ξ_I g_(D+1) ] H̃^*[ e^i ξ_I ] H̃^*[ g_(D+1) ] (<ref>) 1 ·N^-D/2 H̃_2D+L”[g], which completes the proof subject to (<ref>) and (<ref>). To prove (<ref>), we observe that e^i ξ_I = 0, so it suffices to bound spatial derivatives. By the chain rule and product rule we obtain _ e^iξ_I ∑_m=0^L _j e^i ξ_I ^m ∏_j=1^m __j ξ_I ∑_m=0^L ^m ∏_j=1^m ^|_j| - 1 _ e^iξ_I ∑_m=0^L ∏_j=1^m ^|_j| ^||, where we have used that derivatives of ξ cost at most , which is smaller than . To prove (<ref>), we first note that ^jℓ is bounded on C^0(^2). For example, ^jℓ≤∑_q P_q ^jℓ∑_q=0^∞ 2^-2q Then H^*[^jℓ F] H^*[F] follows from the fact that commutes with _ and _t. By comparability of weighted norms, this estimate suffices. §.§ High frequency error terms We now apply the nonstationary phase estimate to the high frequency error terms. We start with R_H. There is an important cancellation in this term that was first observed in <cit.>. Namely, since u_I^a _a ξ_I = þ_I m^a(ξ_I) _a ξ_I = 0, we have ∑_I T^a _I _a _I = ∑_I g_[I](μt) e^2 i ξ_I u_I^a (i _a ξ_I) (þ_I + þ_I ) + ∑_I g_[I](μt) e^2 i ξ_I (u_I^a + u_I^a) (_a þ_I + _a þ_I ) For the next computation, let H̃” be a shorthand for H̃”_2 D + L. By nonstationary phase, for any D ≥ 0 there exists a traceless second-order anti-divergence that obeys the estimate H̃^*[R_H^jℓ] (^-2 + N^-D/2)( A + B) A = H̃”[g_[I](μt)] H̃”[u_I ] H̃”[ξ_I ] (H̃”[þ_I ] + H̃”[þ_I ]) B = H̃”[g_[I](μt)] H̃”[u_I + u_I] H̃”[þ_I + þ_I] For all these terms inside the weighted norms, we claim that the bounds for the H” norm of each term are the same as the bound we have stated for the C^0 norm of each term. 
Indeed, for each of these terms, a spatial derivative costs at most = N^1/LΞ, which is smaller than N^1/2Ξ, while an advective derivative costs at most μ = Ξ^3/2 N^1/2 D_R^1/2, which is smaller than τ^-1 = (N Ξ)^3/2D_R^1/2. Combining (<ref>), Proposition <ref>, Proposition <ref>, and the following estimate H̃”[u_I^ℓ] = H̃”[m^ℓ(ξ_I)] H̃”[þ_I] H̃”[þ_I] ^1/2 D_R^1/2 yields (recall = N Ξ) A ( N Ξ) ·1 ·^1/2 D_R^1/2N 1 (^1/2 D_R^1/2) B 1 ·^1/2 D_R^1/2 (Ξ^1/2 D_R^1/2) Hence we conclude, H̃^*[R_H^jℓ] (^-2 + N^-D/2)( ΞD_R) Recall that N ≥ N^4ηΞ^4 η∼^4η. Choosing D large depending on η, we have H̃^*[R_H^jℓ] D_RN. The other high frequency term is the transport term. Since the advective derivative annihilates the phase function, we have _j _ℓR_T^jℓ = ∑_I g_[I]'(μt) e^i ξ_I [ þ_I + þ_I ] + ∑_I g_[I](μt) e^i ξ_I [ þ_I + þ_I + u_I^a (_a þ_I + _a þ_I) ] Nonstationary phase with the same choice of D as before yields a solution of weighted norm H̃^*[R_T] ^-2 μH”[g_[I]'(μt)] H”[þ_I + þ_I] + H”[g_[I]] (H”[þ_I] + H”[ þ_I ] + H”[u_I] H”[þ_I + þ_I] ) For these terms it is again true that the H” weighted norm is the same size as the bound on the C^0 norm modulo constants, since spatial derivatives cost at most N^1/LΞ < N^1/2Ξ, while advective derivatives cost at most a factor of _t^-1 = (Ξ e_u^1/2) (D_u/D_R)^1/2 N^1/2≤τ^-1 = (N Ξ)^3/2 D_R^1/2. Combining (<ref>)-(<ref>), Proposition <ref>, and Proposition <ref>, we therefore obtain H̃^*[R_T] (N Ξ)^-2 μ^1/2 D_R^1/2 + (N Ξ)^-2 ( (Ξe_u^1/2) ^1/2 D_R^1/2 +^1/2 D_R^1/2 Ξ^1/2 D_R^1/2 ) (N Ξ)^-2 μ^1/2 D_R^1/2 ∼N^-1 D_R. Both the estimate for R_H and the estimate for R_T are satisfactory for the Main Lemma, since we have N^-1 D_R ≤ (D_u/D_R)^-1/2 N^-1/2 D_R. §.§ How to avoid nonstationary phase We include this section to note that one can avoid nonstationary phase in the proof in a way such that the only source of double exponential frequency growth occurs during the Newton step. To do so, let ũ^ℓ = u_^ℓ + T^ℓ w, where w is the Newton correction. Instead of transporting the phase functions by the flow of ũ^ℓ as we have done, first apply a frequency truncation P_≤ q_ to T^ℓ w and transport the phase functions by the resulting frequency localized vector field. With such a frequency localization, both the high frequency interference terms and the transport term now live at frequency ∼, and one can simply apply an operator ^jℓ to both those terms to find a suitable anti-divergence that gains a smallness of ^-2. This technique avoids nonstationary phase (which was also avoided in <cit.>), and also avoids the power loss in frequency incurred during the nonstationary phase, which is important for deriving an endpoint type result <cit.>. However, it comes with two complications. * The estimates of the convex integration step are a bit different in terms of powers of N although the final bounds for R are the same. * One has to handle an error term of the form (T^ℓw - P_≤q_ T^ℓw) + T^ℓ( w - P_≤q_ w) The latter term can be treated similarly to the mollification term addressed below. §.§ The mollification error The term R_M is also a new error term compared to <cit.>. In those works there was no need to regularize þ since it could be enforced that þ had compact frequency support. In other words, we had þ = þ_. Here þ does not have compact frequency support, so we have to bound this term, which resembles the term (<ref>). Again we use our simplified version of the observation in <cit.> showing how to write the nonlinearity in a divergence form. 
We are estimating a solution to _ℓ R_M^jℓ = [T^jþ - T^j þ_] + [T^j ] (þ - þ_) We know Θ̂⊂{ξ : /10^2 ≤ |ξ| ≤ 10^2}. We decompose _j R_M^jℓ into the sum of three kinds of terms (HH, HL and LH) (or really five kinds of terms, but we can group them into three kinds). _j R^jℓ_M = ∑_q P_q T^ℓ(þ-þ_ϵ) P_q+1 + T^ℓ P_q+1 P_q(þ-þ_ϵ) + Similar + ∑_q P_≤ q-1 T^ℓ (þ-þ_ϵ) P_q+1Θ + ∑_q P_≤ q-1 (þ-þ_ϵ) T^ℓ P_q+1Θ + ∑_q P_q+1(þ-þ_ϵ) T^ℓ P_≤ q-1Θ + ∑_q P_q+1 T^ℓ(þ-þ_ϵ) P_≤ q-1Θ (Recall that T^ℓ and P_k commute for any k, and same for P_≤ k-1) Define q^λ∈ by q^λ :≈log_2(λ). We have that the Fourier support of is essentially in a single dyadic shell (or a bounded number of shells). By consideration of frequency support, the five sums can be simplified as sums over, respectively, q ∼ q^λ, q ∼ q^λ, q ∼ q^λ, q ≥ q^λ, q ≥ q^λ. To each of these we associate an antidivergence, and we just have to bound that antidivergence. We will only do this for three cases (one being the High-High), which are representative of all five. Looking at the very first term in (<ref>), we use the divergence form principle of Section <ref> to define R_MHq^jℓ = K_q ∗ [þ-þ_,] = K_q ∗ [P_≈ q^λþ, ] = ∫ (þ(x-h_1) - þ_(x-h_1)) (x-h_2) K_q^jℓ(h_1,h_2) dh_1 dh_2 For all 0 ≤ r + || ≤ L we bound __t^r R_MHq ∑_q ∼ q^λK_q_L^1__1_t^r_1(þ - þ_)_0 __1_t^r_1_0 ∑_q ∼ q^λ 2^-q [(N Ξ)^|_1|τ^-r_1e_u^1/2N] [ (N Ξ)^|_2|τ^-r_2 H^*[]] (N Ξ)^-1 (N Ξ)^||τ^-re_u^1/2N (N Ξ)^1/2 D_R^1/2. (N Ξ)^||τ^-rD_RN, where in the last line we used N ≥ D_u/D_R, while in the second line we used Lemma <ref> and the bound on obtained in the proof of Proposition <ref>. Our next representative term is R_MHL^jℓ = ∑_q ∼q_ _a^jℓ[ P_q+1(þ- þ_) P_≤q-1 T^a ] = ∑_q ∼q__a^jℓP_≈q (þ- þ_) P_≈ T^a where the representation in the second line is due to the Fourier support of being in |ξ| ∼. We bound this term by _ _t^r (<ref>) _a^jℓP_≈q [(NΞ)^|_1| τ^-r_1 e_u^1/2N] [(NΞ)^|_2| τ^-r_2 H^*[]] ∑_q ≥q_ 2^-q (N Ξ)^|| τ^-r e_u^1/2N (N Ξ)^1/2 D_R^1/2 (N Ξ)^|| τ^-r D_RN. Here in the second line we used the trivial observation that H[T^ℓ P_q ] H[] by the fact that T^ℓ P_q is bounded on C^0 and commutes with spatial derivatives and _t. In the last line we again used N ≥ D_u/D_R. The last of the three representative terms is ∑_q∼a^λ P_≤q-1 T^ℓ(þ-þ_) P_q+1. We write R_MLH^jℓ = ∑_q∼q^λ _a^jℓ P_≈q [ P_≤q-1 T^a(þ-þ_) P_q+1 ] _ _t^r (<ref>) ∑_q∼q^λ ^-1 __1 _t^r_1 (T þ- T þ_) __2 _t^r_2 ^-1[(N Ξ)^|_1| τ^-r_1 e_u^1/2N] [(N Ξ)^|_2| τ^-r_2 H[]] (N Ξ)^-1 (N Ξ)^|| τ^-r e_u^1/2N (N Ξ)^1/2 D_R^1/2 (N Ξ)^|| τ^-r D_RN Here again we used Lemma <ref>, the bound on obtained in the proof of Proposition <ref>, and N ≥ D_u/D_R. Combining these estimates we have H^*[R_M] D_RN. § THE MAIN LEMMA IMPLIES THE MAIN THEOREM We start with the following auxiliary theorem, which is enough to prove regularity of solutions but in itself is not enough to prove nontriviality. Nontriviality will be a corollary of the h-principle. For any > 0 be given, we choose L ≥ 7 and η > 0 depending on so that the parameter = 6L + 4 η≤^3 Let be the constant in the Main Lemma associated to this choice of η, L. Let > 0 be given. There is a constant C_ depending on and an integer L such that the following holds. Let (þ_0, R_0) be an SQG-Reynolds flow with frequency energy levels of order L bounded by (Ξ_0, D_u,0, D_R,0) and with compact support contained in an interval J_0. 
Then there exists a solution þ to SQG of class ||^-1/2þ∈ C^1/2 - 2 whose time support is contained in a C_ (Ξ_0 e_u,0^1/2)^-1 neighborhood of that of (þ_0, R_0) such that   ||^-1/2(þ- þ_0) ≤C_D_R,0^1/2 We define a sequence of Euler-Reynolds flows (þ_n, R_n) by iteration of the Main Lemma. We set (Ξ, D_u, D_R)_(0) = (Ξ_0, D_u,0, D_R,0) and evolve according to the parameter rules Ξ_(k+1) = N_(k) Ξ_(k) D_u,(k+1) = D_R,(k) D_R,(k+1) = D_R,(k)^1+Z. where Z is to be chosen depending on and on the initial frequency energy levels. These rules will imply a double exponential decay of D_R,(k), but for the moment we impose that Z ≥max{ D_R,(0)^, D_R,(0)^1+} in order to ensure that D_R,(1)≤ 1 and that D_R,(k+1) ≤12 D_R,(k) for all k. Our choice of N_(k) is dictated by the estimate in the Main Lemma: D_R,(k+1) = (D_uD_R)_(k)^-1/2 N^-1/2_(k) D_R,(k) ⇒N_(k) = ^2 Z^2 (D_u/D_R)_(k)^-1 D_R(k)^-2. It will be convenient to phrase the parameter evolution rules in terms of logs ψ_(k) ≡[logD_R(k), log(D_u/D_R)_(k), logΞ_(k)]^t ψ_(k+1) = [ - logZ; logZ; log(^3 Z^2) ] + [ 1 + 0 0; - 0 0; -2 -1 1 ] ψ_(k) We call the 3 × 3 parameter evolution matrix appearing here T_. The most delicate task in this framework is to check that N_(k) is admissible, since this condition barely holds; namely, we need N_(k) ≥N_(k)^6L + 4 η Ξ_(k)^4 η (D_u/D_R)_(k). With = 6L + 4 η defined as above, it is enough to check that Z^-2+2 Ξ_(k)^ (D_u/D_R)_(k)^2- D_R,(k)^2(1-) ≤1 Since the power of Z is negative, it is clear that Z can be chosen large enough so that this inequality holds at k = 0. Now suppose k ≥ 1. In this case one has (D_u / D_R)_(k) = Z^11+ D_R,(k)^-/(1+) Taking logs of (<ref>), we need to check: (-2 + 2 +2 - 1+) logZ + logΞ_(k) + (2 (1 - ) - 1+ (2-)) logD_R,(k) ≤0 We prove this inequality by induction, as we have already considered the case k = 0. Letting _(k) f = f_(k+1) - f_(k) denote the discrete difference operator, we need only check that _(k) logΞ_(k) + (2 (1 - ) - 11+ (2-)) _(k) logD_R,(k) is negative. Since D_R,(k)≤ 1 for all k ≥ 1, we have (<ref>) ≤_(k) logΞ_(k) + (^2 - O()) _(k) logD_R,(k) ≤3 (logZ + log) - 2 logD_R,(k) + (^2 - O())( logD_R,(k) - logZ ) ≤3 log+ (-^2 + O()) logZ Recalling that ≤^3, we now choose Z depending on and so that the right hand side is negative as desired. Thus our choice of N_(k) is admissible for all k. Applying (<ref>), our solution þ obeys ||^-1/2 (þ- þ_0) ∑_k=0^∞ ||^-1/2 W_(k) ∑_k=0^∞D_R,(k)^1/2 ≤C_D_R,(0)^1/2. Note that the convergence of this series combined with boundedness of the nonlinearity in L_t^2 Ḣ^-1/2 also shows that þ is a weak solution to SQG. A similar geometric series bounds the size of the increase in time support by ∑_k (Ξ_(k) e_u,(k)^1/2)^-1 (Ξ_0 e_u,0^1/2)^-1 hence the time support is bounded as claimed. To check the regularity of the solution, we follow the method of <cit.> and first compute an eigenvector for the 1 + eigenspace. We seek a vector in the null space of T_ - (1 + ) with a negative first coordinate. An example is given by ψ_+ = [ -(1+); ; 1 + 2 ] In terms of eigenvectors (ψ_+, ψ_0, ψ_1) for the (1+, 0, 1) eigenspaces respectively, we can decompose ψ_(k) = c_+,(k) ψ_+ + c_0,(k) ψ_0 + c_1,(k) ψ_1 The term that dominates is the ψ_+ term, since one can check that c_+,(k) ≥c (1 + )^k, c > 0 |c_0,(k)| + |c_1,(k)| = O(k), (see for instance <cit.>). The fact that the ψ_+ term dominates is similar to what happens when one iteratively applies the matrix T_ to a fixed vector. We now compute the regularity of our solution. 
Using the interpolation inequality f _C^ f_C^0^1- f _C^0^, the estimate (<ref>) on W, the formula (<ref>) for N_(k) and < 1, one has log ||^-1/2 W _C^ ≤log+ log(N_(k) Ξ_(k)) + 12 logD_R,(k) ≤log(^4 Z^2) + [ 12 - 2 , -, ] ψ_(k), where the last line refers to the linear pairing of the row vector with the column vector ψ_(k). From (<ref>) and (<ref>), we see that the right hand side goes to -∞ exactly when the same row vector applied to ψ_+ in (<ref>) gives a negative value. In conclusion, ||^-1/2þ∈ L_t^∞ C^ whenever < 12 ( 1 + 1 + 3 + 2 ^2 ). Using linearization, one sees that = 1/2 - 2 satisfies this inequality for sufficiently small, hence Theorem <ref> is proven. §.§ h-Principle Let > 0 be given and let L and C_ be as in Theorem <ref>. Let f : (0,T) ×^2 → be a smooth compactly supported function that conserves the integral. That is, ∫_^2 f(t,x) dx = 0, t. We approximate f by the sequence f_n = P_≤ n f, which satisfy sup_n _ _t^r f_n _ _t^r f, 0 ≤||, r lim_n →∞ ||^-1/2(f_n - f) = 0. Using the order -2 operator ^jℓ, define R_n^jℓ = ^jℓ[_t f_n + _a [ f_n T^a f_n] ] so that (f_n, R_n) define an SQG-Reynolds flow with compact frequency support. (It is important at this point that the right hand side has mean zero at every time.) Furthermore, we have a uniform bound sup_n R_n ≤2D_R,-1 By (<ref>) we can choose Ξ_-1,n suffiently large and going to +∞ so that (f_n, R_n) is an SQG-Reynolds flow with frequency energy levels to order L bounded by (Ξ_-1, n, D_R,-1, D_R,-1) that has compact frequency support in frequencies below Ξ_-1,n. To this SQG-Reynolds flow we apply the Main Lemma from <cit.>. Let N_-1,n be a sequence tending to +∞. According to this Lemma, for any N_-1,n there is a second SQG Reynolds flow, which we call (þ_0,n, R_0,n), þ_0,n = f_n + W_-1,n, so that the following hold _t (þ_0,n, R_0,n) ⊆{ t + t'  :  t ∈(f_n, R_n), |t'| ≤(Ξ_-1,n D_R,-1^1/2)-1 } ||^-1/2 W_-1,n ≤C_L D_R,-1^1/2 ||^-1/2 W_-1,n = _i Y_n^i, Y_n^i ≤Ξ_-1,n^-1 D_R,-1^1/2 and so that the frequency energy levels of (þ_0,n, R_0,n) are bounded to order L by (Ξ_n,(0), D_u,(0), D_R,(0)) = ( C_L N_-1,n Ξ_-1, D_R,(-1), D_R,-1N_-1,n^3/4 ) Now apply our approximation theorem, Theorem <ref>, to get an SQG solution þ_n of class ||^-1/2þ_n ∈ L_t^∞ C^1/2- 2 with ||^-1/2 (þ_n - þ_0,n) _C^0 ≤C_D_R,-1^1/2N_-1^3/4 and with time support contained in _t þ_n ⊆{ t + t'  :  t ∈_t (þ_n,0, R_n,(0)), |t'| ≤C_(Ξ_n,(0) D_R,-1^1/2)^-1 } We now claim that ||^-1/2 (þ_n - f) → 0 in L^∞ weak-*. To see this claim, let g ∈ L^1((0,T)×^2) and let > 0 be given. We will choose a small parameter η. Choose a g_η∈ C_c^∞ ((0,T) ×^2) within η of g in L^1. We write ∫g ||^-1/2(f - þ_n) dx dt = I + II + III I = ∫(g - g_η) ||^-1/2(f - þ_n) dx II = ∫g_η||^-1/2(f - þ_0,n) dx III = ∫g_η||^-1/2( þ_0,n - þ_n ) dx We bound |I| ≤η( ||^-1/2 f + sup_n ||^-1/2 þ_n ) Note that the sup exists due to (<ref>) and (<ref>). Now fix the choice of η so that this term is bounded by / 3. Then we use (<ref>) and integration by parts to bound |II| ≤( ∫|g_η| dx) Y_n ≤( ∫|g_η| dx) D_R,-1^1/2Ξ_-1,n The latter bound goes to 0 as n gets large since we assumed Ξ_-1,n tends to ∞. Finally we have |III| ≤( ∫|g_η| dx) ||^-1/2(þ_0,n - þ_n) ≤( ∫|g_η| dx) C_D_R,-1^1/2 N_-1,n^-3/4 As long as we take N_-1,n to go to infinity, this term is also arbitrarily small. From this estimate we conclude that ||^-1/2þ_n → ||^-1/2 f in L^∞ weak-*. Furthermore, we have þ_n ⊆{ t + t'  :  t ∈f, |t'| ≤C_(Ξ_-1 D_R,-1^1/2)^-1 }, uniformly in n, which can be made arbitrarily close to f by taking Ξ_-1 large. 
§ APPENDIX §.§ Existence of solutions to equation (<ref>) Let Φ w be a solution to the equation: D_tΦ w+T^ℓw∇_ℓθ_=f with (Φ w)[0]=w_0. Subtracting the equation for Φw̃ from Φ w, we get: D_t(Φ w-Φw̃)+T^ℓ(w-w̃)∇_ℓθ_=0 with (Φ w-Φ w)[0]=0. Let s ≥ 0 be given. Using the notation [∇_a⃗,u_·∇]=∑1_|a⃗_2|≤ s-1(∇_a⃗_1u^j_∇_a⃗_2∇_j),[We only want to take up to s derivatives of Φ w-Φ w.] we differentiate the equation with ∇_a⃗ to get: D_t∇_a⃗(Φ w-Φw̃)+[∇_a⃗,u_·∇](Φ w-Φw̃) +|a⃗_2|≤ s-1∼∑∇_a⃗_1T^ℓ(w-w̃)∇_a⃗_2∇_ℓθ_=0. Multiplying this equation by ∇_a⃗(Φ w-Φw̃) and integrating by parts, 1/2∂_t_(Φ w)-_(Φ w)_2^2 + ∫( [∇_a⃗,u_·∇](Φ w-Φw̃) +|a⃗_1|≤ s-1∼∑∇_a⃗_1T^ℓ(w-w̃)∇_a⃗_2∇_ℓθ_)_(Φ w-Φ w) dx=0, where ∫ u_^j_j (_Φ w-_Φ w)^2 dx=0 due to the divergence-free property of u_. Integrating on [0,t], and using Hölder's inequality, we have ∇_a⃗(Φ w) - ∇_a⃗(Φw̃)_2(t) ≲∫_0^t ( [∇_a⃗, u_ϵ·∇](Φ w - Φw̃)_2 + ∇_a⃗_1 T^ℓ (w - w̃) ∇_a⃗_2∇_ℓθ_ϵ_2 ) dτ ≲ t ∇_a⃗_1 T^ℓ (w - w̃)_L^∞ L^2 + t u_ϵ_L^∞ C^s-1Φ w - Φw̃_L^∞ H^s. In the first line, we used Cauchy-Schwarz. Here, we used that ∇_a⃗_2∇_ℓθ_ϵ_L^∞≲ 1, which is true since s≤ L and hence it is bounded by Ξ^L e_u^1/2, a bounded quantity. (Recall that |_2|≤ s-1.) Next, note that T^ℓ is bounded on H^s. Thus we have T^ℓ (w - w̃)_H^s≲w - w̃_H^s. Thus ∑_||≤s_(Φw)-_(Φw)_2(t) tw-w_L^∞H^s + t Ξ^s-1 e_u^1/2 Φw-Φw_L^∞H^s and indeed ∑_||≤ s_(Φ w)-_(Φ w)_L^∞ L^2 tw- w_L^∞ H^s + t Ξ^s-1 e_u^1/2Φ w-Φ w_L^∞ H^s. By taking t sufficiently small, we can absorb the last term into the left-hand side. Taking t smaller if necessary, we obtain Φ w-Φ w_L^∞ H^s≤ C w- w_L^∞ H^s, C ∈ (0,1). We apply the contraction mapping theorem and conclude that there exists a unique fixed point w ∈ L^∞_t H^s_x of Φ, which solves equation (<ref>). Inspecting the proof, the timescale of existence is bounded from below by C^-1 (max{þ_L_t^∞ C^s, u__L_t^∞ C^s} )^-1. Consequently, if þ and u_ are smooth, the solution is global in time and smooth in the spatial variables. §.§ The Divergence Form Principle Let ∈ and let P_,1 and P_,2 be frequency localizing operators adapted to frequencies of size |ξ| ∼ with multipliers χ_,1 and χ_,2. That is, P_λ,i f(ξ) = χ_,i(ξ) = χ_1,i(ξ). The following theorem is proven in <cit.>. It traces back to a calculation in <cit.> that was generalized and streamlined in <cit.>. Let be an operator with odd symbol m that is degree $̱ homogeneous and smooth away from0. Then for smoothf,gone can write P_,1 f P_,2 g + P_, 1g P_,2 f= _j [ K_^j∗[f,g] ] K_^j∗[f,g] = ∫_^d ×^d f(x - h_1) g(x - h_2) K_^j(h_1, h_2) dh_1 dh_2 K_^j(h_1, h_2) = ^2 d + -̱ 1 K_0^j( h_1, h_2) whereK_0^jare Schwartz. In the specific case of= T^ℓis the multiplier for SQG, the tensorK_^jℓis trace free and satisfies K_^jℓ(p, -p) = ^j m^ℓ(p) + ^ℓ m^j(p) for allpsuch thatχ_,1(p) = χ_,2(-p) = 1. The proof follows along the same lines as <cit.>. The main idea is to express the product as a convolution in frequency space, then Taylor expand to obtain a divergence form. That is, letting ξ be the Fourier variable and η the integration variable for the convolution, we Taylor expand the sum m(ξ - η) + m(η) = m(ξ - η) - m(-η) = ξ_j ∫_0^1 ^j m(σξ - η) dσ using oddness of m and we observe that the right hand side has a divergence form in physical space. By the argument in <cit.>, it suffices by an approximation to obtain the divergence form on ^2 for f, g Schwartz functions. Let Q denote the left hand side of (<ref>). 
Then the Fourier transform of the product becomes a convolution and we have Q̂(ξ) = ∫_^2 [m(ξ - η) + m(η)] P_,1f(ξ - η) P_, 2g(η) dη = ∫_^2 [m_(ξ - η) + m_(η)] P_,1f(ξ - η) P_, 2g(η) dη where m_(ξ) = χ(ξ/) m(ξ) is a version of m localized by a bump function χ(ξ/). Using oddness of m_ and Taylor expanding we obtain Q̂(ξ) = ∫_^2 [m_(ξ - η) - m_(-η)] P_,1f(ξ - η) P_, 2g(η) dη = ξ_j ∫_0^1 d∫^j m_( ξ - η ) P_,1f(ξ - η) P_, 2g(η) dη The result is now clearly in divergence form. Further computation of the inverse Fourier transform (see e.g. <cit.>) shows that it has the bilinear convolution form (<ref>) with K^j the Schwartz functions defined in Fourier space by K^j(ζ,η) = χ_,1(ζ)χ_,2(η) (-i) ∫_0^1 ^j m(ζ - (1-) η) d. We also use a version of this principle for even multipliers. Let be an operator with even symbol m that is degree $̱ homogeneous and smooth away from0. Then for smoothf,gone can write P_,1 f P_,2 g - P_, 1g P_,2 f = _j [ K_^j∗[f,g] ] K_^j∗[f,g] = ∫_^d ×^d f(x - h_1) g(x - h_2) K_^j(h_1, h_2) dh_1 dh_2 K_^j(h_1, h_2) = ^2 d + -̱ 1 K_0^j( h_1, h_2) where theK_0^jare Schwartz functions. What is crucial here is the minus sign in (<ref>) instead of the plus sign in (<ref>). The proof is essentially the same as the case of an odd multiplier, but this time one starts withm(ξ - η) - m(η) = m(ξ - η) - m(-η), since the multiplier is even. §.§ Glossary * þ: The scalar field in the SQG equation * u: The velocity field in the SQG equation, defined as u^ℓ = T^ℓþ = ^ℓ a_a ||^-1þ * m^ℓ: the Fourier multiplier in the mSQG equation, m^ℓ(p) = ^ℓ a (i p_a)|p|^-1 * R, R^jℓ: The symmetric traceless tensor field in the SQG Reynolds equations * Ξ, _u, _R: Non-negative numbers representing the frequency energy levels of an SQG-Reynolds flow * D_t: The advective derivative, defined as D_t := _t + T^ℓþ_ℓ * : A multi-index for spatial derivatives = (a_1, a_2, …, a_||), 1 ≤ a_i ≤ d. * N: A parameter used in the main lemma, satisfying a certain lower bound. * η: A positive constant used in the main lemma * : Defined as = N^1/L, where L is a constant satisfying L ≥ 7 * þ, u, R: The new SQG-Reynolds flow obtained in the main lemma * W: The correction term in the new scalar field þ = þ + W * : A length scale defined as = N^-1/LΞ^-1 = ^-1Ξ^-1 * q_ or : An integer close to log_2(^-1) * þ_, u_: The coarse scale scalar field and velocity field, defined using a Littlewood-Paley projection operator * : The coarse scale advective derivative, defined as = _t + u_· * R_: The regularized error tensor, obtained by mollifying R in space * w: The Newton perturbation in the new scalar field þ = þ + w + * : The oscillatory perturbation in the new scalar field þ = þ + w +, defined as a sum of waves = ∑_I _I ≈∑_I þ_I e^i ξ_I * ũ_ = u_ + T^ℓ w: The coarse scale velocity field following the Newton step. * = _t + ũ_·: The coarse scale advective derivative following the Newton step. The following symbols are used in the construction and analysis of the Newton perturbationwand the oscillatory perturbation. * μ: An inverse time scale used in the construction of the Newton perturbation w. * τ: A time scale used in the construction of the Newton perturbation w. b is a small geometric constant chosen after line (<ref>). * _x: A length scale used in the mollification of the error tensor R. It is defined as _x = N^-1/LΞ^-1. * _I: A slowly varying smooth function used in the construction of the oscillatory perturbation . It is chosen in a later part of the analysis. 
* ξ_I: The oscillation direction of each wave _I in the oscillatory perturbation . It satisfies ξ_I being reasonably close to an element of the set F = ± (1,2), ± (2,1). * : The frequency of the oscillatory waves in the perturbation . * B^jℓ(p): A tensor-valued function defined as B^jℓ(p) = -i(^j m^ℓ(p) + ^ℓ m^j)(p), where m^ℓ(p) = i ^ℓ a p_a |p|^-1 is the multiplier for SQG. * F_J = {w̅_J, z̅_J, r̅_J } Here is a glossary about the relative sizes of the various nonnegative numbers mentioned: * Ξ: A large parameter that represents the frequency level of the scalar field θ. It satisfies Ξ≥ 1. * e_u: Defined as e_u = Ξ_u, where _u is a nonnegative number. The quantity e_u represents the energy level of the velocity field u. We have e_u ≥ 1. * e_R: Defined as e_R = Ξ_R, where _R is a nonnegative number. The quantity e_R represents the energy level of the stress tensor R. We have e_R ≥ 1. * _u: A nonnegative number that satisfies _u ≥_R. It is related to the energy level of the velocity field u through e_u = Ξ_u. * _R: A nonnegative number that satisfies _R ≤_u. It is related to the energy level of the stress tensor R through e_R = Ξ_R. * L: An integer ≥ 7 counting the number of derivatives recorded in the Definition of frequency energy levels. * N: A large parameter that satisfies the lower bound (<ref>). We have N ≥ 1. * Ξ: Defined as N^1/LΞ, where L ≥ 7 is an integer. We have Ξ̂≥Ξ. * μ: Defined as μ = Ξ N^1/2 e_R^1/2. * τ: τ = b (log)^-1 (Ξ e_u^1/2)^-1 * _t: N^-1/2 (D_u/D_R)^-1/2 (Ξ e_u^1/2)^-1. * : ∼ N Ξ, ∈ 2 π. * τ^-1: τ^-1 = (N Ξ)^3/2 D_R^1/2. First defined while proving a bound for R_MHH. The relative sizes of these nonnegative numbers can be expressed as: * _R ≤_u * e_R ≤ e_u * Ξ e_u^1/2≤τ^-1≤μ≤_t^-1≤τ^-1 * Ξ≤Ξ≤ * 1 ≤ (D_u/D_R) (NΞ)^4 η N^6/L≤ N. The parametersNandΞare large, whileηis small. The quantitiese_uande_Rare large.abbrv
http://arxiv.org/abs/2407.03290v1
20240703172309
Thermal and mechanical properties and the structural phase transition under pressure in $A$In$_2$As$_2$ ($A$ = Ca, Sr, Ba)
[ "Wen-Ti Guo", "Zhigao Huang", "Jian-Min Zhang" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci" ]
UTF8gbsn [Corresponding author]jmzhang@fjnu.edu.cn 1 Fujian Provincial Key Laboratory of Quantum Manipulation and New Energy Materials, College of Physics and Energy, Fujian Normal University, Fuzhou 350117, China 2 Fujian Provincial Collaborative Innovation Center for Advanced High-Field Superconducting Materials and Engineering, Fuzhou, 350117, China § ABSTRACT Experimental results that BaIn_2As_2 and Ca(Sr)In_2As_2, which are the same class of alkali metal compounds, belong to different structural phases have puzzled the current materials physics community. Here, we investigate the pressure-induced structural phase transition of AIn_2As_2 and its accompanying improvement in mechanical and thermal properties. Firstly, the structural stability of the materials and their structural phase transitions under pressure are characterised by enthalpy and double-checking by phonon dispersion spectrum. We also confirm the structural phase transitions of the hexagonal and monoclinic phases from a group-theoretic point of view, associating their symmetry operations using transformation matrices. In terms of mechanical properties, we propose an effective scheme for pressure modulation of the anisotropy of AIn_2As_2 materials and to induce the transformation of AIn_2As_2 from isotropic to anisotropic (hexagonal) and from brittle to ductile (hexagonal and monoclinic). Meanwhile, we find the negative Poisson's ratio phenomenon under compression and tension, which is favorable for a wide range of applications of this series of materials in aerospace, medicine, sensors, etc. In terms of thermal properties, applying pressure will enhance the structural phase transition temperature of AIn_2As_2 materials to near room temperature. We further give direct evidence of phonon softening based on group velocity calculations and reveal that phonon softening prevents the heat capacity from reaching the Dulong-Petit limit. Our study provides a theoretical basis for selecting stable structural phases and pioneering thermodynamic property studies of the thermoelectric topological candidate material AIn_2As_2. Thermal and mechanical properties and the structural phase transition under pressure in AIn_2As_2 (A=Ca, Sr, Ba) Jian-Min Zhang^1,2 July 8, 2024 ================================================================================================================ § INTRODUCTION EuX_2As_2 (X = Cd, In, Sn), as a series of topologically magnetic material, has been of great interest to topological thermoelectric community because of the intrinsically novel properties. The layered structure of the Zintl-Klemm phase, EuSn_2As_2, can be easily peeled off <cit.>. Both theoretical and experimental-based studies have demonstrated that it is an intrinsically magnetic topological insulator <cit.>. Other related studies applied high pressure modulation of EuSn_2As_2 material to achieve a continuous transition from R3̅m phase to C2/m phase <cit.>, and to the high-pressure rhombohedral phase <cit.>. In addition, EuCd_2As_2 is considered to be a Dirac semimetal <cit.>. It will undergo a topological phase transition by reforming the magnetic moment direction under the action of pressure <cit.> or an electric field <cit.>. Similarly, as an intrinsic magnetic topological insulator, EuIn_2As_2 has higher-order topological insulator and axion insulator features <cit.>. It also exists magnetic configuration-dependent topological phase transition<cit.>. 
Alkaline earth (A) metal substituted Eu positions will achieve rich non-magnetic topological states, which are reflected in both Sr(Ba)Cd_2As_2<cit.> and SrSn_2As_2<cit.>. Likewise, our previous paper reported that AIn2As2 (A = Ca, Sr, Ba) can achieve both metal-insulator phase transitions and topological quantum phase transitions under the action of pressure<cit.>. Meanwhile, CaIn_2As_2 and SrIn_2As_2 have been reported to have a P6_3/mmc phase, while BaIn_2As_2 possesses a P2/m phase<cit.>. Why do compounds of Ba-, also an alkaline earth metal, behave in a different phase to compounds of Ca- and Sr-? This is a key question that needs to be urgently explored. Perhaps there are structural phase transitions between them? If there exists structural phase transition, what is the pattern? Are there other intrinsic physical properties that might accompany them? These are the crucial questions that have plagued the experimental field and the reasons that have stimulated research into them in the field of theoretical computing. As a key means of experimentally regulating the physical properties of materials, pressure is also a research method of particular interest for theoretical calculations. Pressure often induces interesting and important potential properties in materials. For example, a topological phase transition will be achieved by applying hydrostatic pressure in MnBi_2Te_4<cit.> and Cd_3As_2<cit.>. The pressure will also obtain a metal-insulator phase transition accompanied by a change in the band gap<cit.>. In general, the application of pressure inevitably results in structural phase changes. Hydrostatic pressure modulation of the MnBi_4Te_7 appears as a structural phase transition<cit.>. Tensile and compressive strains lead to multiple phase transitions in photovoltaic films CsMI_3 (M = Pb, Sn)<cit.>. Related studies have reported that narrow bandgap SrX_2As_2 (X=Cd, Sn) materials are easily modulated by external fields<cit.>. Pressure is an effective means of studying the properties and relationships between the different structural phases of a material. Here, we apply pressure to AIn_2As_2 with three different structural phases (P6_3/mmc, R3̅m, and P2/m) to achieve materials rich in structural phase transition. CaIn_2As_2 and SrIn_2As_2 exhibit a P6_3/mmc or R3̅m phase at low pressure, which transforms into a P2/m phase with increasing pressure. However, BaIn_2As_2 tends to form a P2/m phase. The phases of the three materials are in agreement with the reported experimental results<cit.>. The stability of these phases is next determined by phonon spectral calculations. And we further analyzed the change of the phonon irreducible representation of the pressure-induced structural phase transition. Furthermore, we have investigated the effect of hydrostatic pressure on the mechanical and thermodynamic properties of the material. Our paper explains the physical properties of the structural phase differences between BaIn_2As_2 and CaIn_2As_2 (SrIn_2As_2) and assesses their structural stability and thermal and mechanical characteristics. A brief synopsis of the subsequent content of this paper is given here. Section <ref> presents the crystal structures and detailed calculations of DFT. Section <ref> focuses on the results of the study and discussion. Sub-sub-section <ref> investigates the energy and lattice structure characterization at different pressures and proposes structural phase transitions. 
Sub-sub-section <ref> investigates the structural phase transitions in pressure-modulated systems employing phonon spectroscopy. Sub-sub-section <ref> explains the physical nature of structural phase transitions utilizing symmetry shifts in point groups. Sub-section <ref> focuses on the mechanical and thermal properties of materials under pressure modulation. Sub-sub-section <ref> analyses the crystalline anisotropy of the material. Sub-sub-section <ref> characterizes the material's thermal properties and discusses the realization of negative Poisson's ratio (NPR) performance modulation in compression and tension. Sub-sub-section <ref> characterizes the material's thermal properties and reports the pressure-boosted AIn_2As_2 structural phase transition temperature. Sub-sub-section <ref> reveals the phenomenon of zero group velocity (ZGV) induced by softening of phonon modes under pressure and the enhancement of thermal conductivity. Section <ref> provides a summary of this study. Appendix <ref> presents the details of the remaining auxiliary calculation methods. Appendix <ref> gives information on the calculation of phonon dispersion spectra and thermodynamic parameters. Appendix <ref> presents the relevant parameters for the characterization of mechanical properties, including equations for the calculation of elastic modulus, mechanical stability criterion, crystal anisotropy, calculation of chemical bonding information, and hardness analysis. In the Appendix <ref>, we present and discuss other complementary results, such as the evolution of lattice parameters, symmetry transformation of point group, elastic modulus analysis, chemical bonding, toughness and brittleness, and hardness prediction. Supplemental Material (SM) <cit.> gives additional figures related to mechanical and thermal properties. § CALCULATION METHODS AND CRYSTAL STRUCTURES First-principles calculations based on density functional theory (DFT) are performed in Vienna ab initio simulation package (VASP)<cit.> based on projected augmented wave (PAW)<cit.> and Perdew-Burke-Ernzerhof (PBE) type generalized gradient approximation (GGA)<cit.> exchange-correlation function. The valance wave functions are expanded on plane-wave basis with a 400 eV energy cutoff. In addition, the s semi-core orbital of the A atoms are considered as a valence electron. Spin-orbit coupling (SOC) was considered in all our calculations. For ion relaxation, the absolute magnitude of the force on each atom is reduced to less than 0.02 eV/Å. For AIn_2As_2 with three kinds of space groups [P6_3/mmc (No. 194), R3̅m (No. 166) and P2/m (No. 10)], the Γ-centered Monkhorst-Pack k-point mesh is considered as 11×11×3, 21×21×3, and 5×13×4, respectively. According to report, the structures of CaIn_2As_2 and SrIn_2As_2 are crystallized as EuIn_2P_2 type with P6_3/mmc phase, whereas BaIn_2As_2 is crystallized in the monoclinic EuGa_2P_2 structure type with P2/m phase<cit.>. We further found that the hexagonal structure with R3̅m phase may also exist in these materials. To understand this structural phase difference, pressure was used to systematically study AIn_2As_2 (A = Ca, Sr, Ba) for three space groups. Three structures are obtained by arranging octahedral structural units and In atoms in different ways. For P6_3/mmc phase, A atom occupies the 2a position while In and As occupy the 4f position. 
The adjacent octahedral lattices of the P6_3/mmc phase, which was labeled as O_1, form a mirror-symmetric alternating stack between them in the z-direction, and two In atomic layers are inserted in between, also mirror-symmetric about the z-direction. The right part of Fig. <ref>(a) gives a schematic diagram of the O_1 octahedral lattice, with the isosceles triangular planes marked in cyan color and the equilateral triangular planes depicted in red color. The Ba-As octahedral of both P6_3/mmc and R3̅m phases are connected by the edges in cyan color in the schematic diagram. For R3̅m phase, A atom occupies the 3a position while In and As occupy the 6c position. The In atomic layers of the R3̅m phase are arranged similarly to P6_3/mmc phase, while the octahedral structural units are arranged along a translational stacking in the T direction as shown by the red arrow in Fig. <ref>(b). However, the structure with P2/m space group with low symmetry is quite different from the previous two. The Ba-As octahedral layer of the P2/m phase consists of a combination of two types of octahedral lattices, O_2 and O_3, as shown in the right part of Fig. <ref>(c). O_2 consists of four isosceles triangles (marked in purple and yellow colors) and four irregular triangles. O_3 consists of four isosceles triangles with different edge lengths, indicated by different colors. Compared to O_1 and O_2, the top view of O_3 has a clear shift of the As and A atoms. As shown in the shaded background part of the octahedral schematic in Fig. <ref>(c), the [101] orientation, one end of the O_3 octahedral lattice is connected to the yellow-colored edge labeled in O_2 through the yellow-colored edge, which is noted as O_2-O_3. And the other end of O_3 is co-edged with the yellow-colored part of O_2 through the labeled green color edge, which is noted as O_2'-O_3. Note in particular that although in the [101] orientation both ends of O_3 are connected to the yellow-colored part of O_2 by co-edges, their edge lengths are not equal and depend on the lengths of the yellow-colored and green-colored parts of O_3, respectively. In the [010] orientation, both O_2 and O_2' layers are spliced via the blue-colored co-edge in the O_2 octahedral lattice, and the O_3 layer is also connected by the blue-colored edge of the O_2 octahedral lattice. Figs. <ref>(a), <ref>(b) and <ref>(c) show the structures of AIn_2As_2, which belongs to the space group P6_3/mmc, R3̅m and P2/m, respectively. The hexagonal structure of the P6_3/mmc and R3̅m phases is composed of alternating [In_2As_2]^2- layers separated by a slab of A^2+ cations. The structure of P2/m phase is also layered and it is composed of different types of polyanions [In_2As_2]^2- units and A^2+ cations. They all exist structural units formed by octahedral with A atoms at the center and As atoms occupying the vertices. The specific structural distinctions are described accordingly in the SM. In short, the valence electron numbers of all three compounds follow the Zintl-Klemm formalism and all elements achieve closed-shell electronic configurations. Lattice parameters reported experimentally are a = 4.148 (4.222) Å and c = 17.726 (18.110) Å for CaIn_2As_2 (SrIn_2As_2) with P6_3/mmc space group and a = 10.275 Å, b = 4.301 Å, c = 13.332 Å, and β = 95.569 degree for P2/m space group. The symmetry generators of P6_3/mmc contain identity operation ℰ, inversion symmetry ℐ, twofold screw rotation axis 𝒢_2z = {C_2z|001/2}, threefold rotation axis C_3z, and the combined rotation axis C_2(110). 
Slightly different with P6_3/mmc, R3̅m space group with a hexagonal lattice lacks the 𝒢_2z operation but has an additional lattice translation operation T = {x+2/3, y+1/3,z+1/3}. While the P2/m has lower symmetry generators that named twofold screw rotation axis C_2y (unique axis b), identity operation ℰ, and inversion symmetry ℐ. These basic operations will generate a total of 24, 36 (12×3 sets), and four symmetric operations for the P6_3/mmc, R3̅m and P2/m space groups, respectively. § RESULTS AND DISCUSSION §.§ Structural Stability and Structural Phase Transition §.§.§ Dependence of enthalpy on pressure in different structural phases Enthalpy is an important state parameter in thermodynamics that characterizes the energy of a material system. It is equal to the sum of the product of internal energy and pressure and volume and can be expressed as, H = U + pV, where U is the internal energy of the system, p is the pressure of the system, and V is the volume. Thus, we first investigated the enthalpy of different structural phases of AIn_2As_2 under the controling of pressure [see Fig. <ref>(a)-(c)]. In different AIn_2As_2 systems, the enthalpy difference (Δ H) between the two hexagonal phases (P6_3/mmc and R3̅m) under pressure relative to the monoclinic phase (P2/m) has different trends. Since P6_3/mmc and R3̅m have similar crystal structures and symmetry operations, their pressure-dependent enthalpy evolution trends behave approximately the same [see green and red curves in Fig. <ref>(a)-(c)]. The purple dashed lines in Figs. <ref>(a)-(c) mark the approximate values of the transition pressure of the structural phase transition, with the left side of the transition point indicating a more likely formation of the hexagonal phase (P6_3/mmc or R3̅m), while the right region indicates a more likely formation of the monoclinic phase with P2/m space group. For CaIn_2As_2, SrIn_2As_2 and BaIn_2As_2 systems, the phase transition points move toward low pressure, respectively, and BaIn_2As_2 in particular basically tends to exhibit a P2/m phase, which is consistent with the experimentally reported results<cit.>. As summarized in Table <ref>(d), unlike BaIn_2As_2, CaIn_2As_2 and SrIn_2As_2 tend to form hexagonal structured phases at pressures below 10 GPa and 6 GPa, which explains the experimental conclusion that CaIn_2As_2 and SrIn_2As_2 have a different space group structure than BaIn_2As_2. For BaIn_2As_2, a negative-pressure mixed phase (NPMP) with similar energy of the three structural phases will appear at tension stress (negative pressure values), while a P2/m high-pressure phase (HPP) will formed at compressive stress (positive pressure values). Thus, we achieved a series of pressure-dependent structural phase transitions for the AIn_2As_2 systems. The change in hardness of the hexagonal and monoclinic phases under pressure is predicted from Table <ref>. See Appendix <ref> for details. §.§.§ Phonon dispersion spectrum analysis To better illustrate the structural phase transition of AIn_2As_2, we further compare the structural stability of AIn_2As_2 under pressure for different space groups by phonon dispersion spectroscopy calculations. As shown in Fig. <ref>, the phonon and projected density of states (PDOS) calculations show that the AIn_2As_2 systems of the P6_3/mmc space group are all stable structures at both atmospheric pressure and zero-bandgap pressure. 
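For readers who wish to reproduce this kind of dynamical-stability check, the finite-displacement workflow can be sketched as follows. The snippet below is schematic only: the structure file name, supercell size, displacement amplitude, and q-mesh are illustrative placeholders rather than the settings used for the results reported here, and the force evaluation on the displaced supercells is an external VASP step.

import numpy as np
from phonopy import Phonopy
from phonopy.interface.calculator import read_crystal_structure

# Relaxed unit cell at the target hydrostatic pressure (file name is illustrative).
unitcell, _ = read_crystal_structure("POSCAR_P63mmc_relaxed", interface_mode="vasp")

phonon = Phonopy(unitcell, supercell_matrix=np.diag([3, 3, 1]))
phonon.generate_displacements(distance=0.01)        # finite displacements, in Angstrom
supercells = phonon.supercells_with_displacements   # feed these to VASP force calculations

# Forces on each displaced supercell, collected from the external VASP runs
# (one natoms x 3 array per displacement; file names are again placeholders).
force_sets = [np.loadtxt(f"forces_{i:03d}.dat") for i in range(len(supercells))]
phonon.forces = force_sets
phonon.produce_force_constants()

phonon.run_mesh([12, 12, 4])
frequencies = phonon.get_mesh_dict()["frequencies"]  # THz; imaginary modes appear as negative
print("dynamically stable:", bool((frequencies > -1e-2).all()))

A structure is flagged as dynamically stable at a given pressure when no appreciable imaginary (negative) frequencies appear on the sampled mesh, which is the criterion used in the discussion here.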
The pressure values at which the zero band gap appears in the induced system have been reported in previous study and are 3 GPa, 6.637 GPa, and 10.555 GPa for CaIn_2As_2, SrIn_2As_2, and BaIn_2As_2, respectively<cit.>. And we have shown that the system will undergo a non-trivial to trivial topological transitions at these pressure critical values<cit.>. The lattice waves of the acoustic and optical branches are distinguished in the phonon spectrum by yellow and green curves, respectively. From Figs. <ref>(a)-<ref>(l), the acoustic branching lattice waves of the systems with space group P6_3/mmc have complete degeneracy in the A-L, L-H, and H-A high-symmetry paths. The BaIn_2As_2 of the R3̅m and P2/m space groups don't have fully phonon dispersion degeneracy in any of the Brillouin zone paths we have considered (see Fig. S11 within the SM<cit.>). From the PDOS images in Fig. <ref>, it can be found that the low-frequency parts of CaIn_2As_2 and SrIn_2As_2 are mainly composed of the phonon dispersion of the In element, while the contribution of the Ba element in the BaIn_2As_2 systems are more prominent in the low-frequency phonon dispersion, as the red arrow shown in Figs. <ref>(j) and <ref>(l). Two relatively flat high-frequency phonon dispersions consisting of As and In elements exist for the SrIn_2As_2 and BaIn_2As_2 systems, corresponding to the local peaks in the PDOS diagrams. Similarly, there is a local peak in the 2-3 THz region consisting mainly of A elements. Compared to the atmospheric pressure system, the phonon spectrum of the zero-bandgap pressure system has a broader distribution and spreads to the high-frequency region (see Fig. <ref>). For BaIn_2As_2 structures with different space groups, all have stable phonon characteristics at atmospheric pressure have shown in Figs. <ref> and S11<cit.>. As shown in Fig. S11(b) within the SM<cit.>, the acoustic branching lattice wave has a slight imaginary frequency near Γ, indicating that the structure with R3̅m space group is less stable under the action of 14 GPa than at atmospheric pressure. From Fig. S11(d) within the SM<cit.>, the structure with P2/m space group can still exist stably when 14 GPa is applied. Thus, for the BaIn_2As_2 system, as shown in the enlarged plots in Figs. <ref>(k), and S11(b)<cit.>, and S11(d)<cit.>, it is shown that the hexagonal phase is not stable at high pressure, while the monoclinic phase with the P2/m space group is stable. The phonon spectrum calculation verifies our result that BaIn_2As_2 has a NPMP and a HPP (P2/m space group), as considered from the energy comparison. In a word, our high-pressure calculations realized the structural phase transition of AIn_2As_2 bulk materials. And we further reveal that they are pressure-tunable and can exist stably in a specific pressure range, which is beneficial for the experimentally study. §.§.§ Symmetry transformation of point group Here, we perform a detailed symmetry theory analysis of the structural phase transition. The structure of the fully relaxed P6_3/mmc space group cannot be directly establish a symmetry transition with the P2/m. But we note that the P6_3/mmc structure belongs to the same hexagonal crystal system as R3̅m. Their lattice structures are very similar and only a simple lattice perturbation is required to achieve the structural transformation. Then, the R3̅m phase structure can be transformed into the P2/m phase through a series of symmetry transformations, as shown in Fig. <ref>(d). 
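Before turning to the group-theoretic bookkeeping, it is worth noting that the symmetry lowering caused by such a lattice perturbation is easy to monitor numerically. The sketch below is illustrative only: the input file name, the particular shear applied to the cell, and the tolerances are assumptions rather than the values used in our relaxations. It uses spglib, with ASE for structure input, to show how the space group detected for a slightly distorted hexagonal cell depends on the distortion amplitude and on the symmetry tolerance symprec.

import numpy as np
import spglib
from ase.io import read   # ASE is assumed to be available for structure input

def detected_space_group(atoms, symprec=1e-4):
    """Space-group symbol that spglib assigns to an ASE Atoms object."""
    cell = (np.array(atoms.get_cell()),
            atoms.get_scaled_positions(),
            atoms.get_atomic_numbers())
    return spglib.get_spacegroup(cell, symprec=symprec)

atoms = read("POSCAR_R-3m")   # relaxed R-3m cell at the chosen pressure (file name illustrative)

for delta in (0.0, 0.002, 0.01, 0.05):
    distorted = atoms.copy()
    cell = np.array(distorted.get_cell())
    cell[2, 0] += delta * cell[2, 2]          # shear c toward a, breaking the threefold axis
    distorted.set_cell(cell, scale_atoms=True)
    print(f"delta = {delta:5.3f} -> {detected_space_group(distorted)}")

For an undistorted R3̅m cell the call returns 'R-3m (166)'; once the distortion exceeds the chosen symprec, only the subgroup operations compatible with the distortion survive, which is the numerical counterpart of the group–subgroup chain analyzed next.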
To understand more deeply the evolutionary mechanism behind the structural phase transition under pressure, we calculate the Raman and Infrared-Raman (IR) activity for these two space groups [see Fig. <ref>(e) and Table <ref>]. The phonon modes at Γ point can be decomposed into different irreducible representations, and the correspondence between the irreducible representations of the two phases is shown in Table <ref>. We utilize the overall transformation matrix T [Eq. (<ref>)] to realize the structural phase transition from R3̅m to P2/m [from Figs. <ref>(a) to <ref>(b)], and then the lattice perturbation to obtain Fig. <ref>(c). T can be obtained by GS× EAN × EEN, and LC = GS × EAN, where the transformation matrices group-subgroup ,element of the affine normalizers,lattice compatible, and element of the euclidean normalizers are represented by GS, EAN, LC, and EEN, respectively. As shown in Table <ref>, the irreducible representations of the two point groups at Γ have a clear correspondence. It is worth noting that A_1u and A_2g of R3̅m are both Raman inactive and IR inactive [see Fig. <ref>(e)]. On the other hand, determining exactly which atoms contribute to these activities will be one of the most important factors influencing the trend of the structural phase transition. As shown in Table <ref>, the Raman activity A_g (B_g) of the P2/m phase is mainly in the m site symmetry group, which can be contributed by In, As, or A atoms at the 2n Wyckoff site. And the IR-active A_u(B_u) can be contributed by any site of atoms. For the R3̅m phase, the Raman activities A_1g and E_g are contributed by In, As atoms only, while the IR activities A_2u and E_u can be contributed by any kind of atoms as well. Furthermore, we note the existence of an intermediate phase C2m for this phase change process. A total of six transformation matrix channels with indices [3 2 2] are available for the conversion of the symmetric operation between these two phases, as shown in Eq. (<ref>). The result obtained by their structure relations of group G=R3̅m and sub-group H=P2/m belongs to a class with the chain R3̅m → C2/m → P2/m → P2/m and index 12. To change the basis of the group general positions is used the transformation matrices P=(P,p) are shown in Eq. (<ref>). The linear part P_i of the transformation P=(P,p) implies the change of basis vectors, and the column p describes the origin shift O'= O + p. And the symmetric operations of group R3̅m (see Table <ref>) and subgroup P2/m (see Table <ref>) can be fully correlated by R_P2/m = Q × R_R3̅m× P, where Q is the inverse transformation of P. According to this relationship, the identity (ε) and inverse (I) symmetry operations with low symmetry can naturally be represented by the corresponding ones with high symmetry. However, C_2y and IC_2y in the P2/m phase can have different R3̅m transition symmetry operations, which are C_2x, IC_2x (P_1 and P_2) or C_2y, IC_2y (P_3 and P_4) or C_2xy, IC_2xy (P_5 and P_6), respectively. For example, the following Eq. (<ref>) gives the C_2y symmetric operation of the P2/m phase based on the P_1 transformation matrix using the C_2x symmetric operation of the R3̅m phase. In conclusion, we achieved the structural phase transition of the AIn_2As_2 system from the hexagonal phase (P6_3/mmc and R3̅m) to the monoclinic phase (P2/m) from the symmetry operation point of view. As summarized by the schematic diagram of the structural phase transition in Fig. 
<ref>(d), the P6_3/mmc phase can be transformed into the R3̅m phase after a simple octahedral layer dislocation. Then the intermediate phase C2/m and the regular P2/m [Fig. <ref>(b)] are obtained after the symmetry-breaking by the symmetry-operated transformation. Finally, a simple lattice perturbation is required to induce the transformation of the well-aligned P2/m phase into the actual P2/m structure we calculated [Fig. <ref>(c)]. QRP = [ 0 1/2 1; 1 -1/2 0; 0 3/4 0 ][ 1 -1 0; 0 -1 0; 0 0 -1 ][ 0 1 2/3; 0 0 4/3; 1 0 -2/3 ] = [ -1 0 0; 0 1 0; 0 0 -1 ] = C_2y(P2/m) §.§ Performance Change after Structural Phase Transition under Pressure §.§.§ Regulation of crystal anisotropy Based on the elastic constants analyzed in Appendix <ref>, we can get the following results. First, we predict that the hexagonal phase Baln_2As_2 is more compressible in the ab-plane, and the octahedral layer in Fig. <ref>(a) is more susceptible to phase transitions in the ab plane. In contrast, the structural phase transitions of CaIn_2As_2 and SrIn_2As_2 are in the c direction. This difference explains that experimentally BaIn_2As_2 has different structural phases from CaIn_2As_2 or SrIn_2As_2. The monoclinic phase of AIn_2As_2 has a structural phase transition in the a direction, which is manifested by weaker bonding in the a axis and relatively easy stripping in that direction. Immediately after that, we find that the bulk modulus B, shear modulus G, and Young's modulus E of the two-phase structures will be effectively regulated by pressure and show different trends (see Table <ref>). According to the Appendix <ref>, we have described and analyzed the significance of the various moduli of elasticity and the trend of their evolution under pressure. The three-dimensional (3D) figures of various elastic moduli (G, E, linear compression LC) in Supplemental Material show that the anisotropic properties of the different structural phases of AIn_2As_2 differ significantly. Various elastic moduli of hexagonal phase (P6_3/mmc) AIn_2As_2 under no pressure tend to be crystal isotropic, especially for BaIn_2As_2 (see Fig. S1 within the SM<cit.>). As shown in the first two rows of Fig. S2 within the SM<cit.>, CaIn_2As_2 and SrIn_2As_2 remain isotropic in their elastic moduli due to too little pressure. However, at 10.555 GPa, the G, E, and v of the BaIn_2As_2 system shift to anisotropy, and the LC tends to remain isotropic (see the third row of Fig. S2 within the SM<cit.>). In sharp contrast to the hexagonal phase, the monoclinic (P2/m) AIn_2As_2 systems exhibit significant crystal anisotropy under no pressure (see Fig. S3-S5 within the SM<cit.>). Moreover, the pressure will further enhance the anisotropy of the individual elastic moduli of the monoclinic phase AIn_2As_2 system. 2D projections of the pressure-regulated G, E, LC, and v associated with CaIn_2As_2, SrIn_2As_2 and BaIn_2As_2 are presented in the Supporting Material as Figs. S6-S10<cit.>. For a detailed analysis of the anisotropy of these mechanical parameters projected in the xy, yz, and xz directions, see Appendix <ref>. We further compared the bulk anisotropy and plane anisotropy coefficients for each elastic modulus of AIn_2As_2 under pressure (see Fig. <ref>). The anisotropy coefficients of the hexagonal phase (P6_3/mmc) mostly exhibit isotropic features and are distributed around the red dashed line in Fig. <ref>. For G and E of the hexagonal phase, the pressure will somehow enhance their degree of anisotropy. 
In contrast, the degree of anisotropy of LC and v shows robustness to the pressure. G in hexagonal phase AIn_2As_2 at all pressures and monoclinic phase AIn_2As_2 at 0 GPa exhibit equal bulk and plane anisotropy coefficients [see Fig. <ref>(a)]. The pressure will break the equilibrium of equal bulk and plane anisotropy coefficients for the monoclinic phase, and the enhancement of the bulk anisotropy mainly comes from the two-plane anisotropy enhancement of xy and xz. In contrast, the G anisotropy of the yz plane is robust for pressure, which does not become significantly more extensive due to pressure, as shown by the yellow rectangle of the P2/m phase in Fig. <ref>(a). Similarly, the yz-plane anisotropy of Young's modulus E and linear compression LC in the monoclinic phase AIn_2As_2 do not become much larger under pressure modulation. In contrast, the xy- or xz-plane E and LC anisotropies are an essential reason for the significant increase in the anisotropy of the bulk E and bulk LC [see Figs. <ref>(b) and <ref>(c)]. Of interest is the monoclinic phase system where LC and Poisson's ratio v appears to have a minimum value of 0 or even negative at 26 GPa, resulting in an anisotropy of infinity [see the dashed hollow rectangles in Figs. <ref>(c) and <ref>(d)]. Although the bulk anisotropy coefficients tend to infinity, there are finite anisotropy coefficients (non-infinity) for SrIn_2As_2 (BaIn_2As_2) for LC and v in the xz-plane and xy-plane, respectively. Moreover, they both undergo a dramatic change in anisotropy under pressure modulation, with the same pattern as the 2D analysis above. Their anisotropy can be described by the two anisotropy constants A_U and A_L in Table <ref>, which can be calculated by Eqs. (<ref>) and (<ref>). The values of A_U and A_L illustrate that BaIn_2As_2 in the hexagonal phase is completely isotropic at 0 GPa and that the pressure can substantially enhance the system anisotropy. In addition, the monoclinic phase's anisotropy is stronger than the hexagonal phase's. Calculating the anisotropy constants leads to an assertion consistent with the previous discussion. §.§.§ Realization of negative Poisson's ratio material Poisson's ratio is the opposite of transverse strain to axial strain when a material is tensile or compressive in a particular direction. NPR materials, also known as auxetic materials, have several excellent properties because of their unique mechanical structure, including superior fracture resistance, shear resistance, sound and energy absorption, dent resistance, and surface isotropy <cit.>. Although NPR is allowed by thermodynamics, this property is rare in crystalline solids <cit.>. NPR is mainly studied in 2D materials and structures, and it is crucial to design a 3D multilevel system that can exhibit NPR under deformation<cit.>. It is difficult to find materials that can show a negative Poisson ratio under both pressure and tension, and it is even rarer to find materials or structures that can have the same NPR performance under tension and compression stresses<cit.>. Using pressure modulation, we observe a NPR phenomenon in AIn_2As_2 with low symmetry P2/m phase. In the case of CaIn_2As_2, for example, the system exhibits a generally NPR behavior in the absence of pressure or at low compressive stresses [see Figs. <ref>(b) and <ref>(c)]. At both tensile pressure of -4 GPa [see Fig. <ref>(a)] and compressive pressure of 26 GPa [see Fig. <ref>(d)], the material rarely exhibits NPR property. 
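The NPR behavior described above can be quantified by scanning the direction-dependent Poisson's ratio ν(n, m) = -(S_ijkl n_i n_j m_k m_l)/(S_ijkl n_i n_j n_k n_l), where S is the compliance tensor obtained by inverting the stiffness matrix; this is essentially the quantity that visualization tools such as ELATE evaluate. The NumPy sketch below is only illustrative: the stiffness matrix is a placeholder, not one of our calculated C_ij sets.

```python
import numpy as np

# Placeholder 6x6 stiffness matrix in Voigt notation (GPa); illustrative only.
C = np.array([
    [60., 20., 18.,  0.,  5.,  0.],
    [20., 70., 22.,  0.,  3.,  0.],
    [18., 22., 75.,  0.,  4.,  0.],
    [ 0.,  0.,  0., 25.,  0.,  2.],
    [ 5.,  3.,  4.,  0., 22.,  0.],
    [ 0.,  0.,  0.,  2.,  0., 28.],
])
S6 = np.linalg.inv(C)  # compliance in Voigt notation (1/GPa)

# Expand to the full rank-4 compliance tensor S_ijkl (Voigt factors 1, 1/2, 1/4).
voigt = {(0, 0): 0, (1, 1): 1, (2, 2): 2, (1, 2): 3, (2, 1): 3,
         (0, 2): 4, (2, 0): 4, (0, 1): 5, (1, 0): 5}
S = np.zeros((3, 3, 3, 3))
for i in range(3):
    for j in range(3):
        for k in range(3):
            for m in range(3):
                p, q = voigt[(i, j)], voigt[(k, m)]
                fac = (0.5 if p > 2 else 1.0) * (0.5 if q > 2 else 1.0)
                S[i, j, k, m] = fac * S6[p, q]

def poisson(n, m):
    """nu(n, m) for longitudinal direction n and transverse direction m (m perpendicular to n)."""
    num = np.einsum("ijkl,i,j,k,l->", S, n, n, m, m)
    den = np.einsum("ijkl,i,j,k,l->", S, n, n, n, n)
    return -num / den

# Random scan over orthogonal direction pairs; a dense angular grid would be used
# to pin down the exact directions of the minimum.
rng = np.random.default_rng(0)
best = np.inf
for _ in range(20000):
    n = rng.normal(size=3); n /= np.linalg.norm(n)
    m = rng.normal(size=3); m -= (m @ n) * n; m /= np.linalg.norm(m)
    best = min(best, poisson(n, m))
print(f"minimum directional Poisson ratio ~ {best:.3f}  (negative means auxetic)")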
We predict that this NPR material can be widely used for many applications in medical devices, cushioning and protective equipment, intelligent sensors, and defence industries. §.§.§ Thermal Properties Analysis and Enhance in Phase Transition Temperature Experimentally synthesized BaIn_2As_2 (P2/m) has different structural phases from CaIn_2As_2 and SrIn_2As_2 (P6_3/mmc). The above first-principles calculations based on absolute zero (T= 0 K) conditions give detailed results of the structural phase transition. However, the thermodynamic physical picture of the structural phases at high temperatures is still blurred. Here, we calculate the dependence of thermodynamic parameters on temperature between the hexagonal phase (P6_3/mmc) and the low-symmetry monoclinic phase (P2/m). The specific heat at constant volume C_𝐯, the vibrational entropy S_vib(T), the internal energy U_vib(T), and the Helmholtz free energy F(T) of individual harmonic oscillator and its difference Δ F_P2/m-P6_3/mmc(T) between two phases are given as Eqs. (<ref>)-(<ref>). To investigate the mechanism of the response of the above-mentioned thermal parameters to temperature under pressure, we compared the thermodynamic curves of the two phases P6_3/mmc and P2/m under pressure, as shown in Figs. S13 and S14<cit.>. As the pressure increases, both phases show an increase in free energy F (red curve), a decrease in entropy S (blue curve), and a convergence of the heat capacity C_V to a constant (green curve). As shown by the arrows in the enlarged diagram in the right column of Figs. S13 and S14<cit.>, the intersection of heat capacity and entropy tends to move towards higher temperatures as the pressure increases, except for BaIn_2As_2 in the P2/m phase. This exception may be due to lattice distortion inducing a large phonon dispersion spectrum of imaginary frequencies at Γ (shown in Fig. S12(i) within the SM<cit.>). The phonon frequencies of each system of the P2/m space group corresponding to Fig. S12 within the SM<cit.> at the point Γ are shown in Fig. S20(a) within the SM<cit.>. It is easy to find that CaIn_2As_2 at 0 GPa, BaIn_2As_2 at 16 GPa, and AIn_2As_2 at 26 GPa all have large imaginary frequencies. The acoustic and optical branches for each frequency correspond to the irreducible representation and activity (Raman or IR) are compared in Table S1. Unlike other systems where the acoustic branch consists of A_u+2B_u, the acoustic branch of CaIn_2As_2 has B_g involved in the absence of pressure, and the B_u IR activity is squeezed to the fourth branch (-0.95 cm^-1), leading to the dynamic instability of the system. With the application of pressure, the phonon dispersion spectrum expands and shifts toward high frequencies while CaIn_2As_2 opens a gap near 100 cm^-1. As shown in Fig. S15(a) within the SM<cit.>, the curve of entropy increase indicates that the vibrational entropy favors a monoclinic phase of AIn_2As_2 over a hexagonal phase. The vibrational entropy difference (Δ S_P2/m-P6_3/mmc) between the two phases increases rapidly at low temperatures (≤250 K), and then the trend moderates as temperature increases to 3000 K. To quantitatively analyze the vibrational entropy, we give the temperature-dependent characteristic curves of free energy difference including the vibrational entropy [shown in Fig. S15(b) within the SM<cit.>]. At low temperatures, the free energy difference between the monoclinic phase and the hexagonal phase is positive, implying that the hexagonal phase is relatively stable. 
When the temperature increases, the vibrational entropy favors the monoclinic phase: the entropic term (-T Δ S(T) < 0) becomes large enough to compensate for the 0 K energy difference (Δ E > 0), the free energy difference becomes negative (Δ F(T) < 0), and the phase transition from the hexagonal to the monoclinic phase occurs. The transition temperatures are 160 K, 156 K, and 148 K for CaIn_2As_2, SrIn_2As_2, and BaIn_2As_2, respectively. The hexagonal and monoclinic phases are thus the low- and high-temperature phases, respectively, which is inconsistent with the experimental reports that, at high temperature, CaIn_2As_2 and SrIn_2As_2 are hexagonal while BaIn_2As_2 is monoclinic<cit.>. This discrepancy may be caused by defects or lattice distortions at high temperature. As shown in Fig. <ref>, pressure can effectively raise the structural phase transition temperature of AIn_2As_2 above 273 K. At 0 and 16 GPa, the phase transition temperature decreases with increasing ionicity of the A atom, which is related to the strength of the interatomic chemical bonds. In particular, the SrIn_2As_2 system reaches a higher transition temperature of 324 K at 26 GPa. Our results provide pressure and temperature guidance for the experimental synthesis of AIn_2As_2 in specific structural phases. Although the heat capacity at low temperatures varies with pressure (see green curves in Figs. S13 and S14<cit.>), the heat capacity of a given phase eventually converges to the same constant, independent of pressure and of the A element, satisfying the Dulong-Petit limit at high temperatures. Examining the high-temperature behavior more closely, we find that none of the systems reaches the Dulong-Petit limit near 1000 K (see Fig. S17 within the SM<cit.>). Except for CaIn_2As_2 (P2/m) at zero pressure, whose imaginary phonon frequencies cause its heat capacity curve to fall below the 16 GPa case, all systems show a reduced high-temperature heat capacity value after pressurization. Even when the temperature increases to 3000 K, CaIn_2As_2 in the P2/m phase at 0 GPa still cannot reach the Dulong-Petit limit and levels off (see Fig. <ref>), whereas the remaining systems without pressure do cross the Dulong-Petit limit. However, as shown by the red dashed line in Fig. <ref>(b), applying a pressure of 16 GPa pushes the heat capacity curve beyond the Dulong-Petit limit. §.§.§ Zero-group-velocity behavior of phonon mode softening and phonon thermal conductivity prediction To study in depth the thermal transport properties and the sources of dynamical instability of AIn_2As_2 at high pressure, we calculated the phonon group velocities, as shown in Fig. <ref>. Figures <ref>(a)-<ref>(c) show that the hexagonal phase (P6_3/mmc space group) is stable both in the absence of pressure and at the induced zero-band-gap pressure. Pressure spreads the group velocities towards both lower and higher frequencies, making the distribution more divergent. In addition, both phases of AIn_2As_2 show a tendency for the group velocities in the medium- and high-frequency regions to increase with pressure (see Fig. <ref>). Moreover, the low-frequency acoustic branches are the main contributors to the phonon thermal conductivity in all systems.
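For orientation, the group velocity plotted in Fig. <ref> is simply v_g = ∂ω/∂q; when only a dispersion curve ω(q) along a path is at hand, it can be estimated by finite differences, as the short NumPy sketch below illustrates. The file name and unit conventions are placeholders used only for illustration.

```python
import numpy as np

# Hypothetical export of one phonon branch along a reciprocal-space path:
# column 0 = cumulative path coordinate |q| in 1/Angstrom, column 1 = frequency in THz.
data = np.loadtxt("branch_dispersion.dat")
q_norm, freq_THz = data[:, 0], data[:, 1]

omega = 2.0 * np.pi * freq_THz * 1e12   # angular frequency (rad/s)
q_SI = q_norm * 1e10                    # 1/Angstrom -> 1/m

# Central finite differences give v_g = d(omega)/dq along the path.
v_g = np.gradient(omega, q_SI)          # m/s
print(f"group velocity range on this branch: "
      f"{v_g.min()/1e3:.2f} to {v_g.max()/1e3:.2f} km/s")
```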
However, compared to the hexagonal phase, the monoclinic phase of AIn_2As_2 has larger group velocities in the low-frequency region, and the group velocities in the medium and high-frequency areas are all roughly distributed in the range of 2-3 km/s. The pressure drives a virtual frequency in the low-frequency region because of the appearance of softened phonon modes, which induces a ZGV [see Figs. <ref>(d)-<ref>(f)]. The phonon thermal conductivity depends on the group velocity with the relation κ = Cv_g^2τ , where τ is the average relaxation time. The thermal conductivity can be initially predicted from v_g^2. Figures S18 and S19<cit.> give images of the frequency dependence of the squared group velocity v_gi^2 (i=x, y, z) in the three directions of the hexagonal and monoclinic phases AIn_2As_2 under pressure. Closely related to the crystal structure, the P6_3/mmc phase has similar group velocity evolution curves in the x and y directions. Therefore, for the P6_3/mmc phase, we refer to the thermal conductivity transported within the octahedral inner layer as the in-plane thermal conductivity. Along the z-direction, we refer to the out-of-plane thermal conductivity. The out-of-plane thermal conductivity of the hexagonal phase AIn_2As_2 is slightly larger than the in-plane thermal conductivity. In the low-frequency region, thermal conduction is more favoured along out-of-plane. However, the fluctuation phenomenon of the out-of-plane thermal conductivity is more pronounced, with zero thermal conductivity behavior in specific frequency regions, and thermal conductivity is frequency selective. Therefore, due to the octahedral lattice's hindrance, the out-of-plane thermal conduction behavior of the hexagonal phase AIn_2As_2 is not as easy. For the monoclinic phase, as shown in Fig. S19 within the SM<cit.> AIn_2As_2 has thermal conduction anisotropy in three directions, and the y direction is the main direction of thermal conduction (for CaIn_2As_2). This is because both the P2/m phase structure along the x and z directions must traverse the A-As octahedral lattice [see the lattice structure in Fig. <ref>(c)]. In contrast, along the y direction, thermal conductivity is possible through the interstices of the octahedral lattice. For the monoclinic phase, the contribution of thermal conduction in the z direction is also significant to a certain extent. In summary, the x-direction is the most difficult direction for thermal conduction in the monoclinic phase, which is related to the smallest C_11 elastic constant (see Fig. <ref>) The pressures all enhance the group velocity for AIn_2As_2 systems, leading to a shift of the group velocity towards high and low frequencies. The ZGV phenomenon resulting from shifting the monoclinic phase phonon spectrum towards lower frequencies under pressure directly reflects the softening of the phonon modes. The phonon frequency distribution of the monoclinic phase under pressure and the structure with atomic sites are given as shown in Fig. S20 within the SM<cit.>. We focus on the CaIn_2As_2 (0, 26 GPa), SrIn_2As_2 (26 GPa), and BaIn_2As_2 (26 GPa) systems that produce significant imaginary frequencies. The group velocity at the imaginary frequency (IFGV) of CaIn_2As_2 under pressure absences is mainly contributed by the In-2n position (red dashed circle in Fig. S20(b) within the SM<cit.>) and the As-2m position (blue dashed circle in Fig. S20(b) within the SM<cit.>). 
Most of the IFGV of CaIn_2As_2 under 26 GPa originates from not only the atomic contributions from the two Wyckoff sites mentioned above, but also the In-2n site of the cyan colour and the green-coloured As-2n site. For the IFGV of SrIn_2As_2 and BaIn_2As_2 at 26 GPa, there is also a contribution from the A atom in addition to the In and As atom contributions. The IFGV of SrIn_2As_2 under 26 GPa is mainly contributed by the Cyan-coloured In-2n site, the blue-coloured As-2m site and the A_2-1c site. BaIn_2As_2, on the other hand, is primarily contributed by the black-coloured In-2m site, the green-coloured As-2n site, the A_1-1d site and the A_2-1c site. The main contributing atoms to the ZGV phenomenon produced by the softening of the phonon vibrational modes are also presented in Fig. S20(c) within the SM<cit.>. We can clearly find that the phonon mode softening is critically due to atomic vibrations across the zero frequency near the In-In chain ( [101] direction in Fig. S20 within the SM<cit.>). As seen in Table S1<cit.>, for the CaIn_2As_2 (0 GPa) system, the relatively large imaginary frequencies (below -0.08 cm^-1) are mainly contributed by IR-active A_u and Raman-active B_g. For AIn_2As_2 under 26 GPa, on the other hand, the virtual frequencies are all mainly contributed by B_u. Therefore, softening the phonon modes at high pressure weakens the IR vibrational modes, A_u and B_u. It is interesting to note that the strange imaginary frequency of CaIn_2As_2 at 0 GPa also originates from the appearance of the Raman vibrational mode B_g, which should not have appeared in the acoustic branch. We can predict that along between the A-As octahedral layers (In-In atomic gaps) is the direction of maximum probability of phonon softening and thermal conduction in the monoclinic phase AIn_2As_2. § CONCLUSION Based on DFT calculations, we have predicted the structural phase transition of AIn_2As_2 materials under pressure and characterised their mechanical and thermal properties. Firstly, enthalpy of formation and phonon spectroscopy calculations confirm the structural phase transition of AIn_2As_2 under pressure. Moreover, the low-pressure phase of Ca(Sr)In_2As_2 materials is hexagonal, while the high-pressure phase is monoclinic. But BaIn_2As_2 always prefers to form monoclinic phases. Next, we deeply analyse the symmetries of different space groups, propose the structural phase transition path of P6_3/mmc→R3̅m→C2/m→P2/m with C2/m as the intermediate phase, and establish the physical correlation behind the structural phase transition. We then also obtain a variety of elastic moduli based on the elastic stiffness matrix and further analyse the crystal anisotropy, chemical bonding properties, hardness, toughness and other mechanical properties of the P6_3/mmc phase and the P2/m phase AIn_2As_2. Among them, we have deeply investigated the crystal anisotropy transition of AIn_2As_2 series materials based on pressure. Pressure will induce a transition from isotropy to anisotropy in the AIn_2As_2 of the hexagonal phase. Pressure will also enhance the crystal anisotropy in the monoclinic phase. In addition, the bulk anisotropy of these mechanical parameters (G, E, LC, v) depends differently on the plane anisotropy of the xy,yz, and xz planes. We also find that pressure will induce a transition from brittle to ductile in the AIn_2As_2 of the monoclinic and hexagonal phases. And it is found that AIn_2As_2 can be transformed into NPR materials under both compressive and tensile stresses. 
At the same time, we predict the hardness of different structural phases of AIn_2As_2 that depend on the band gap. On the other hand, we postulate that downward pressure can effectively raise materials' structural phase transition temperature and report their thermal properties such as heat capacity, entropy and free energy. Pressure is favoured to enhance the heat capacity profile of the softened monoclinic CaIn_2As_2 to reach the Dulong-Petit limit. Thus, we determined that Ca(Sr)In_2As_2 is hexagonal at low pressure. BaIn_2As_2 enjoys a monoclinic phase but will be in the NPMP phase with similar energies of the monoclinic and hexagonal phases if stretched. At low temperatures, AIn_2As_2 materials prefer to form the hexagonal phase, but they will transform into a monoclinic phase under high temperatures. Moreover, the pressure is favorable to increase the transition temperature of the structural phase. A theoretical basis is laid for a better study of the thermoelectric properties of AIn_2As_2. In a nutshell, our study confirms the mechanical properties and thermal behavior behind the structural phase transition of this family of materials. § ACKNOWLEDGMENTS We acknowledge the financial support by the National Natural Science Foundation of China (No. 11874113) and the Natural Science Foundation of Fujian Province of China (No. 2020J02018). The work was carried out at National Supercomputer Center in Tianjin,and the calculations were performed on TianHe-1(A). figuresection tablesection equationsection § COMPUTATIONAL DETAILS §.§ Phonon and Thermodynamic Properties Calculation For the phonon calculation, the density functional perturbation theory (DFPT) in PHONOPY<cit.> was applied to combine with VASP in the structures of the P6_3/mmc, R3̅m, and P2/m space groups by the 2×2×1, 2×2×2, and 1×2×1 supercells, respectively. Thermodynamic properties, including heat capacity, internal energy, entropy, and Helmholtz free energy, were calculated by the following equations<cit.>: C_𝐯 =k_B/N_q∑_q, j(ħω_q j/2 k_B T)^2cosech^2(ħω_q j/2 k_B T), U_vib(T) =1/N_q∑_q, jħω_q j[1/e^ħω_q j / k_B T-1+1/2] S_vib(T) =k_B/N_q∑_q, j[/ħω_q j/k_B T(e^ħω_q j/k_B T-1)-ln(1-e^-ħω_q j/k_B T)] F(T) =1/N_q∑_q, j[ħω_q j/2+k_B T ln(1-e^-ħω_q j / k_B T)] Δ F_P2/m-P6_3/mmc(T)=Δ E+Δ U_vib(T)-T Δ S(T). where k_B is the Boltzmann constant, N_q is the number of wave vectors q, and ω_q j is the vibrational frequency of the phonon mode qj. Δ in equation (<ref>) denotes each physical parameter difference, where ΔE is the total energy difference calculated by VASP. §.§ Mechanical Properties Characterization §.§.§ Elastic Moduli and Mechanical Stability Criteria The elastic modulus formulas for the hexagonal and monoclinic phases are from Ref. <cit.> and Ref. <cit.>, respectively. The mechanical stability criterion is from Ref. <cit.>. The Voigt Reuss-Hill<cit.> approximation is the arithmetic mean of the Voigt<cit.> and Reuss bounds<cit.>. B denotes the bulk modulus, G denotes the shear modulus, E denotes Young's modulus, and v denotes Poisson's ratio. According to the Voigt-Reuss-Hill approximation<cit.>, X_H = (1/2)(X_R+X_V), X = B, G. Furthermore, Young's modulus E and Poisson's ratio v are derived from Eq. (<ref>): E = 9BG/3B + G, v = 3B-2G/6B+2G. The independent elastic stiffness constants C_ij of hexagonal phase include C_11, C_33, C_44, C_12, and C_13. 
The modulus can be described as follows: B_V=1/9[2(C_11+C_12)+4 C_13+C_33], G_V=1/30(M+12 C_44+12 C_66), B_R=C^2/M, G_R=5/2(C^2 C_44 C_66)/3 B_V C_44 C_66+C^2(C_44+C_66), where M=C_11+C_12+2 C_33-4 C_13, C^2=(C_11+C_12) C_33-2 C_13^2. The mechanical stability criteria are given via C_44>0, C_11>|C_12|, (C_11+2 C_12) C_33>2 C_13^2 . As for monoclinic phase, the independent C_ij can be indicated to C_11, C_22, C_33, C_44, C_55, C_66, C_12, C_13, C_23, C_15, C_25, C_35, and C_64. The modulus can be described as follows: B_V=1/9[C_11+C_22+C_33+2(C_12+C_13+C_23)], G_V= 1/15[C_11+C_22+C_33+3(C_44+C_55+C_66)- (C_12+C_13+C_23)]. B_R= Ω[a(C_11+C_22-2 C_12)+b(2 C_12-2 C_11-C_23) +c(C_15-2 C_25)+d(2 C_12+2 C_23-C_13-2 C_22) +2 e(C_25-C_15)+f]^-1, G_R= 15{[4a(C_11+C_22+C_12)+b(C_11-C_12-C_23)+ c(C_15+C_25)+d(C_22-C_12-C_23-C_13)+ e(C_15-C_25)+f]/Ω+3[g/Ω+C_44+C_66/C_44 C_66-C_64^2]}^-1, a= C_33 C_55-C_35^2,b=C_23 C_55-C_25 C_35, c= C_13 C_35-C_15 C_33, d=C_13 C_55-C_15 C_35, e= C_13 C_25-C_15 C_23, f= C_11(C_22 C_55-C_25^2)-C_12(C_12 C_55-C_15 C_25) +C_15(C_12 C_25-C_15 C_22)+C_25(C_23 C_35-C_25 C_33), g= C_11 C_22 C_33-C_11 C_23^2-C_22 C_13^2-C_33 C_12^2+2 C_12 C_13 C_23, Ω= 2[C_15 C_25(C_33 C_12-C_13 C_23)+C_15 C_35(C_22 C_13- C_12 C_23+C_25 C_35(C_11 C_23-C_12 C_13)]-[C_15^2(C_22 C_33- C_23^2)+C_25^2(C_11 C_33-C_13^2)+C_35^2(C_11 C_22-C_12^2)]+g C_55 . The criteria for mechanical stability are given via C_11>0, C_22>0, C_33>0, C_44>0, C_55>0, C_66>0, [C_11+C_22+C_33+2(C_12+C_13+C_23)]>0, (C_33 C_55-C_35^2)>0, (C_44 C_66-C_46^2)>0, (C_22+C_33-2 C_23)>0, [C_22(C_33 C_55-C_35^2)+2 C_23 C_25 C_35-C_23^2 C_55 -C_25^2 C_33]>0, {2 [C_15 C_25(C_33 C_12-C_13 C_23)+C_15 C_35(C_22 C_13-C_12 C_23) +C_25 C_35(C_11 C_23-C_12 C_13)]-[C_15^2(C_22 C_33-C_23^2) +C_25^2(C_11 C_33-C_13^2)+C_35^2(C_11 C_22-C_12^2)]+gC_55}>0 . §.§.§ Crystal anisotropy calculation Since Zener anisotropy<cit.> and Chung-Buessem anisotropy<cit.> indices are only applicable to cubic crystals, we used the universal anisotropy index A_U<cit.> and log-Euclidean anisotropy index A_L<cit.> for the anisotropy analysis of P6_3/mmc and P2/m phases. A_U takes into account all the stiffness coefficients to define the anisotropy, exploiting the tensor nature of the elastic stiffness. The specific expression is shown in Eq. (<ref>). The expression for A_L with respect to the modulus of elasticity is given by Eq. (<ref>). A_U=5 G_V/G_R+B_V/B_R-6 A_L =√([ln(B_V/B_R)]^2 + 5[ln(G_V/G_R)]^2) The value of the anisotropy parameter (A_U and A_L) is ≥ 0. They characterize the strength of the crystal anisotropy, and their convergence to zero implies that crystal isotropy. §.§.§ Bonding information calculation The Kleinman parameter (ξ) allows evaluation of the stability of the solid under stretching or bending<cit.>, which is defined as: ξ=C_11+8C_12/7C_11-2C_12 ξ = 0 and 1 imply that bond bending and stretching will be dominated, respectively. The Cauchy pressure (P_C) can also be used to describe the brittleness and ductility of a metal or compound. For hexagonal crystal systems, it is defined as P_C^a=C_13-C_44 and P_C^b=C_12-C_66<cit.>. §.§.§ Hardness prediction First-principles calculations provide a good assessment of the various mechanical properties of a solid. However, DFT does not give a reasonable evaluation of hardness directly. 
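Before turning to the hardness models, the Voigt-Reuss-Hill relations and anisotropy indices above can be collected into a short numerical sketch for the hexagonal case. The C_ij values below are placeholders chosen only to make the script self-contained; they are not the constants of Table <ref>.

```python
import numpy as np

# Placeholder hexagonal elastic constants in GPa (illustrative only).
C11, C12, C13, C33, C44 = 80.0, 30.0, 25.0, 70.0, 20.0
C66 = 0.5 * (C11 - C12)                      # hexagonal relation

# Voigt and Reuss bounds, Eqs. (A-) for the hexagonal system.
M = C11 + C12 + 2.0 * C33 - 4.0 * C13
Csq = (C11 + C12) * C33 - 2.0 * C13**2
B_V = (2.0 * (C11 + C12) + 4.0 * C13 + C33) / 9.0
G_V = (M + 12.0 * C44 + 12.0 * C66) / 30.0
B_R = Csq / M
G_R = 2.5 * (Csq * C44 * C66) / (3.0 * B_V * C44 * C66 + Csq * (C44 + C66))

# Hill averages and derived moduli.
B, G = 0.5 * (B_V + B_R), 0.5 * (G_V + G_R)
E = 9.0 * B * G / (3.0 * B + G)
v = (3.0 * B - 2.0 * G) / (6.0 * B + 2.0 * G)

# Universal and log-Euclidean anisotropy indices.
A_U = 5.0 * G_V / G_R + B_V / B_R - 6.0
A_L = np.sqrt(np.log(B_V / B_R)**2 + 5.0 * np.log(G_V / G_R)**2)

print(f"B={B:.1f} GPa, G={G:.1f} GPa, E={E:.1f} GPa, v={v:.3f}")
print(f"A_U={A_U:.3f}, A_L={A_L:.3f}  (both tend to 0 for an isotropic crystal)")
```

The same bookkeeping applies to the P2/m phase with the longer monoclinic expressions given above.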
We predict the hardness based on the following semi-empirical relationships in order to describe the mechanical behavior of AIn_2As_2 more fully<cit.>: H_1a=0.1475 G, H_1b=0.0607 E <cit.>, H_2=0.1769 G-2.899 <cit.>, H_3=0.0635 E <cit.>, H_4=(1-2v) B/[6(1+v)] <cit.>, H_5=2(G^3/B^2)^0.585-3 <cit.>. These semi-empirical relations have been summarized by Ivanovskii<cit.>. Furthermore, Singh et al. calculated the hardness of various materials and compared it with experimental data in order to select the most appropriate semi-empirical hardness model according to the material's space group and band gap<cit.> (see Table <ref>). The ELATools<cit.>, MechElastic<cit.>, and ELATE<cit.> programs were used to calculate the mechanical parameters and to visualize the moduli. § ADDITIONAL RESULTS §.§ Evolution of Lattice Parameters under Pressure The application of pressure first directly changes the lattice parameters of the material. The R3̅m and P6_3/mmc space groups, which both belong to the hexagonal crystal system, show similar pressure-dependent evolution of the lattice parameters. As reported earlier, the lattice constants of AIn_2As_2 with the P6_3/mmc space group decrease with increasing pressure. In addition, the cell angles of the hexagonal crystal system are robust against pressure, always maintaining α = 90°, β = 90°, γ = 120°. However, the monoclinic angle β is very sensitive to pressure. As shown in Fig. <ref>(d), the angle β of the three systems first decreases and then increases with increasing pressure, and this trend is most obvious for SrIn_2As_2 and BaIn_2As_2. The lattice constants a, b, and c all tend to decrease under applied compressive pressure [see Figs. <ref>(a)-<ref>(c)]. The lattice constant a decreases almost linearly with pressure, while b and c follow gradually decreasing curves. Notably, the lattice constant b of BaIn_2As_2 increases again at high pressure. As shown in Fig. <ref>(e), the volume-pressure curves fully reflect the effect of pressure. As with the lattice parameters, the volume correlates positively with the radius of the A atom (BaIn_2As_2 is the largest and CaIn_2As_2 the smallest). As shown in Fig. <ref>(f), the band gaps of all three AIn_2As_2 systems first increase and then decrease under pressure, with a maximum around 10 GPa. Narrow band gaps are often accompanied by non-trivial topological properties arising from band inversion. §.§ Symmetry Transformation of Point Group For the space group R3̅m with point group D_3d (-3m): Γ_acoustic = A_2u + E_u, Γ_optic = 2A_1g + 2A_2u + 2E_g + 2E_u. In total, there are 15 vibrational modes: 5 nondegenerate modes (A_1g and A_2u) and 5 doubly degenerate modes (E_g and E_u). Among them, the optical vibrations 2A_1g + 2E_g are Raman (R) active, while the optical modes 2A_2u + 2E_u are infrared (IR) active. R(A_1g)=[ a d 0; d a 0; 0 0 b ]; R(E_g,1)=[ c 0 0; 0 -c d; 0 d 0 ]; and R(E_g,2)=[ 0 -c -d; -c 0 0; -d 0 0 ]. For the space group P2/m with point group C_2h (2/m): Γ_acoustic = A_u + 2B_u, Γ_optic = 18A_g + 10A_u + 9B_g + 20B_u. In total, there are 60 vibrational modes, all nondegenerate (A_g, B_g, A_u, and B_u). The optical vibrations 10A_u + 20B_u are infrared (IR) active, while the optical modes 18A_g + 9B_g are Raman (R) active. The corresponding mode activity and symmetry at the Γ point are shown in Table <ref>.
R(A_g)=[ a d 0; d b 0; 0 0 c ] ;R(B_g)=[ 0 0 e; 0 0 f; e f 0 ] T= [ -2/3 -1 -4/3 -1/3; 2/3 -1 4/3 1/3; -1/3 0 1/3 -1/6 ] GS= [ 0 -1 2/3 0; 0 -1 -2/3 0; 1 0 -2/3 0 ] ; EAN= [ -1 0 -1; 0 1 0; -1 0 -2; 0 0 0 ]; LC= [ -2/3 -1 -4/3; 2/3 -1 4/3; -1/3 0 -1/3 ] ; EEN= [ 1 0 0 1/2; 0 1 0 0; 0 0 1 0 ]. The structural transformation is performed in Bilbao Crystallographic Server<cit.>. The chain of transformation relations from the R3̅m to the P2/m structure includes three transformation matrices channels (P_i,p)(i=1-6) [see Eq. (<ref>)]. These matrices achieve the symmetric operational transformation from the hexagonal to the monoclinic phase. §.§ Mechanical Properties Characterisation §.§.§ Calculation of elastic constants Stress and strain tend to change the elastic tensor information of the solid materials, so it is crucial to study the mechanical properties of materials under pressure, such as Young's modulus, shear modulus, p-wave modulus, Poisson's ratio, anisotropy index, Kleinman's parameter, Cauchy pressure, Pugh's ratio, and hardness information. Our calculated results under all pressures satisfy the criteria for mechanical stability in Appendix <ref>, representing that all AIn_2As_2 systems are mechanically stable. We calculated the elastic constants for the two phases at different pressures as shown in Table <ref>. C_11, C_22 and C_33 denote the linear compression resistance along the a-, b- and c-axes, respectively. For hexagonal phase, C_11=C_22≠ C_33 and C_33 are smaller than C_11 for all systems except BaIn_2As_2, indicating that the c-axis is more compressible than the a-axis and b-axis, which also reflects the weaker chemical bonding in the c-axis than the a-axis and b-axis. In contrast, C_33 is larger than C_11 in BaIn_2As_2 of the hexagonal phase resulting in the c-axis being more incompressible than the a(b) axis, indicating that the c-directional chemical bonding of BaIn_2As_2 is more stable than the a(b)-directional. This easily compressible direction evaluates the maximum probability direction of the structure phase transition. Due to the weaker bonding in the ab plane, the octahedral layers in Fig. <ref>(a) are more easily deformed within the layers than between them. This anomalous behavior of the hexagonal phase BaIn_2As_2 compared to CaIn_2As_2, SrIn_2As_2 perfectly explains the previous experimental result for their different structural phases. As can be seen from Fig. <ref>(a), the bonding strengths differences between the a(b) axis and c axis of CaIn_2As_2 and SrIn_2As_2 are positive, while the bonding in the in-plane (C_11 and C_22) of BaIn_2As_2 is weaker than that in the out-plane (C_33) (about -3.5 GPa). This result implies that CaIn_2As_2 and SrIn_2As_2 are prone to structural phase transitions in the c-direction, while BaIn_2As_2 is prone to structural phase transitions in the in-plane. For the monoclinic phase [see Table <ref> and Fig. <ref>(a)], satisfying C_11≠ C_22≠ C_33, C_11 is smaller than C_22 and C_33 for all systems, and the difference Δ_C_11-C_ij (i=j=2,3) becomes more significant as the pressure is applied except for BaIn_2As_2 which becomes smaller under 26 GPa. Without pressure, C_22 of CaIn_2As_2 is maximum while C_33 of SrIn_2As_2 and BaIn_2As_2 is maximum. With applying pressure, C_22 and C_33 compete, CaIn_2As_2 becomes maximum at 16 GPa for C_33 while SrIn_2As_2 and BaIn_2As_2 reverse to the maximum at 26 GPa for C_22. 
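Comparisons of this kind implicitly assume that each C_ij set satisfies the Born stability conditions quoted above; checking them is mechanical, as the minimal sketch for the hexagonal criteria below shows. The numbers are placeholders, not values from Table <ref>.

```python
def hexagonal_born_stable(C11, C12, C13, C33, C44):
    """Born mechanical-stability criteria for a hexagonal crystal (Appendix)."""
    return (C44 > 0
            and C11 > abs(C12)
            and (C11 + 2.0 * C12) * C33 > 2.0 * C13**2)

# Placeholder constants (GPa), illustrative only:
print(hexagonal_born_stable(80.0, 30.0, 25.0, 70.0, 20.0))   # True  -> mechanically stable
print(hexagonal_born_stable(80.0, 95.0, 25.0, 70.0, 20.0))   # False -> C11 > |C12| violated
```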
In conclusion, the monoclinic phase of AIn_2As_2, especially after applying pressure, has weak bonding in the a-axis, and it is relatively easy to peel in that direction. §.§.§ Elastic modulus analysis The bulk modulus B is a physical measure of the material's ability to resist compression: the more significant the B, the more excellent the resistance to compression and the smaller the compressibility. The B of Hill approximation is related to B_V and B_R, and the B of both two phases can be explicitly calculated by Eqs. (<ref>),(<ref>), (<ref>), (<ref>). As shown in Table <ref>, the B of the monoclinic phase is generally smaller and more compressible than the hexagonal phase, which is related to the low symmetry structure of the monoclinic. The bulk modulus of the hexagonal phase obtained from the elastic constants remarkably agrees with the fit of the B-M equation reported in our previous work<cit.>. The bulk moduli of the present paper (Ref. <cit.>) are 45.779 GPa (46.3 GPa), 43.437 GPa (43.8 GPa), and 41.077 GPa (41.7 GPa) for the hexagonal phases CaIn_2As_2, SrIn_2As_2, and BaIn_2As_2, respectively. Our results also show that pressure can effectively enhance the resistance to compression of both two phases, which can be explained by the decrease in lattice parameters after compression (see Figs. <ref>(a)-<ref>(e) and Fig. S2 in Ref. <cit.>). The shear modulus G reflects the ratio of stress to strain under shear deformation. The larger the G, the greater the resistance to shear deformation. G can also be calculated from Eqs. (<ref>), (<ref>), (<ref>), (<ref>). The relationship between the two structural phases of G and the trend of change under pressure is similar to that of B. The applied pressure can enhance the shear deformation resistance of most of the systems. However, it reduced again that the shear deformation resistance of CaIn_2As_2 at 26 GPa and BaIn_2As_2 (both two phases) under higher pressures. Without pressure, B and G of both space groups decrease as the atomic number of A increases. Young's modulus E is an important index to characterize the stiffness of solid materials reflecting the system's resistance to elastic deformation. Poisson's ratio v reflects the stability of the solid against shear deformation. They can be calculated from B and G by Eq. (<ref>). Also given by Table <ref>, the variation regular of E under pressure is consistent with G and B for the P2/m phase and P6_3/mmc phase, respectively. Poisson's ratio v is stable at -1∼0.5 under linear elastic shear deformation. Based on the data in Table <ref>, we can quickly determine that v is positive and within the stability range, again proving that all systems are mechanically stable. §.§.§ Pressure affects crystal anisotropy results In order to visualize the effect of pressure on each elastic modulus, we calculated their 2D projections in a specific plane (see Figs. S6-S10<cit.>). The hexagonal phase (see Figs. S6 and S7<cit.>) tends to be more isotropic than the monoclinic phase due to the higher symmetry and the neater octahedral lattice. Especially with the most significant isotropy for each BaIn_2As_2 mechanics without pressure (compare with Figs. S6 and S10 within the SM <cit.>). When the pressure modulates AIn_2As_2 as a zero-bandgap solid material, the linear compression maintains isotropy. Since the pressure values of induced zero bandgaps for CaIn_2As_2 and SrIn_2As_2 are weak, the variation of each mechanical quantity is not significantly different from that under no pressure. 
However, we can see that a pressure of 10.555 GPa will induce the change of G, E, v of BaIn_2As_2 from isotropic to anisotropic. For CaIn_2As_2 in the monoclinic phase (see Fig. S8 within the SM<cit.>), the following conclusions can be drawn as the pressure increases: 1. The anisotropy of the G-minimum positive (green curve) and E-maximum positive increases, especially in the xy(001) and xz(010) planes. This phenomenon is because the difference between C_22 (C_33) and C_11 increases sharply under pressure. 2. The maximum positive value of linear compression changes from almost isotropic to polarized in the x-direction (a-axis). The maximum positive value in the yz-plane disappears due to the pressure effect. In contrast, the minimum negative value polarization along the z-direction (c-axis) appears under 26 GPa pressure (red curve). 3. At 26 GPa, the minimum NPR phenomenon occurs (red curve in Fig. S8 within the SM<cit.>). For SrIn_2As_2 and BaIn_2As_2 in the monoclinic phase of Figs. S9 and S10 within the SM<cit.>, the more obvious difference is that the pressure-induced linear compression at 26 GPa has a minimum negative value along the y-direction (b-axis), but not z-direction. Second, the maximum value of linear compression and the minimum positive value of v (green curve) are observed in the yz plane under 26 GPa, which are not visible in CaIn_2As_2. Also, the anisotropy of the maximum positive value of v (blue curve) for the CaIn_2As_2 and SrIn_2As_2 regimes at 26 GPa is weaker than that of BaIn_2As_2 in yz plane. Furthermore, we calculate the 3D space-dependent mechanical quantities (G, E, B and v) for the two phases as BaIn_2As_2 (see Figs. S1-S5<cit.>). It can be visualized that the AIn_2As_2 of the intrinsic hexagonal phase is indeed highly isotropic, while the monoclinic phase exhibits anisotropy. §.§.§ Chemical bonding, brittleness and hardness prediction The Pugh's ratio (G/B) or B/G ratio defines the ductility or brittleness of a solid. With B/G= 1.75 as the threshold value, a material with B/G1.75 is considered ductile, while the opposite is considered brittle. From Table<ref>, we can find that the AIn_2As_2 system without pressure behaves as brittle, while B/G increases and transforms into ductile after applying pressure. The ξ parameter evaluates whether the material is bending-dominated or stretch-dominated. When the value is close to 0, the system is bending-dominated, and close to 1, it is tensile-dominated. As seen in Table<ref>, the monoclinic phase has a larger ξ than the hexagonal phase, and the larger the A atomic number, the larger the ξ. Moreover, all the AIn_2As_2 systems we consider, whether pressurized or not, exhibit stretching dominance. As demonstrated in Table <ref>, Cauchy's pressure P_C can effectively assess the type of chemical bonding. The monoclinic phase AIn_2As_2 tends to bond in a metallic manner MLB, and the strength of this bonding is proportional to the ionization energy of the A ion. Moreover, the pressure favors the enhancement of the metallic character of the system. The hexagonal phase of AIn_2As_2 has both Cauchy's pressures (P_C^a and P_C^c) negative in the absence of pressure, indicating a tendency to form covalent bonds CLB. In addition, the pressure will reverse the sign of Cauchy's pressure, and the bonding style changes to a metallic bonding-dominated situation. Hardness can adequately describe the mechanical behavior of solids and is one of the critical factors in practical production processes. 
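For reference, the six estimators of Eq. (<ref>) are simple functions of B, G, E, and v and can be evaluated in a few lines; the sketch below uses placeholder moduli rather than our calculated values, and the estimator appropriate for a given system is then selected according to Table <ref>.

```python
def hardness_models(B, G, E, v):
    """Semi-empirical Vickers hardness estimates (GPa), H_1a-H_5 of Eq. (<ref>)."""
    return {
        "H1a": 0.1475 * G,
        "H1b": 0.0607 * E,
        "H2":  0.1769 * G - 2.899,
        "H3":  0.0635 * E,
        "H4":  (1.0 - 2.0 * v) * B / (6.0 * (1.0 + v)),
        "H5":  2.0 * (G**3 / B**2) ** 0.585 - 3.0,
    }

# Placeholder moduli in GPa (illustrative only):
for name, H in hardness_models(B=45.0, G=25.0, E=63.0, v=0.26).items():
    print(f"{name}: {H:5.2f} GPa")
```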
We evaluate the hardness of AIn_2As_2 in different states according to the six semi-empirical formulas of Eq. (<ref>) and the judgment guide of Table <ref>. The Vickers hardness is calculated from H_1b or H_3 for the hexagonal phase of AIn_2As_2 with P6_3mmc space group, semiconductors at 0 GPa (0 E_g 2 eV). As shown in the orange and red curves in Fig. <ref>, they are higher than other calculations. When applying a pressure that induces a zero band gap (E_g=0), the crystal hardness tends to be expressed by H_4, with a reduced hardness (green curve). For the monoclinic phase of the general case, AIn_2As_2 is a semiconductor at 0 GPa and 16 GPa and becomes metallic at 26 GPa. Again, from the results of Table <ref> and Fig. <ref>, we know that the H_5 equation can represent the hardness of the system at 0 GPa and 16 GPa, while H_4 describes the hardness of the system at 26 GPa. In the absence of pressure, the monoclinic phase of AIn_2As_2 has the maximum hardness (indicated by the grey curve). With a pressure of 16 GPa, the hardness is still expressed by H_5, but the hardness decreases by almost half, especially for BaIn_2As_2. At a pressure of 26 GPa, the hardness of the system increases again and is expressed by H_4. We suggest that the change in hardness may have a necessary relationship to the structural phase transition. Overall, our predicted hardness of the AIn_2As_2 material is not high, well below the experimental 96 GPa for diamond<cit.>, but close to that of ZnO (7.2 GPa)<cit.>, which is also a hexagonal phase. In order to observe more comprehensively the effect of pressure on the overall hardness of the crystal, we calculated 3D hardness distributions for BaIn_2As_2 as an example (see Figs. <ref>). The left and right columns of Fig. <ref> show the hardness distributions of monoclinic phase BaIn_2As_2 at 0 GPa and 26 GPa, respectively. The hardness distribution is symmetric about the x-axis when no pressure is applied [see Fig. <ref>(a)], and the symmetry of the hardness distribution is broken when 26 GPa pressure is applied [see Fig. <ref>(d)]. Such an asymmetric transition can be observed more clearly in the side (yz-plane) of Figs. <ref>(b) and <ref>(e). The range of coordinates and the intensity of the contours in Figs. <ref>(a) and <ref>(d) allow determining that the maximum hardness of the 0 GPa crystal is higher than 26 GPa, consistent with the hardness relationship predicted quantitatively earlier. In addition, it can be found that the pressure application induces a shift in the hardness distribution of BaIn_2As_2 from clustering in the center of the crystal to dispersion in the y-direction. Figures <ref>(c) and <ref>(f) show the projection of hardness in the xy plane. A comparison of the localized peak in Fig. <ref>(c) with the "fishtail" hardness relationship in Fig. <ref>(f) shows that pressure does weaken the hardness localization. §.§.§ Analysis of the degree of ΔF-T linear correlation We are concerned that at high temperatures (≥ 2500 K), the free energy difference of the pressure-absent AIn_2As_2 system exhibits almost a linear decrease with temperature and has different slopes (see the enlarged figure in the upper right of Fig. S15(b) within the SM<cit.>). The temperature dependence of the energy difference between the two structural phases changes from parabolic to linear, and this monotonically decreasing relationship indicates that the phase transition from hexagonal to monoclinic has fully realised at ultrahigh temperatures. 
The degree of tilt of the curve depends on the radius of the A atom. To better illustrate the linearity, we compared the slopes over the full temperature range (0-3000 K) with the high-temperature linear slopes of the three systems; the results are shown in Fig. S16 within the SM<cit.>. From the temperatures at which the slope error reaches 5% (2400 K, 2450 K, and 2420 K for CaIn_2As_2, SrIn_2As_2, and BaIn_2As_2, respectively), it can be seen that, within this error tolerance, all three systems exhibit a linear slope above about 2500 K.
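The fitting procedure behind this analysis can be reproduced schematically: evaluate Δ F(T) from the harmonic expressions of Eqs. (<ref>)-(<ref>) on the phonon frequencies of the two phases and fit the high-temperature tail with a first-order polynomial. The NumPy sketch below only illustrates that workflow; the frequency files, the Δ E value, and the 2500 K cutoff are placeholders.

```python
import numpy as np

HBAR = 1.054571817e-34   # J s
KB = 1.380649e-23        # J/K

def helmholtz(freqs_THz, T):
    """Harmonic vibrational free energy, Eq. (<ref>): (1/N_q) sum_qj [hw/2 + kT ln(1 - e^{-hw/kT})]."""
    f = np.asarray(freqs_THz)                 # shape (N_q, n_modes), in THz
    w = 2.0 * np.pi * f * 1e12                # rad/s
    w = np.where(w > 0, w, np.nan)            # zero/imaginary modes are simply skipped in this sketch
    x = HBAR * w / (KB * T)
    per_mode = 0.5 * HBAR * w + KB * T * np.log1p(-np.exp(-x))
    return np.nansum(per_mode) / f.shape[0]   # divide by the number of q-points

# Hypothetical mesh frequencies (THz) for the two phases and a placeholder 0 K energy difference:
f_hex = np.loadtxt("mesh_freqs_P63mmc_THz.dat")
f_mono = np.loadtxt("mesh_freqs_P2m_THz.dat")
dE = 1.2e-20                                  # Delta E in J, monoclinic minus hexagonal (placeholder)

T = np.linspace(300.0, 3000.0, 271)
dF = dE + np.array([helmholtz(f_mono, t) - helmholtz(f_hex, t) for t in T])

# Linear fit of the high-temperature tail (T >= 2500 K), as in Fig. S16.
mask = T >= 2500.0
slope, _ = np.polyfit(T[mask], dF[mask], 1)
print(f"high-T slope of Delta F: {slope:.3e} J/K")
```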